Background
I work for a software manufacturer that is looking for ways to certify our users, not just with written tests but based on actual use of the software. If we are going to certify users, we want to know they are following the correct process and doing it in a timely manner. Proctored exams are difficult from a manpower perspective and leave a lot of room for error. When we heard about the Experience API (xAPI) and started to talk about it, we thought it had potential.

The Plan
We started by looking at having our software generate xAPI statements as the user works through the software. We can also timestamp those clicks, so we can tell how long the user takes to perform each action. Now, let's take a step back for a minute: our software has thousands of fields, checkboxes, and radio buttons, and capturing every one of those clicks would produce a LOT of data. In our mock-up and test system we have limited the data collected to two events: entering a panel and creating an entity. This also let us simplify the verbs we use in the beginning. For entering a panel (a panel is where most of the work in the software is performed), we use the verb "interacted".
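As a rough sketch of what one of these statements could look like when built in code (the actor email, activity ID, and helper function here are hypothetical; "interacted" is a standard ADL verb):

```python
import json
from datetime import datetime, timezone

def build_statement(user_email, verb_id, verb_display, activity_id, activity_name):
    """Assemble a minimal xAPI statement with a UTC timestamp."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{user_email}"},
        "verb": {
            "id": verb_id,
            "display": {"en-US": verb_display},
        },
        "object": {
            "objectType": "Activity",
            "id": activity_id,  # hypothetical activity ID for illustration
            "definition": {"name": {"en-US": activity_name}},
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

stmt = build_statement(
    "user@example.com",
    "http://adlnet.gov/expapi/verbs/interacted",
    "interacted",
    "http://example.com/panels/mesh",
    "Mesh Panel",
)
print(json.dumps(stmt, indent=2))
```

The timestamp travels with the statement itself, which is what later lets us measure how long the user spent between actions.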

[Image: sample of a panel in the software]
So the statement is "User" interacted with "panel name". Then, when the user clicks the create button (the Mesh button in the example above) to create an entity, the verb used is "created", so the statement is "User" created "entity". We also get a command that runs when the panel is exited. Right now it runs with the verb "created", which is not ideal, but it is OK for initial testing. We can also timestamp the commands to see how long it takes the user to complete each action.

The timestamps give us one more thing: the ability to see whether the user runs the same command back to back without exiting the panel. Seeing this would show us that the user is struggling in a particular panel.

With that plan, we could give the user the requirements for the test and the expected result and then let them go. We would have an expert create a baseline set of statements that we could compare the user's test against to ultimately decide whether they are certified. If they are not certified, we could suggest content that might help them improve their skills.
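The back-to-back detection described above could be sketched like this (the sample statement stream is invented for illustration):

```python
from datetime import datetime

# Simplified statement stream: (timestamp, verb, object) tuples
statements = [
    ("2024-01-01T10:00:00", "interacted", "Mesh Panel"),
    ("2024-01-01T10:00:45", "created", "mesh-1"),
    ("2024-01-01T10:01:10", "created", "mesh-2"),  # same verb back to back
    ("2024-01-01T10:02:00", "interacted", "Boundary Panel"),
]

def find_repeats(stmts):
    """Flag consecutive statements with the same verb (possible user struggle),
    along with the seconds elapsed between them."""
    repeats = []
    for prev, cur in zip(stmts, stmts[1:]):
        if prev[1] == cur[1]:
            t0 = datetime.fromisoformat(prev[0])
            t1 = datetime.fromisoformat(cur[0])
            repeats.append((cur[1], (t1 - t0).total_seconds()))
    return repeats

print(find_repeats(statements))  # [('created', 25.0)]
```

A real version would also group by panel, since a repeat only signals struggle when the user has not exited the panel in between.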
Where are we now? 
We have the software creating the statements and writing them to a text file.
[Image: Statement 1, an "interacted with panel" statement]
[Image: Statement 2, a "created entity" statement]
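A minimal sketch of how statements might be appended to a text file, one JSON object per line, so the log can later be replayed into the LRS (the file layout and helper names are assumptions, not our actual format):

```python
import json
import os
import tempfile

def log_statement(stmt, path):
    """Append one statement per line (JSON Lines) so the file is easy to replay."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(stmt) + "\n")

def read_statements(path):
    """Read every statement back, e.g. to forward them to the LRS later."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

# Demo with a temporary file
path = os.path.join(tempfile.mkdtemp(), "statements.log")
log_statement({"verb": "interacted", "object": "Mesh Panel"}, path)
log_statement({"verb": "created", "object": "mesh-1"}, path)
print(len(read_statements(path)))  # 2
```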
We have the database for the LRS created and will start working on the front-end GUI very soon. Once we have that, we will try it out in our training room during a live class. This will give us a good test of the LRS and the statements that are generated. We will also have a sample "expert" run created to compare results against. This will give us a great test of all the systems and allow us to tweak before deployment.
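One simple way the comparison against the expert baseline could work is sequence similarity over the two statement streams. Here is a sketch using Python's standard difflib; the verb:object encoding and the sample runs are placeholders, and a real scorer would likely weight timing as well:

```python
from difflib import SequenceMatcher

# Each run reduced to a sequence of "verb:object" strings (hypothetical data)
expert = [
    "interacted:Mesh Panel",
    "created:mesh",
    "interacted:Boundary Panel",
    "created:boundary",
]
user = [
    "interacted:Mesh Panel",
    "interacted:Mesh Panel",  # extra visit: user re-entered the panel
    "created:mesh",
    "created:boundary",
]

# ratio() returns a similarity score between 0.0 and 1.0
similarity = SequenceMatcher(None, expert, user).ratio()
print(f"{similarity:.2f}")  # 0.75
```

A certification decision could then be a threshold on this score, with low-scoring runs routed to suggested learning content.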
The future….
We would like to take the data this can generate and build a system that gives users immediate, productive help when they need it. We will be able to tell which panels a user has gone through in a session. Using this data, we can suggest help topics, videos, blog posts, and so on when they ask for help. We are looking at an icon the user would press to activate this function. We could also use this data to figure out how the interface can be improved and which panels our users frequent.
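One way such a help lookup could work is a simple mapping from panels visited in the session to relevant content (the panel names and resources below are entirely hypothetical):

```python
# Hypothetical mapping from panel names to help resources
HELP_CONTENT = {
    "Mesh Panel": ["video: meshing basics", "blog: common meshing pitfalls"],
    "Boundary Panel": ["doc: boundary conditions overview"],
}

def suggest_help(panels_visited):
    """Collect help resources for every panel seen in the current session."""
    suggestions = []
    for panel in panels_visited:
        suggestions.extend(HELP_CONTENT.get(panel, []))
    return suggestions

print(suggest_help(["Mesh Panel", "Boundary Panel"]))
```

The same session data could feed usage analytics, since counting panel visits across users shows which parts of the interface get the most traffic.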
We still have a lot of work to do and I will continue to blog our progress as we go forward.