Posted by Kirsty Hughan
Posted 13 December 2017
Last week wrapped up I/ITSEC 2017, the largest training and simulation event in the world. This year, we exhibited and talked with people about how we help solve problems around eLearning standards like xAPI and SCORM (along with the updated DoDI 1322.26). What was notable about this year, for me, was the introduction of a new term: “system of systems.” For those unfamiliar, a system of systems is a collection of systems brought together to create a new, more complex system that’s greater than the sum of its parts. The idea is used throughout organizations and software, but at I/ITSEC, many people were talking about it in the context of xAPI.
It’s safe to say that every DoD agency uses multiple systems for training. They may have one or multiple LMSs, AR tools, VR tools, authoring tools, content management tools, physical simulations, in-person training…the list can go on. Because of the complexity of their ecosystem, they must think strategically about how each system works within the whole. Thus, the idea of a system of systems.
What arose in conversations at I/ITSEC was how well-suited xAPI is for supporting the creation of, and reporting on, a system of systems. xAPI is, at its core, a communication protocol that helps multiple, separate pieces communicate in the same way. Using xAPI, the DoD could connect experiences from in-person training to those in an LMS.
We saw some great tools that leverage modern technology for training, particularly when it comes to AR and VR. Traditionally, each of these tools would be self-contained. But with xAPI and a system of systems approach, each of these tools can become part of a larger plan that connects disparate systems and experiences.
We look forward to learning more about how DoD agencies (or those outside the DoD too!) use xAPI to support the creation of their system of systems. If you ever have any questions about how you can do this or how we’ve helped other clients create their ecosystem, let us know. We like talking about the standards.
Posted by Kirsty Hughan
Posted 11 December 2017
Last Friday, we were excited to be involved in xAPI Party. The celebration marked the end of the xAPI Cohort run by Torrance Learning (their next Cohort starts February 1) and our Director of Products TJ Seabrooks gave a demo. Since many of the folks at the xAPI Cohort were familiar with the free LRS in SCORM Cloud, we wanted to share a story of how we helped one of our clients with a super particular xAPI problem, so TJ shared how to add attachments to an xAPI course in Lectora.
This specific example comes from a large client in a highly regulated industry. Before working with us, their certificates were tied to the end of each course. If a learner were to lose the certificate, they’d have to go back, relaunch the course and redownload the certificate. They needed the ability to maintain and later present certificates for learners so that any administrator in the organization could go into the LRS, view the learners’ scores and download certificates for each learner.
Our solution was to build out a simple reporting system that let them view xAPI attachments as part of a grade book reporting system. xAPI is particularly well-suited to this solution because it is reusable: any content you author can be launched from any LMS that supports xAPI and any LRS can store and fetch the attachments. This is unlike SCORM, with which you would need to build a custom solution that only works for a single system.
Since our customer was already familiar with Lectora, we provided instructions for setting an action in a Lectora course that lets us send our own custom event when we’re finished with the course. If you don’t use Lectora, you could adapt these steps to another authoring tool that supports xAPI pretty easily (and if you’re struggling – just reach out to us).
To include TinCanJS in the project, you need to create an HTML Extension object (Figure 1) and set the “Type” property to “Top of file scripting” (Figure 2).
Click edit (seen in Figure 2 above) and add the TinCanJS file like you would on a webpage (through a <script> tag with a relative URL). Or paste the code in between <script> tags (Figure 3).
Something to note: any code in an HTML Extension has to be valid HTML, so if any JS added isn’t inside a <script> tag, it will break.
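Putting those notes together, the body of the HTML Extension might look like the sketch below. The file name `tincan-min.js` is just an example; point the `src` at wherever you keep the TinCanJS build in your project.

```html
<!-- Everything in an HTML Extension must be valid HTML, so the
     JavaScript lives inside a <script> tag. Reference the library
     with a relative URL... -->
<script src="tincan-min.js"></script>

<!-- ...or paste the library's source directly between script tags: -->
<script>
  /* contents of tincan-min.js pasted here */
</script>
```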
Next, to create the screenshot of the page, use html2canvas. You’ll need to add the html2canvas source code the same way you did for TinCanJS. If you want to capture and send the current document, i.e. the page where you loaded the JS, pass document.body to the html2canvas() function. This function returns a promise, so use a .then() to process the screenshot. The parameter passed to the .then() callback is an HTML canvas object, so to get the content and the content type, we use the canvas.toDataURL() function. The format of a data URL is as follows: data:<content-type>;base64,<content>
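Here’s the capture step as a short sketch. This runs in the browser inside the course, so it’s shown without a runnable example; the “image/png” format argument is our choice and can be changed.

```javascript
// Screenshot the whole page; pass any other HTML element instead of
// document.body to capture just that element.
html2canvas(document.body).then(function (canvas) {
  // canvas is an HTML canvas object; toDataURL() packs the content
  // type and the base64-encoded content into one string
  var dataUrl = canvas.toDataURL("image/png");
  // ...parse dataUrl and build the xAPI attachment from it...
});
```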
You will need to parse this string to get the content-type and the content itself (code for parsing shown in Figure 4). The content will be the raw binary data, which is what should be placed in the content section of the attachment.
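A minimal sketch of that parsing step. The helper name `parseDataUrl` is ours, not from TinCanJS; it splits a base64 data URL, like the one returned by canvas.toDataURL(), into its content type and raw binary payload.

```javascript
// Hypothetical helper (name is ours): split a base64 data URL into
// its content type and its decoded binary content.
function parseDataUrl(dataUrl) {
  // A data URL looks like: data:image/png;base64,iVBORw0KGgo...
  var match = /^data:([^;]+);base64,(.*)$/.exec(dataUrl);
  if (match === null) {
    throw new Error("Not a base64-encoded data URL");
  }
  return {
    contentType: match[1],                    // e.g. "image/png"
    content: Buffer.from(match[2], "base64")  // the raw binary data
  };
}
```

This sketch uses Node’s Buffer for the decoding; in the browser you’d use atob() instead.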
Once you send the statement, you can run queryStatements() on the LRS, which will return a StatementsResult object. When you run queryStatements(), be sure to set the “attachments” flag to “true” (shown in Figure 5). This flag lets the LRS know to return both the statements and any attachments that it has.
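A minimal sketch of that query, assuming an already-configured TinCan.LRS instance named `lrs` (endpoint and credentials omitted). Since it needs a live LRS, it’s shown here without a runnable example.

```javascript
lrs.queryStatements({
  params: {
    attachments: true // ask the LRS for attachment content as well
  },
  callback: function (err, result) {
    if (err !== null) {
      console.log("Failed to query statements: " + err);
      return;
    }
    // result is a TinCan.StatementsResult; result.statements is the
    // array of statements to search for the attachment you want
    console.log("Fetched " + result.statements.length + " statements");
  }
});
```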
Once the StatementsResult object is returned, you can iterate through its list of statements until you find the statement that has the attachment. To download the file, you’ll need to run the following code:
var link = document.createElement("a");
link.download = "Test.png";
link.href = "data:" + <attachment>.contentType + ";base64," + <attachment>.content;
Where <attachment> is the TinCan.Attachment object whose content you want to download. This code reconstructs the data URL, then uses it to download the file. One thing to note is that the file extension in link.download should match the file type of the attachment (if the file is a .jpeg, use .jpeg; if it’s a .pdf, use .pdf, and so on).
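The iteration and reconstruction steps can be sketched as two small helpers. The names are ours, not from TinCanJS; the contentType and content fields mirror the TinCan.Attachment fields used in the download snippet above, with content as a base64 string.

```javascript
// Hypothetical helper (name is ours): return the first statement in a
// result set that carries at least one attachment, or null if none do.
function findStatementWithAttachment(statements) {
  for (var i = 0; i < statements.length; i++) {
    var attachments = statements[i].attachments;
    if (attachments && attachments.length > 0) {
      return statements[i];
    }
  }
  return null; // no statement in this batch carried an attachment
}

// Hypothetical helper (name is ours): rebuild the data URL that the
// download link's href is set to.
function attachmentToDataUrl(attachment) {
  return "data:" + attachment.contentType + ";base64," + attachment.content;
}
```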
This is a basic guide for creating, sending and receiving attachments inside Lectora. It can be completely customized to your needs. For example, the object passed to html2canvas() can be any HTML element, not just the entire document. Therefore, if you only wanted to print a certain <div> on the page, you can pass that element instead.
Also, the TinCanJS library is designed to make sending and receiving attachments easier for the user, so the queryStatements() function handles verifying that each attachment’s content is matched with the rest of its metadata and attached to the correct statement.
If you have any questions implementing these steps or are curious about how we can help you with xAPI in general, reach out to us. In the meantime, we’d recommend checking out the free LRS in SCORM Cloud and the xAPI Open Source libraries.
Posted by Kirsty Hughan
Posted 11 October 2017
Last week, the Department of Defense (DoD) signed the updated DoDI 1322.26 Distributed Learning (DL). The latest DoDI advises all entities within the DoD to procure eLearning technology solutions that are compliant with the SCORM or Experience API (xAPI) specifications.
This Instruction replaces the 2006 version of DoDI 1322.26, “Development, Management, and Delivery of Distributed Learning,” which mandated (as opposed to advised) the use of SCORM in all eLearning technology used by the DoD. With the updated DoDI released, DoD entities can source the right DL solution based on their requirements, as opposed to being limited by the SCORM-focused scope of the older Instruction.
The 2006 DoDI required any DL technology to be SCORM conformant. After xAPI was released in 2013, it was hard for government organizations to purchase modern products as xAPI was not supported by the existing Instruction and there was no way to verify if an xAPI solution conformed to the specification. Now, government organizations have the flexibility to procure the right technical solution based on their requirements, and a means to verify that the products conform to either SCORM or xAPI.
We are excited because this is the culmination of a lot of work for many people at both ADL and Rustici Software. In 2015, we at Rustici were awarded a BAA from ADL to help them revise the 2006 DoDI 1322.26. You can read more about that story on the Rustici Software blog if you’d like.
Lucky for you, ADL recently launched a list of Conformant LRSs as part of their xAPI Adopter Registry. If you’re looking to procure an xAPI conformant LRS, this is a great place to start. If you’re looking for resources about xAPI conformance, check out the official xAPI reference and support resource for DoDI 1322.26.
Posted by Tim Martin
Posted 21 September 2017
A couple of days ago, I wrote about the state of ADL and Rustici Software’s take on it. One of the real community leaders, Aaron Silvers, then shared his perspective, partially in response. If you read them both, you’ll see some overlap and gaps in our responses, but the thing I want to address is that it seemed Aaron was asking a question or making a request of me (Tim?) or Rustici Software in the process.
Important note for those unfamiliar with this space: I work at Rustici Software, a for-profit software company. Since we started working with standards in 2003, we’ve been active within the community and try to build software that spares customers having to deal with the standards. This website, like scorm.com before it, is how we interact with and provide resources to that community.
Aaron may not have been asking these questions, but in order to answer his, I have to explore two questions:
There are two kinds of work that are clear yeses for us.
Number two is a little less obvious, but just as true. Back in the SCORM days, one of the fundamental problems was that it was simply tough to tell what was going on when an LMS launched a piece of content. As good developers do, the venerable Mike Rustici added debugging tools so he could see what was going on. (Keep in mind, this was way back in the days prior to good debugging tools being built directly into the browsers.) Mike was solving a problem he had, but he quickly saw the broader utility of those debugging tools.
We listed that debugging log as a top feature of SCORM Engine from day one. We also decided that it was worth sharing with the world. We wrapped a little bit of code and interface around our core product (SCORM Engine), labeled it SCORM Test Track, and shared it. It’s been subsumed by SCORM Cloud now, but that capability brought thousands of people to Rustici Software and introduced them to things that we do well.
Those debug logs, and Test Track, have had real, lasting, positive impact, for both the community and for us at Rustici Software. If we’re going to do work that fails at number one (making money directly), then we want to have an impact.
For most of the last 15 years, ADL has been the primary organizing force in the corporate elearning standards space. This force is realized in two ways:
This has led to real and important work. Project Tin Can was a successful initial effort on our part, funded entirely by ADL, that led to what you now know as xAPI. Similarly, ADL funded the work that DISC did in 2016-2017 that led to an xAPI profile definition specification. This money from ADL provided incentive, and ADL’s guidance provided direction.
ADL has served as the arbiter, allowing certain things to become a part of the core xAPI specification, and pushing others into other areas (cmi5, for example). They also made decisions about which community projects to highlight, which ones to work from.
Our rules about taking work are somewhat different with regard to standards bodies. On multiple occasions over the last 3 years, work that Rustici has done and offered to the community in various ways (OSS or hosted service) has been passed over or recreated. This includes:
So here’s the crux of it: Based on the current budgetary environment in the US, ADL does not currently have the ability to fund additional research, nor do they have a large number of resources to do work in house. They have retained, however, their position of authority; they decide what’s in, or they do until they don’t.
At some point, we had to start asking ourselves this question: If ADL doesn’t explicitly approve work we’re doing for community use ahead of time with their funding, does it serve us or anyone for us to take on big chunks of work like this? Simply, under what circumstances are we willing to do work to support the community without being paid?
So I have a question for the community… for you, the reader who trudged through just this many words. If we stand up an xAPI Profile Server and a service to test for valid, well-structured xAPI Profiles, on our servers, evolving it at the pace and in the manner we see fit based on the problems expressed to us by our customers and the community, will you use it? Would you allow us to play a significant, central role in that way? And to ADL, would you approve of that?
My sense is that the community would like for us to build these things, but only under very specific conditions.
OK. I’m about 1,000 words into a post and I’ve answered one question. But I’m going to stop here. The answer to this one precedes the answer to the second: How does Rustici Software decide which work to do? We’ll come back to that one in a post we publish next week.* Until then, let us know if you’re open to using tools that we build.
* Update: We are still pulling together our thoughts on which work we plan to do based on conversations with standards folks and our own internal team. This is coming, it’s just going to take a little longer than we thought.
Posted by Tim Martin
Posted 15 September 2017
Shelly Blake-Plock announced last night via LinkedIn that he would be leading a Technical Advisory Group (TAG) for IEEE LTSC (Learning Technologies Standards Committee). This is good news, as Shelly is going to carry a real load in leading that group. In his own words, Shelly describes the work in this way:
Our initial purpose is to create an IEEE technical report as a reference and implementation guide for xAPI 1.0.3. More broadly, we’ll be providing an open place for discussion among xAPI stakeholders and we’ll potentially be making recommendations about needs to support widespread use of the specification based on our activity in writing the report.
Our start point is the xAPI 1.0.3 specification. We’ll discuss all aspects of xAPI such as xAPI Profiles and the relation of xAPI to SCORM and cmi5. The end point is open-ended and in our discussion we will work to define the scope of the TAG.
My version: We’re glad the initial purpose of this group points toward standardization. IEEE stamping xAPI would encourage adoption, particularly outside of the US. It would send a positive message to the community at large that xAPI is a real and complete and adoptable thing.
My priority for this group is to remain focused on the standardization of xAPI 1.0.3, rather than evolution. Broader conversations about profiles and other things that xAPI requires (e.g. evolution of the specification and surrounding specifications) are happening in many venues, and I hope this doesn’t spread the community too thin. Instead, I hope they can successfully take the steps that help IEEE consider it for standardization. This is just step one of many in that regard.
So, thanks to Shelly for leading this. We, as Rustici Software, will be sending along one of our experts to participate as well. Ben Clark played an active role in the evolution of SCORM during the 2000s, and was the true leader on Project Tin Can, which led to the advent of xAPI. He’s pretty well informed.
If you’re the adventurous sort, Shelly has invited all comers. His LinkedIn post will point you in the right direction.