I have been in the learning business longer than I would like to admit. I have seen a lot of changes over the past 20 years, but one thing has remained steady throughout: SCORM.
If you are a learning professional and don’t know about SCORM, you’ve probably been hiding under a rock somewhere. Just in case, allow me to provide a quick refresher: the acronym “SCORM” stands for “Sharable Content Object Reference Model”. It is the mechanism that allows courses to “communicate” with most Learning Management Systems (LMSs), reporting information about a given learner, such as when they started a course, when they completed it, and what score they earned.
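To make that refresher concrete, the entire conversation between a course and the LMS boils down to a handful of JavaScript calls against the SCORM 1.2 runtime API. The sketch below uses a hypothetical mock LMS object for illustration (a real course would locate the LMS-provided `API` object on a parent window); the `cmi.core.*` element names are from the SCORM 1.2 data model.

```javascript
// Hypothetical in-memory stand-in for the LMS-provided API object.
// A real SCO would search window.parent / window.opener for it.
const API = {
  data: {},
  LMSInitialize() { return "true"; },
  LMSSetValue(key, value) { this.data[key] = value; return "true"; },
  LMSGetValue(key) { return this.data[key] ?? ""; },
  LMSCommit() { return "true"; },
  LMSFinish() { return "true"; },
};

// What a typical SCO reports back to the LMS over a session:
API.LMSInitialize("");
API.LMSSetValue("cmi.core.score.raw", "85");          // the learner's score
API.LMSSetValue("cmi.core.lesson_status", "passed");  // pass/fail/completion
API.LMSSetValue("cmi.core.session_time", "00:12:30"); // time in the course
API.LMSCommit("");
API.LMSFinish("");

console.log(API.data["cmi.core.lesson_status"]); // prints "passed"
```

Notice that the whole exchange is a flat key/value store: start, finish, status, score, time. That short list is essentially the ceiling of what the standard tracks.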
An important aspect of SCORM is a conceptual feature called the “Sharable Content Object” (SCO). As the name suggests, a SCO is a piece of learning content that can be reused within or across multiple courses. While SCOs seem like a nice feature in theory, I have never known anyone to use them in a way that could legitimately be called “reusable”. Instead of benefiting course developers, this is part of what makes SCORM a standard that generally misses the mark for creating robust learning applications.
As implied previously, SCORM is not a new concept. It has been the standard for creating LMS courseware since 2000. In fact, SCORM 1.2 (released in 2001) is still used by a staggering number of professional content developers today. Even the most recent update, SCORM 2004 4th Edition, was released over twelve years ago. Generally speaking, no update has improved the standard in any considerable way. Instead, each version simply adds a few more fields for tracking learner data while occasionally including data that is “shareable” across SCOs and courses. To put it briefly, SCORM is a 20-year-old standard whose time has come and gone.
At this point, I imagine you’re asking yourself two questions: “Why am I reading about SCORM on an AI blog?” and “Why criticize a standard that has been the backbone of LMSs for two decades?”
One of the primary reasons SCORM needs to be sunset is that it is hamstringing our collective ability to gather meaningful learner data. If you simply want to collect pass/fail status, scores on traditional “check on learning” assets, or course completion times, SCORM is the standard for you. However, if you are looking to gather a more robust data set (that is, anything beyond what is listed above), you are pretty much out of luck.
Before you ask, yes, I am aware that there are workarounds for achieving broader goals with SCORM. Yet they are just that: workarounds. I have tried them all and still found the need to implement a third-party database to collect the information my client really wanted. Without good, meaningful data, we will never be able to move into the realm of AI/ML as it applies to learning. As disconcerting as it may sound, we as an industry are not preparing for the inevitability of AI. We cannot continue to remain stagnant and watch another innovation pass us by. Failing to make effective changes as the industry evolves around us will only cause us to fall further behind. How much longer before catching up becomes impossible?
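For the curious, the most common workaround is to serialize richer learner data into the one free-form field SCORM 1.2 offers, `cmi.suspend_data`, whose smallest permitted maximum is 4,096 characters. The event names below are hypothetical, just a sketch of how quickly that ceiling becomes the problem for simulators, games, and other data-rich interventions.

```javascript
// Richer interaction data (hypothetical simulator events) that SCORM
// has no dedicated fields for:
const interactionLog = {
  simulatorEvents: [
    { t: 12.4, action: "valve_open", correct: true },
    { t: 30.1, action: "pressure_check", correct: false },
  ],
};

// The workaround: cram it all into one string field.
const payload = JSON.stringify(interactionLog);

// SCORM 1.2's smallest permitted maximum for suspend_data is 4,096
// characters; anything beyond that must be trimmed or thrown away.
if (payload.length > 4096) {
  throw new Error("suspend_data overflow: learner data must be dropped");
}

// In a real course this would be sent to the LMS-provided API object:
// API.LMSSetValue("cmi.suspend_data", payload);
```

A few minutes of simulator activity can blow past that limit, which is exactly why so many of us end up bolting a third-party database onto the side of the LMS.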
The answer to the second question falls along the same lines. If we want to be taken seriously as an industry, we MUST adopt a viable alternative to SCORM, and quickly. We can’t move beyond traditional page-turners and checks on learning without something to track the data being generated by applications such as simulators, games, and VR learning experiences. These interventions produce a ton of data, and capturing it should be our first priority. The development of highly interactive learning applications needs to be supplemented with a robust back-end tracking system, one that provides large data sets to analyze and, in turn, meaningful insights from that data.
One area that would benefit greatly from updating our current standard is Return on Investment (ROI). If we are being honest with ourselves, our industry has always struggled with ROI metrics. In our collective defense, it is a really tough problem. A mechanism that can collect meaningful learner data, paired with the ability to analyze that data, is likely the key. By continuing to rely on SCORM, we keep ourselves trapped in a system that has no hope of collecting the kind of data we need to provide our clients with actual ROI statistics.
If I had to guess, I’m probably in the autumn of my learning career. Before I retire, I would like to see the industry move into the place of prominence it so greatly deserves. To do so, there will need to be a shift in how we see ourselves and our role across a broad array of industries. We should be pushing ourselves to move away from the SCORM standard and toward one of the more flexible standards that have emerged over the last few years. Setting a date to close out SCORM would be a great start.
I will be writing a series of blog posts over the next several months on the challenges of collecting learner data, what is currently being done to mitigate those challenges, and how we should start thinking about using AI/ML to analyze learner information and create more impact-driven learning applications.
Please feel free to reach out to me at my email: email@example.com. I’d love to hear your feedback!