Open Educational Resources
Open educational resources (OERs) made a dramatic appearance with the 2002 debut of MIT’s Open Courseware initiative. In the almost two decades since, OERs have not noticeably disrupted the traditional business model of higher education or affected daily teaching approaches at most institutions. This is regrettable, since OERs could unify and advance the disconnected developments in digital textbooks, MOOCs, and blended and flipped classroom pedagogies by forming the essence of a global, open enterprise learning content management system.
A number of factors may impede usage in Switzerland and elsewhere, including a lack of professional development for instructors; in addition, there are hurdles due to the current technological infrastructure. Four such infrastructure hurdles are discoverability, quality control, bridging the last mile, and acquisition. Overcoming these hurdles is the impetus behind openLCMS.
Note: the remainder of this chapter is based on an earlier position paper in EDUCAUSE Review.
Hurdle 1: Discoverability
While modern search engines generally do a good job finding appropriate content to answer specific questions, they are ill suited to find educational content that builds on prerequisite knowledge and takes the learner the next step toward mastery of a certain concept. This is why most digital libraries and repositories prefer to maintain their own cataloging data (“metadata”) and why attempts were made to standardize metadata specifically for educational content (e.g., Dublin Core). However, these catalogs are frequently incomplete, populated by automated harvesting processes without regard to educational context, or, even worse, crippled by entries that are simply wrong. While diligence and investment in catalog maintenance could remedy those shortcomings, a more systemic problem is the absence of sequencing data: the lack of defined taxonomies and association rules makes it unclear which resources build on which other resources.
Many of the repositories, even inside of uniﬁed eﬀorts like the Open Courseware Consortium, remain disconnected from each other. Even if repositories are nominally connected through federated search, as in the National Science Digital Library (eventually renamed “Distributed Learning”), this frequently means ﬁnding the least common denominator of the available metadata. The resulting search results are frequently no better than a search on the open web — regrettable, since these projects house excellent content resources.
Particularly for OERs, the current type of static metadata is not a good ﬁt: authors of OERs are notoriously negligent about ﬁlling out metadata ﬁelds. For free content, with few exceptions (notably MIT Open Courseware), there is no infrastructure for anybody else to do the cataloging. Thus, this type of static metadata is essentially useless, and educators cannot ﬁnd the content they need.
The solution for this problem could be surprisingly simple: dynamic metadata based on crowdsourcing. As educators identify and sequence content resources for their teaching venues, this information is stored alongside the resources, e.g., “this resource was used before this other resource in this context and in this course.” This usage-based dynamic metadata is gathered without any additional work for the educator or the author. The repository “learns” its content, and the next educator using the system gets recommendations based on other educators’ choices: “people who bought this also bought that.” The result is a data-driven recommendation system, based on dynamic metadata. Simple? No, currently impossible, because the deployment of a resource is usually disconnected from the repository: content is downloaded from a repository and uploaded into a course management system (CMS), where it is sequenced and deployed. There is no feedback loop.
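The crowdsourcing idea above can be made concrete with a small sketch. Assuming (hypothetically) that the repository logs the ordered playlist of resources each educator deploys, simple co-occurrence counts over those sequences already yield “used after this” recommendations; all identifiers and the input format here are illustrative, not an actual openLCMS API.

```python
from collections import defaultdict

def build_cooccurrence(course_sequences):
    """Count how often resource b is deployed directly after resource a
    across all course playlists (hypothetical input format)."""
    follows = defaultdict(lambda: defaultdict(int))
    for resources in course_sequences.values():
        for a, b in zip(resources, resources[1:]):
            follows[a][b] += 1
    return follows

def recommend_next(follows, resource_id, k=3):
    """Return the k resources most often deployed right after `resource_id`."""
    ranked = sorted(follows[resource_id].items(), key=lambda kv: -kv[1])
    return [resource for resource, _ in ranked[:k]]

# Illustrative usage data: three courses at different institutions.
courses = {
    "phys101-msu": ["intro-vectors", "kinematics-1d", "quiz-1d"],
    "phys1a-uci":  ["intro-vectors", "kinematics-1d", "kinematics-2d"],
    "mech-eth":    ["intro-vectors", "units-review", "kinematics-1d"],
}
follows = build_cooccurrence(courses)
print(recommend_next(follows, "intro-vectors"))  # -> ['kinematics-1d', 'units-review']
```

The point of the sketch is that no educator filled out a metadata form: the “metadata” is a byproduct of normal deployment, which is exactly what the missing feedback loop would enable.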
Hurdle 2: Quality Control
Quality control has traditionally been the forte of publishing companies: editors and reviewers carefully go over the content to eliminate not only typos, but thoroughly check facts, formulations, and conceptual correctness. Errors in the materials can be very painful when teaching a class, particularly when it comes to homework or exams. Educators thus place high value on quality control. Once again, OERs are at an apparent disadvantage, usually lacking editorial staﬀ. Some repositories thus resort to explicit peer review — generally a good approach, but not a scalable one.
If an educator chooses a resource, that also is peer review. This type of peer review is not punitive in nature; instead, it provides explicit peer approval and only implicitly the lack thereof. If many students in many courses work successfully through the resource, reliability is established. Particularly for assessment resources, diﬃculty, time-on-task, and other analytics can be gathered to establish their reliability and viability. If explanatory content is used in the context of assessment content, learning eﬀectiveness can be established by looking at intervening content accesses between failed and successful attempts to solve a problem, essentially data-mining the access paths. All of this data once again contributes to the dynamic metadata of the resources to establish quality, search rankings, and the base for recommender functionality.
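As a minimal sketch of the analytics described above, assume de-identified transaction records of the form (problem id, student hash, seconds on task, solved); the field names and numbers are illustrative, not LON-CAPA’s actual schema. Difficulty and time-on-task then fall out of simple aggregation:

```python
from statistics import median

# Hypothetical de-identified assessment transactions:
# (problem_id, student_hash, seconds_on_task, solved)
transactions = [
    ("proj-motion-3", "s1", 240, True),
    ("proj-motion-3", "s2", 610, False),
    ("proj-motion-3", "s2", 180, True),
    ("proj-motion-3", "s3", 900, False),
    ("unit-convert-1", "s1",  60, True),
    ("unit-convert-1", "s3",  95, True),
]

def problem_stats(transactions):
    """Aggregate per-problem difficulty and time-on-task from usage logs."""
    by_problem = {}
    for pid, student, secs, solved in transactions:
        rec = by_problem.setdefault(
            pid, {"students": set(), "solvers": set(), "times": []})
        rec["students"].add(student)
        rec["times"].append(secs)
        if solved:
            rec["solvers"].add(student)
    return {
        pid: {
            # Share of students who never solved the problem.
            "difficulty": 1 - len(r["solvers"]) / len(r["students"]),
            "median_time": median(r["times"]),
        }
        for pid, r in by_problem.items()
    }

print(problem_stats(transactions))
```

The same transaction stream, joined with logs of content accesses between a failed and a subsequent successful attempt, is what would feed the learning-effectiveness mining mentioned above.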
Quality control and recommender functionality need not be limited to educators choosing materials for their learners; eventually, the system itself could choose content individually and adaptively. For decades, such a system has been the goal of many initiatives, but it has remained largely elusive because, once again, deployment is disconnected from the repository. Once again, there is no feedback loop (see Fig. 1.1).
Ask an OER provider how much impact they have — how many learners they actually reach with their content — and they usually don’t know. Cannot know, really. One download could mean thousands of learners, or zero if the faculty member subsequently decides to not use the content after all.
The disconnect has yet another consequence: If a mistake is found in a resource and corrected, the downloaded copy inside of some CMS is still wrong. There is no way to push the improved version of a resource to the learners, so even if there is quality control, its fruits do not necessarily reach the learners. One can circumvent this particular problem by linking directly into the repository, but aside from the problem of possible stale links, assessment in turn could not send performance data to the course’s grade book — the disconnect goes both ways.
Hurdle 3: Last Mile
For the vast majority of course topics, all of the information that would be in textbooks is freely available in online format. But this information is scattered, embedded into other contexts, or of the wrong granularity — how can an instructor serve it to students in an organized fashion, coupled with meaningful assessment? Traditional insular CMSs oﬀer little assistance; they represent a bottleneck on the last mile between OERs and learners. The process of downloading content from a repository and uploading it to a course management infrastructure, besides being clumsy, is not necessarily in the skill repertoire of the average faculty member. Also, in many repositories, content already exists in context: there are menus, links to other content, branding features, even banner ads. Without major eﬀort (at times prohibited by copyright or restrictive licenses), this content cannot be disentangled from its habitat. Leaving the repository’s context in place, however, will likely have students drifting oﬀ into cyberspace. For content to be truly reusable and re-mixable, it needs to be context-free. The CMS should establish the context: sequencing, menus, and branding.
openLCMS will work standalone, but also plug into existing CMSs like Moodle, Blackboard, or Brightspace via a standard LTI interface. Thus, the last mile is bridged while students can continue working within familiar environments.
Formative assessment content should be embedded into explanatory content so that both learners and educators get meaningful feedback as they work through the curriculum. Graded formative assessment should be embedded into the materials and feed directly into the grade book — the content should drive and control the CMS, not the other way around.
Implementing these concepts moves course management to true learning content management; i.e., it requires a learning content management system (LCMS). For OERs in particular, educators select content resources from multiple authors across multiple institutions, and the content immediately becomes available to learners with CMS navigation and grade book integration (Fig. 1.2).
Hurdle 4: Acquisition
For OERs to make a difference in traditional higher education, traditional higher-education faculty must be convinced to contribute teaching materials. But either the OER movement is ahead of its time, or its pure ideology is unrealistic. Or perhaps the majority of faculty have a different understanding or expectation of “openness” that keeps them from contributing. It is thus worthwhile to look at definitions of “openness” and then at associated sources of faculty reluctance.
The authority on “openness” is Creative Commons, which is also almost two decades old. The organization provides open licenses that codify the legal rights to reuse, revise, remix, and redistribute educational resources. To be an OER, a resource must be in the public domain or released under an open license that permits its free use and repurposing by others. Here, “public” or “others” includes learners, which is one of two major sources of faculty anxiety: What about homework, exam, and other assessment content? Can students see exam content? Can they see homework solutions, and can they publish solution guides?
One may argue that if an assessment outcome boils down to a simple, shareable answer key, it is not authentic and not addressing any real-world problems. Most faculty would agree, but what can instructors do when they have 300 students in a lecture hall and little or no assistance in grading?
What faculty might expect here is another kind of “openness” — open to their peers, i.e., other faculty. Those peers can in turn assign assessment content, but faculty still control what students see. Assessment content, particularly if electronically graded, is not “open source.” A repository must not only preserve the integrity of the entrusted content but also honor its stewardship obligations. Current OER licenses have no provision for this kind of openness, and repositories have no way of enforcing such restrictions once content is deployed outside their systems. To address this concern, the platform also needs to provide a means of controlling roles and privileges, so that the identity of content gatekeepers can be verified.
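The “open to peers, not to students” model reduces to a role-and-privilege check. The sketch below is purely illustrative: the role names, action names, and the identity-verification flag are hypothetical placeholders for whatever institutional identity mechanism a real platform would use.

```python
# Hypothetical privilege table: instructors get everything students get,
# plus gatekept actions on assessment content.
PRIVILEGES = {
    "student":    {"view_content", "attempt_assessment"},
    "instructor": {"view_content", "attempt_assessment",
                   "view_solutions", "deploy_assessment"},
}

# Actions that additionally require a verified institutional identity,
# so that a student cannot self-register as an "instructor".
GATEKEPT = {"view_solutions", "deploy_assessment"}

def can(role: str, action: str, identity_verified: bool = False) -> bool:
    """Return True if `role` may perform `action` under the gatekeeping rule."""
    if action in GATEKEPT and not identity_verified:
        return False
    return action in PRIVILEGES.get(role, set())

assert can("student", "view_content")
assert not can("student", "view_solutions", identity_verified=True)
assert not can("instructor", "view_solutions")  # identity not yet verified
assert can("instructor", "view_solutions", identity_verified=True)
```

Note that verification is a property of the person, not the role string: even a correctly labeled instructor account cannot reach solution keys until the platform has confirmed who they are.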
Another source of faculty unease is professional credit. Journals have impact factors and citation indices, and textbooks have sales ﬁgures, but OERs thrown out into the open have none of these. When it comes to annual evaluation time, even if the educational materials license requires attribution, there is no reporting back of actual impact. Additional anxiety is caused by the possibility of derivative works: who keeps track of the trail, once the materials have left the repository? Thus, once again, the disconnect between repository and deployment becomes a hindrance to wider use.
The vision outlined here is not new. It goes back to a 2003 EDUCAUSE National Learning Infrastructure Initiative (NLII, now ELI) workgroup on the Next Generation Course Management System — but no such system has emerged in the intervening decades. Why?
The first question to answer is whether a “supersized” CMS is feasible. Certainly, a monolith comprising a learning content repository, sequencing tools, integrated course management, and feedback mechanisms would be too ambitious and unwise to build. Instead of a monolith, the proposal is to establish an ecosystem in which all of the pieces can be put into place.
Some might argue that there is no room for a new CMS, which is why one of the modes in which openLCMS can function is as a plugin to existing CMSs. Yet openLCMS will also offer the services needed to run standalone.
There are privacy concerns: a cross-institutional system must not let private student data end up in places where it has no business. Depending on an institution’s interpretation of applicable laws, student data may not even be allowed to leave campus, so a hybrid model between local installations (possibly in the form of appliances) and global infrastructure and services is needed. We believe that such a system is possible and desirable:
- Educators will be able to identify and sequence the best of granular open, proprietary, and commercial content into educational playlists for their learners.
- As educators contribute, reuse, and remix content, they will build educational experiences that diﬀer fundamentally from static e-texts: dynamic online course packs that combine targeted and proven learning content with eﬀective assessment and analytics.
- Building on the power of data mining, crowdsourcing, and social networking, the platform will form, nurture, and support collaborative communities of practice of educators around the world.
- The system will provide an end-to-end solution from digital library functions, digital rights management, recommendation and sequencing tools, all the way to the course management functionality required to immediately deploy the online course packs: streamlined, eﬃcient, reliable.
Such a system provides a habitat for OERs, an ecological system in which they can thrive alongside other species of educational content.
Existing Model System
A model system does exist. The LON-CAPA system established in 1999 is currently used at 160 diﬀerent institutions. It implements all layers of the architecture described and currently hosts 660,000 reusable learning resources, including 290,000 randomizing online problems (see Fig. 1.3).
So far, 7,700 courses have relied on this resource pool, producing a signiﬁcant amount of dynamic metadata. There is de-identiﬁed analytics data from 138 million assessment transactions, and over 2.8 million weighted association rules between content resources were extracted. This led to a use-based taxonomy and a prototype recommender system. Clearly, dynamic metadata is far richer than any kind of static metadata attached to content resources.
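A minimal sketch of how such weighted association rules can be mined from course usage data: treating each course as the set of resources it deployed, a rule a → b is weighted by its confidence, the fraction of courses using a that also use b. The course and resource identifiers are invented for illustration, and real mining (as in LON-CAPA’s case) would of course involve support thresholds and far larger data.

```python
from collections import Counter
from itertools import combinations

# Hypothetical usage data: course id -> set of resources it deployed.
course_resources = {
    "c1": {"r1", "r2", "r3"},
    "c2": {"r1", "r2"},
    "c3": {"r1", "r3"},
    "c4": {"r2", "r3"},
}

def association_rules(course_resources, min_conf=0.5):
    """Mine pairwise rules a -> b weighted by confidence = P(b used | a used)."""
    item_count = Counter()
    pair_count = Counter()
    for resources in course_resources.values():
        item_count.update(resources)
        pair_count.update(combinations(sorted(resources), 2))
    rules = {}
    for (a, b), n in pair_count.items():
        for x, y in ((a, b), (b, a)):      # rules in both directions
            confidence = n / item_count[x]
            if confidence >= min_conf:
                rules[(x, y)] = confidence
    return rules

rules = association_rules(course_resources)
print(rules[("r2", "r1")])  # confidence that a course using r2 also uses r1
```

Aggregated over thousands of courses, such rule weights are exactly the kind of dynamic metadata that no hand-maintained catalog could supply.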