Open Educational Resources

Open educational resources (OERs) made a dramatic appearance with the 2002 debut of MIT’s OpenCourseWare initiative. In the almost two decades since, OERs have not noticeably disrupted the traditional business model of higher education or affected daily teaching approaches at most institutions. This is regrettable, since OERs could unify and advance the disconnected developments in digital textbooks, MOOCs, and blended and flipped classroom pedagogies by forming the core of a global, open enterprise learning content management system.

A number of factors may impede usage in Switzerland and elsewhere, including insufficient professional development for instructors; in addition, there are hurdles rooted in the current technological infrastructure. Four such infrastructure hurdles are discoverability, quality control, bridging the last mile, and acquisition. Overcoming these hurdles is the impetus behind openLCMS.

Note: the listing of these hurdles is based on an earlier position paper in EDUCAUSE Review.

Adoption Hurdles

Hurdle 1: Discoverability

While modern search engines generally do a good job finding appropriate content to answer specific questions, they are ill-suited to finding educational content that builds on prerequisite knowledge and takes the learner the next step toward mastery of a concept. This is why most digital libraries and repositories prefer to maintain their own cataloging data (“metadata”) and why attempts were made to standardize metadata specifically for educational content (e.g., Dublin Core). However, these catalogs are frequently incomplete; populated by automated harvesting processes without regard to educational context; or, even worse, crippled by entries that are simply wrong [5]. While diligence and investment in catalog maintenance could remedy those shortcomings, a more systemic problem is the absence of sequencing data: the lack of defined taxonomies and association rules makes it unclear which resources build on which other resources.

Many of the repositories, even inside of unified efforts like the OpenCourseWare Consortium, remain disconnected from each other. Even if repositories are nominally connected through federated search, as in the National Science Digital Library (eventually renamed “Distributed Learning”), this frequently means finding the least common denominator of the available metadata. The resulting search results are frequently no better than a search on the open web — regrettable, since these projects house excellent content resources.

Particularly for OERs, the current type of static metadata is not a good fit: authors of OERs are notoriously negligent about filling out metadata fields. For free content, with few exceptions (notably MIT Open Courseware), there is no infrastructure for anybody else to do the cataloging. Thus, this type of static metadata is essentially useless, and educators cannot find the content they need.

The solution for this problem could be surprisingly simple: dynamic metadata based on crowdsourcing. As educators identify and sequence content resources for their teaching venues, this information is stored alongside the resources, e.g., “this resource was used before this other resource in this context and in this course.” This usage-based dynamic metadata is gathered without any additional work for the educator or the author. The repository “learns” its content, and the next educator using the system gets recommendations based on other educators’ choices: “people who bought this also bought that.” The result is a data-driven recommendation system, based on dynamic metadata. Simple? No, currently impossible, because the deployment of a resource is usually disconnected from the repository: content is downloaded from a repository and uploaded into a course management system (CMS), where it is sequenced and deployed. There is no feedback loop.
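As a sketch of how such crowdsourced dynamic metadata could work (all data and function names here are hypothetical), the recommendation mechanism reduces to counting how often two resources appear together in educator-built course sequences and suggesting the most frequent companions:

```python
from collections import Counter, defaultdict
from itertools import combinations

def build_cooccurrence(course_sequences):
    """Count how often two resources appear in the same course sequence.

    course_sequences: list of lists of resource IDs, one list per course.
    Returns a dict mapping each resource to a Counter of its companions.
    """
    co = defaultdict(Counter)
    for sequence in course_sequences:
        for a, b in combinations(set(sequence), 2):
            co[a][b] += 1
            co[b][a] += 1
    return co

def recommend(co, resource, n=3):
    """'Educators who used this also used...': top-n companion resources."""
    return [r for r, _ in co[resource].most_common(n)]

courses = [
    ["intro", "vectors", "forces"],
    ["intro", "vectors", "energy"],
    ["vectors", "forces", "energy"],
]
co = build_cooccurrence(courses)
print(recommend(co, "vectors"))  # the resources most often paired with "vectors"
```

A production system would weight these counts by course size, recency, and learner success, but the feedback loop it requires — deployment data flowing back into the repository — is exactly what the current download-and-upload workflow lacks.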

Hurdle 2: Quality Control

Quality control has traditionally been the forte of publishing companies: editors and reviewers carefully go over the content, not only to eliminate typos but also to thoroughly check facts, formulations, and conceptual correctness. Errors in the materials can be very painful when teaching a class, particularly when it comes to homework or exams. Educators thus place high value on quality control. Once again, OERs are at an apparent disadvantage, usually lacking editorial staff. Some repositories thus resort to explicit peer review — generally a good approach, but not a scalable one.

If an educator chooses a resource, that, too, is peer review. This type of peer review is not punitive in nature; instead, it provides explicit peer approval and only implicitly the lack thereof. If many students in many courses work successfully through the resource, reliability is established. Particularly for assessment resources, difficulty, time-on-task, and other analytics can be gathered to establish their reliability and viability. If explanatory content is used in the context of assessment content, learning effectiveness can be established by looking at intervening content accesses between failed and successful attempts to solve a problem, essentially data-mining the access paths. All of this data once again contributes to the dynamic metadata of the resources to establish quality, search rankings, and the base for recommender functionality.
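The access-path mining described above can be sketched as follows, assuming a hypothetical time-ordered event log per learner; content viewed between a failed attempt and the next successful attempt on a problem becomes a candidate explanation that may have helped:

```python
def intervening_content(events, problem_id):
    """Return content resources accessed between a failed and the next
    successful attempt on a given problem: a crude signal of which
    explanatory material may have contributed to learning.

    events: time-ordered list of (kind, resource_id) tuples, where kind is
    'view', 'fail', or 'pass'.
    """
    helped = []
    window = None  # content viewed since the most recent failed attempt
    for kind, rid in events:
        if kind == "fail" and rid == problem_id:
            window = []          # start (or restart) collecting views
        elif kind == "view" and window is not None:
            window.append(rid)
        elif kind == "pass" and rid == problem_id and window is not None:
            helped.extend(window)
            window = None
    return helped

log = [
    ("fail", "p1"), ("view", "notes_momentum"), ("view", "video_impulse"),
    ("pass", "p1"), ("view", "notes_energy"),
]
print(intervening_content(log, "p1"))  # ['notes_momentum', 'video_impulse']
```

Aggregated over many learners, such counts feed the same dynamic metadata that drives quality rankings and recommendations.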

Quality control and recommender functionality are not limited to educators choosing materials for their learners; eventually, the system itself could choose content individually and adaptively. For decades, such a system has been the goal of many initiatives, but it has remained largely elusive because, once again, deployment is disconnected from the repository. Once again, there is no feedback loop (see Fig. 1.1).

Ask an OER provider how much impact they have — how many learners they actually reach with their content — and they usually don’t know. Cannot know, really. One download could mean thousands of learners, or zero if the faculty member subsequently decides to not use the content after all.

The disconnect has yet another consequence: if a mistake is found in a resource and corrected, the downloaded copy inside some CMS is still wrong. There is no way to push the improved version of a resource to the learners, so even if there is quality control, its fruits do not necessarily reach the learners. One can circumvent this particular problem by linking directly into the repository, but aside from the risk of stale links, assessments could then not send performance data back to the course’s grade book — the disconnect goes both ways.

Figure 1.1: OER use as a one-way street. Downloaded OERs get uploaded, sequenced, and deployed in a CMS, frequently without any assessment of learning success. No information flows back to the original asset.

Hurdle 3: Last Mile

For the vast majority of course topics, all of the information that would be in textbooks is freely available in online format. But this information is scattered, embedded into other contexts, or of the wrong granularity — how can an instructor serve it to students in an organized fashion, coupled with meaningful assessment? Traditional insular CMSs offer little assistance; they represent a bottleneck on the last mile between OERs and learners. The process of downloading content from a repository and uploading it to a course management infrastructure, besides being clumsy, is not necessarily in the skill repertoire of the average faculty member. Also, in many repositories, content already exists in context: there are menus, links to other content, branding features, even banner ads. Without major effort (at times prohibited by copyright or restrictive licenses), this content cannot be disentangled from its habitat. Leaving the repository’s context in place, however, will likely have students drifting off into cyberspace. For content to be truly reusable and re-mixable, it needs to be context-free. The CMS should establish the context: sequencing, menus, and branding.

openLCMS will work standalone, but also plug into existing CMSs like Moodle, Blackboard, or Brightspace via a standard LTI interface. Thus, the last mile is bridged while students can continue working within familiar environments.
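As an illustration of what such an LTI integration exchanges (all values below are placeholders, and the cryptographic signing of the token is omitted), an LTI 1.3 resource-link launch carries learner and resource context from the CMS to the tool as JWT claims:

```python
# Sketch of the core claims in an LTI 1.3 resource-link launch: the signed
# token a CMS such as Moodle sends when a student opens embedded content.
# All identifiers and URLs below the claim prefixes are illustrative.
LTI = "https://purl.imsglobal.org/spec/lti/claim/"

launch_claims = {
    "iss": "https://cms.example.edu",   # the launching platform
    "aud": "openlcms-client-id",        # the tool's registered client ID
    "sub": "student-42",                # opaque user identifier
    LTI + "message_type": "LtiResourceLinkRequest",
    LTI + "version": "1.3.0",
    LTI + "deployment_id": "deploy-1",
    LTI + "resource_link": {"id": "week3-momentum"},
    LTI + "roles": [
        "http://purl.imsglobal.org/vocab/lis/v2/membership#Learner"
    ],
}
print(launch_claims[LTI + "message_type"])
```

Because the launch identifies both the learner (pseudonymously) and the resource, the tool can serve the right content while the student never leaves the familiar CMS.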

Formative assessment content should be embedded into explanatory content so that both learners and educators get meaningful feedback as they work through the curriculum. Graded formative assessment should be embedded into the materials and feed directly into the grade book — the content should drive and control the CMS, not the other way around.
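In an LTI Advantage deployment, the grade book feed described above could take the shape of an Assignment and Grade Services (AGS) score message. This hedged sketch only assembles the payload; obtaining an OAuth 2 access token and discovering the platform's line-item URL are omitted:

```python
from datetime import datetime, timezone

def build_ags_score(user_id, score, max_score):
    """Assemble a score message in the shape defined by the LTI Advantage
    Assignment and Grade Services (AGS) specification; POSTing it to the
    platform's line-item /scores endpoint updates the CMS grade book."""
    return {
        "userId": user_id,
        "scoreGiven": score,
        "scoreMaximum": max_score,
        "activityProgress": "Completed",    # learner finished the activity
        "gradingProgress": "FullyGraded",   # no manual grading outstanding
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

msg = build_ags_score("student-42", 8.5, 10.0)
print(msg["scoreGiven"], "/", msg["scoreMaximum"])
```

This is the direction of control the text argues for: the embedded assessment content drives the CMS grade book, rather than the CMS dictating the content.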

Implementing these concepts moves course management to true learning content management; in other words, it requires a learning content management system (LCMS). For OERs in particular, educators select content resources from multiple authors across multiple institutions, and the content immediately becomes available to learners with CMS navigation and grade book integration (Fig. 1.2).

Hurdle 4: Acquisition

For OERs to make a difference in traditional higher education, traditional higher-education faculty must be convinced to contribute teaching materials. But either the OER movement is ahead of its time, or its pure ideology is unrealistic. Or perhaps the majority of faculty have a different understanding or expectation of “openness” that keeps them from contributing. It is thus worthwhile to look at definitions of “openness” and then at associated sources of faculty reluctance.

The authority on “openness” is Creative Commons, which is also almost two decades old. The organization provides open licenses that codify the legal rights to reuse, revise, remix, and redistribute educational resources. To be an OER, a resource must be in the public domain or released under an open license that permits its free use and repurposing by others. Here, “public” or “others” includes learners, which is one of two major sources of faculty anxiety: What about homework, exam, and other assessment content? Can students see exam content? Can they see homework solutions, and can they publish solution guides?

One may argue that if an assessment outcome boils down to a simple, shareable answer key, it is not authentic and not addressing any real-world problems. Most faculty would agree, but what can instructors do when they have 300 students in a lecture hall and little or no assistance in grading?

What faculty might expect here is another kind of “openness” — open to their peers, i.e., other faculty. Those peers can in turn assign assessment content, but faculty still control what students see. Assessment content, particularly if electronically graded, is not “open source.” A repository must not only preserve the integrity of the entrusted content but also honor its stewardship obligations. Current OER licenses have no provision for this kind of openness, and repositories have no way of enforcing such provisions once content is deployed outside their systems. To address this concern, the platform also needs to provide a means for controlling roles and privileges, so that the identity of content gatekeepers can be verified.
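A minimal sketch of such gatekeeping, under an assumed two-tier role model (all names and fields are hypothetical): open content is visible to everyone, while restricted resources such as exam keys are released only to requesters with a verified instructor role:

```python
# Hypothetical role model: resources carry a minimum-role restriction, and
# the repository releases restricted content only to verified instructors.
ROLE_RANK = {"learner": 0, "instructor": 1, "author": 2}

def may_access(resource, user_role, identity_verified):
    """Gatekeeping sketch: open content is visible to all; content marked
    instructor-only (e.g., exam keys) requires a verified instructor."""
    required = resource.get("min_role", "learner")
    if ROLE_RANK[user_role] < ROLE_RANK[required]:
        return False
    # Privileged access additionally requires a verified identity.
    return required == "learner" or identity_verified

exam_key = {"id": "exam1-key", "min_role": "instructor"}
print(may_access(exam_key, "learner", identity_verified=False))    # False
print(may_access(exam_key, "instructor", identity_verified=True))  # True
```

The hard part in practice is not the check itself but verifying instructor identity across institutions, which is why this must live in the platform rather than in the license text.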

Another source of faculty unease is professional credit. Journals have impact factors and citation indices, and textbooks have sales figures, but OERs thrown out into the open have none of these. When it comes to annual evaluation time, even if the educational materials license requires attribution, there is no reporting back of actual impact. Additional anxiety is caused by the possibility of derivative works: who keeps track of the trail, once the materials have left the repository? Thus, once again, the disconnect between repository and deployment becomes a hindrance to wider use.

Figure 1.2: Closing the loop. In an integrated system, usage data and analytics can flow back to the original asset at every stage. These provide quality measures and a basis for recommendation systems. Formative assessment from embedded problems is available to both learners and faculty in a timely manner.