"Quality" in Open Education

Training quest 3 has us exploring "ideals of quality" across two of the largest/highest profile open education initiatives.  I hear "quality" and immediately think in terms of comparative worth--excellence along any number of dimensions from durability to fit to taste and texture. While I could easily write a post about OLI's Modern Biology animations or student argumentation skills in MIT's Seminar in Ethnography and Fieldwork, discussions of quality as a global characteristic don't seem particularly fruitful here. But what if we think instead in terms of the first definition of quality: "an essential or distinctive characteristic, property, or attribute." Instead of value, then, quality is more about values.

So, what do MIT and OLI value? What do they consider the essential or distinctive characteristics of what they're trying to do, of who they are as organizations?

Organizational values are often expressed as culture and manifest in incentive and reward systems. While we don't have access to most of this information about the major players in Open Education, we do know some of what they are tracking, what they are measuring and exploring. And these evaluation plans and reports give us a window into the values, the ideals of quality, guiding the initiatives today.

The last evaluation report from MIT OpenCourseWare, published in 2005, includes sections on Access, Use, and Impact. Access has to do mostly with site traffic--the site received 8.5 million visits during the evaluation period with a 51% bounce rate; 46% of visitors came from outside the US; 28% were returning visitors, who spent an average of 10 min 41 sec on the site, &c. This seems particularly consistent with part II of the initiative's dual mission: "To extend the reach and impact of MIT OCW and the opencourseware concept."

The Use section explores questions of motivation, usability, and the special permutation of conversion MIT calls "success." Why did each visitor come to the site? Did they accomplish those goals? What materials/resources helped them do that? This section includes responses to questions about video delivery and file type preferences, explorations of student vs. educator vs. self-learner goals, and at least a cursory reading of perceived quality and relevance from users. Part I of the dual mission expresses a fairly strong correlating value: "To provide free access to virtually all MIT course materials for educators, students, and individual learners around the world."

The Impact section gets a little stickier. For this evaluation, the MIT team focused on three questions: What is the impact of OCW on individual teachers and learners? What is the impact of OCW on learning communities? And what is the impact of OCW on the open sharing of educational materials? The first two questions were answered almost exclusively within the MIT campus, through student and faculty interviews. Supplemental data came from questions about perceived positive impact on an opt-in survey completed by 3% of visitors, which the evaluators warn is significantly biased toward returning, international, student visitors. Answers to the third question include national and international media citations, editorials calling for similar efforts at other major universities, and a graphical exploration of the initiative's role within the open education movement along dimensions of tools, content, and implementation resources. Questions exploring the impact on instructional materials seem to support the statement that "MIT OpenCourseWare is an idea - and an ideal - developed by the MIT faculty who share the Institute's mission to advance knowledge and educate students," and questions about impact on the movement itself tie back nicely into that bit about extending the reach and impact of the institution.

According to these metrics, MIT's "ideals of quality" include:

  • Reach (breadth of material, volume of visitors, % of faculty members contributing)
  • Effectiveness (goal-achievement and usability) and
  • Momentum (media coverage, evidence of ripples in open ed, &c)

The Open Learning Initiative hasn't published an evaluation report of the project as a whole. However, their evaluation plans and the research findings they seem to post in place of traditional site traffic or SROI statistics offer similar insights into underlying values.

OLI describes plans for evaluation studies of each course along the following dimensions:

1. Contextual: Where in the learning process are learners interacting with this course? In what relationship to educational institutions? What are the user demographics?
2. Effectiveness: How do student competencies (domain-specific and otherwise) change as a result of their interaction with the course?
3. Component: How well does each element of the course experience function? How do users interact with them?
4. Learning: What advantages do the innovative components of OLI courses offer online learners?
5. Assessment: How do the multiple choice course assessments stack up against psychometric standards?
6. Design: Why were the courses designed the way they were? Are those designs in harmony with sound instructional principles?
7. Dissemination: How adaptable are OLI courses to contexts other than the one they were designed in?

The weight on learning outcomes and the subdivision by course point to a slightly narrower focus and a smaller unit of analysis than MIT's. The increased granularity (assessing the impact of individual learning objects and monitoring the actual usage of online experiments) makes sense given the organization's focus on "innovative online instructional components like cognitive tutors, virtual laboratories, group experiments and simulations." These same differences are reflected in the initiative's mission statement of "working to help the World Wide Web make good on its promise of widely accessible and effective online education." The proposed studies further reinforce the pre-eminence of "crucial elements of instructional design grounded in cognitive theory, formative evaluation for students and faculty, and iterative course improvement based on empirical evidence" in OLI's organizational approach.

The Open Learning Initiative's proposed research and evaluation questions suggest "ideals of quality" that include:

  • Learning (both the learners and the developers, both conceptual and procedural)
  • Theoretical Soundness (alignment with instructional theory and cognitive psychology) and
  • Continual Improvement (all studies are formative, each element is evaluated)

Whether stated objectives or evaluation criteria accurately reflect the values and priorities of an organization is a sticky, multi-dimensional question for another time. Still, it's been an interesting exploration.
