David Wiley Blog Posts from July 30, 2012 to October 18, 2014





http://opencontent.org/blog/

During conversations this week at the semi-annual meeting of the Shuttleworth Foundation Fellows, I was struck by (what is for me) a new way of contextualizing and understanding “open” – as one of a long line of technological innovations that radically improve productivity.

History is filled with technological innovations that have increased our “productivity,” making it significantly less expensive for us to engage in some activity than it had been prior to the innovation. I have often thought of open as being part of the family tree of information technology innovations that includes inventions like writing, the printing press, computers, and the internet. But my previous conceptualization of these inventions was limited to a general notion of “inventions that enable us to do things we couldn’t do before.” This framing does not explicitly consider the impact of open on our productivity in a market sense. It was the juxtaposition of a conversation about sustainability with Fellows Peter Bloom and Johnny West against Jeremy Rifkin’s The Zero Marginal Cost Society, which I recently finished reading, that really catalyzed this new perspective.

Specifically, it hit me as I listened to Peter talk about his work with Rhizomatica, a project that provides open source cellular infrastructure and service in rural Mexico. As opposed to the proprietary approach to cellular infrastructure, in which it might cost $100,000 to put up a radio tower, Rhizomatica can put up a radio tower based on open source software for about $7,500. As opposed to a traditional monthly cellphone bill of $100 or more, Rhizomatica provides cell service for $1.70 per month.

Listening to him talk I was reminded of my own work with Lumen. Whereas it can easily cost a traditional publisher $250,000 to create a textbook under the incumbent, royalty-based content model, we can facilitate faculty creating an OER-based replacement for that textbook for under $10,000. And rather than producing an end product that can cost students $200 or more, we can provide hosting, integration, and support for that OER-based textbook replacement for $5.

In both these cases – textbooks and cellular service – open approaches create productivity gains of between one and two orders of magnitude. Orders of magnitude – meaning the open approach is between 10x and 100x cheaper than the incumbent way of doing things.
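For what it’s worth, a quick back-of-the-envelope check bears this out. The Python sketch below uses only the approximate figures quoted above; it is illustration, not new data:

    # Back-of-the-envelope check of the cost ratios above; the dollar figures
    # are the approximate ones quoted in this post.
    examples = {
        "cell tower (proprietary vs. open)": (100000, 7500),
        "cell service, per month": (100, 1.70),
        "textbook creation": (250000, 10000),
        "textbook price to the student": (200, 5),
    }

    for name, (incumbent, open_cost) in examples.items():
        print(f"{name}: ~{incumbent / open_cost:.0f}x less expensive")

Every ratio lands squarely between 10x and 100x.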

I was already intimately aware of the orders of magnitude impact open can have on the cost of textbooks. But seeing it mirrored back almost perfectly in the case of cell phone infrastructure and service unlocked something for me. One instance is an anomaly, but two starts to look like a trend.

If open can create these orders of magnitude productivity gains in the cases of both textbooks and cellular service, where else does it create them? A second’s reflection surfaces cases like writing software and encyclopedias… But those kinds of examples weren’t what was setting my radar off. There’s something about the idea of cellular service falling prey to these orders of magnitude productivity gains from open (OMPGO for brevity) that feels like a virus jumping from one species to another. Cell service isn’t the kind of thing that’s supposed to be susceptible to OMPGO, at least not intuitively. Something serious is going on here.

Upon reflection I’ve slowly been having this species-jumping realization in my own little microcosm of focus, education. Textbooks are intuitively susceptible to OMPGO, but learning outcomes, assessments, and credentials are not. The extension of open and, consequently OMPGO, to the fundamental pieces necessary to engage in education – learning outcomes, content, assessments, and credentials – makes up what I call the open education infrastructure. The potential impacts of the open education infrastructure on primary, secondary, and higher education are endless. What would our institutions and practices look like if they could be built upon freely available, openly licensed sets of learning outcomes, textbook replacements, assessments, and credentialing mechanisms? What types of alternatives to our traditional institutions would emerge in this fertile ground in which experimentation and innovation becomes orders of magnitude less expensive?

But cellular service… Where else can OMPGO travel? A second conversation with Peter later in the week suggested that it is already impacting energy, clean water, and a range of other functions at the foundation of society. In addition to the open education infrastructure, do we dare begin talking about the open society infrastructure? This got me thinking of Marcin, a Shuttleworth Fellowship alum who is creating the Global Village Construction Set, an open source platform that allows for the easy fabrication of the 50 industrial machines necessary to build a small civilization with modern comforts. Each of his designs bears the characteristic OMPGO signature of being orders of magnitude less expensive than its commercial counterpart (e.g., the open source tractor).

If OMPGO can work its magic on energy, clean water, machines, telecommunications, and education, where else can it go? What kind of matrix or framework could we build that might help us identify other OMPGO opportunities?

Little would make me happier than a fully developed open education infrastructure operating as part of a broader open infrastructure supporting an advanced society – where power, water, phone, internet, education, and other key infrastructure pieces were 10x – 100x less expensive than they are now. What a world that would be! Wouldn’t you like to be part of creating that world?

Postscript.

At the bottom of the OMPGO phenomenon lies a technological innovation called the open license. Open licenses stand in clear opposition to the ultimate viral copyright machinery, the Berne Convention, which automatically forces copyright onto each and every creative work whether the author desires it or not. Rather than envisioning a society built exclusively on protections and royalties, as Berne does, open licenses enable a society also built on sharing and cooperation. (And importantly, these two visions of society are not incompatible – the Internet, unarguably the biggest engine of the modern market, is built almost entirely on an infrastructure comprised of open source software.)

While its contours are still blurry, I can see in the far distance a vision of an entire society built more fully on open infrastructure, with the impact of OMPGO spread generously throughout every sector. It takes my breath away. For now, I’ll keep chipping away on the education part of the problem with the Lumen team and others in the space. But WOW there is so much work to do, in so many different spaces, and so much yet for us to learn from each other across spaces. My conversations with Peter and others at the Shuttleworth meeting helped me appreciate that more than ever.


Another Way of Thinking About Open…

One of the many improvements in my life since I started running has been the number of books I’ve been able to read (trans: listen to). Some of my recent running reading has been swirling around in my head during the semi-annual meeting of Shuttleworth Foundation Fellows I’m currently attending in Malta. This meeting is always a fantastic opportunity to think, rethink, and reconceptualize “open.” No, I haven’t changed my mind about open. But I do think there is an additional way to think about open, another perspective that can add to our understanding of the construct, that I don’t hear people talking about.

One of the arguments in Jeremy Rifkin’s The Zero Marginal Cost Society goes like this:

  • History is filled with technological innovations that increased productivity, meaning that producing a widget after the technology was introduced was significantly cheaper than producing it before it was introduced (cf. the assembly line).
  • In competitive markets where firms compete on price, improved productivity and the resulting lower costs mean that a firm can lower its prices in order to gain advantage over a competitor.
  • We are reaching the point where productivity gains are driving the marginal cost of many products toward zero.
  • When the marginal cost to produce a product tends toward zero, competition makes the consumer price of the product tend toward zero, which ultimately results in profits on that product tending toward zero.
  • When there is very little profit to be made, investors will not put their capital to work in that market because they can receive a greater return by investing it elsewhere.
  • Consequently, in highly competitive markets where technology has created huge productivity gains – driving marginal costs and profits toward zero – capitalism is quickly reaching the point where it has “run its course.” That is, when those with capital are unwilling to invest it in a market, that market can no longer be said to be a part of the “capitalist system.”

Rifkin makes several points in the book, but this observation that ‘capitalism is “eating itself” by investing in the creation of technologies that drive productivity gains so far that margins fall to the point that capital stops flowing into certain sectors’ is fascinating to me.

The Internet, of course, is the quintessential example of a technological innovation that has driven incredible productivity gains. It is now orders of magnitude less expensive to create and offer a whole range of products and services than it was in the days Before Internet.

The aha I had this morning, participating in the breakout group on sustainability, is that you can think of “open” as being a member of this family of technological innovations. That is to say, among the many ways you can think of open, one is as a particularly powerful innovation in a long line of technological innovations that drive productivity gains.

Take the specific context of textbooks. The move from physically setting type to digital authoring created huge productivity gains, the move from physical distribution to online distribution created huge productivity gains, and the move from reliance on proprietary / royalty-based content to openly licensed content created huge productivity gains. Consequently, open textbooks are orders of magnitude less expensive to create than traditional print textbooks, and they’re an order of magnitude less expensive to produce than digital commercial products. (When I say that open textbooks are one or more orders of magnitude less expensive to create than their traditional counterparts, I mean textbooks which you assemble from pre-existing OER – not textbooks you produce from scratch and place an open license on.)

This increased productivity in open textbook creation is one of the many benefits that we will enjoy more fully once the open education infrastructure has been built out more completely. Each layer in the infrastructure is an additional productivity multiplier. Imagine the kinds of things we’ll be able to rapidly build and iterate on… Imagine the kinds of things students will be able to do and iterate on… The costs will be so minimal that the opportunities to be creative in how we approach learning will be nearly unlimited.

I’m not entirely sure what new connections and understandings will arise from placing open in this new context, but I love, Love, LOVE new ways of thinking about and understanding open.


Another Incredible Week for Lumen!

It’s been another incredible week at Lumen – we have more exciting news to share!

First, Lumen has been selected as one of seven winners of the Next Generation Courseware Challenge, a $20 million grant competition from the Bill & Melinda Gates Foundation to build exemplary, affordable digital course materials that improve student success among low-income and disadvantaged learners. This work includes a singularly awesome set of design, research, development, and content partners, together with an amazing consortium of schools that will co-design and test the resulting open, competency-based courses. (http://lumenlearning.com/ann-courseware-challenge/)

Second, yesterday we won the EDUCAUSE Game Changers Business Competition, recognizing the great work our team is doing and validating Lumen’s potential to sustainably grow to impact millions of students, faculty, and institutions. (http://www.educause.edu/annual-conference/exhibitor/educause-game-changers-business-competition)

I’m feeling overwhelmed with gratitude for amazing and talented colleagues, wonderful partners, and great opportunities to do good work. Here’s to more awesomeness!



http://opencontent.org/blog/page/2

In his seminal essay The Cathedral and the Bazaar, Eric Raymond popularized the following quote he attributes to Antoine de Saint-Exupéry:

“Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away.”

For 15 years the makers of learning management systems have been swimming upstream against this truth. They would benefit greatly by meditating on this principle, together with the more general Occam’s Razor and the more specific Zawinski’s Law.

What would an LMS look like if its creators earnestly sought this kind of perfection in design? The system, as shaved by Occam’s Razor, would comprise only four parts:

1. A way to speak to other institutional systems like the student information system (SIS) in order to do things like create courses, populate courses with the appropriate people, and send grades back to the SIS.

2. A way to authenticate the people populated from the SIS.

3. An “LTI” implementation.

4. A data store to hold state information that “LTI” tools might want to share with one another.

(I say “LTI” in quotes because today’s LTI doesn’t fully enable this, but a future LTI could.)

No quizzing engine, no content presentation tool, no e-reader, no repository, no grade book, no discussion forum, no blog, no wiki, no analytics dashboard, no nothing. No user-facing tools or features of any kind. Empty – and interoperable.

In this “nothing left to take away” version, the LMS becomes the smooth, brown, plastic oval of Mr. Potato Head. All of the traditional “features” of the LMS are independent, swappable components that plug in via LTI – the way Mr. Potato Head’s happy eyes are swappable for his angry eyes. Or, if you prefer a more technical analogy, the LMS becomes an operating system like iOS (but hopefully WAY more open) and all previous system features become apps that you can install and uninstall as you will.
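To make the analogy a little more concrete, here is a purely illustrative Python sketch of the four remaining responsibilities. The class and method names are invented for this sketch – they are not any real LMS’s API – and the launch method hand-waves over the actual LTI signing details:

    # Purely illustrative sketch: invented names, not a real LMS or the real LTI spec.
    from dataclasses import dataclass, field

    @dataclass
    class MinimalLMS:
        """The 'nothing left to take away' LMS: four responsibilities, zero features."""
        shared_state: dict = field(default_factory=dict)  # 4. state that "LTI" tools can share

        def sync_with_sis(self, sis_feed):
            """1. Create courses and enrollments from the student information system."""
            return [{"course": c["id"], "roster": c["roster"]} for c in sis_feed]

        def authenticate(self, username, credential, directory):
            """2. Authenticate the people provisioned from the SIS."""
            return directory.get(username) == credential

        def launch_tool(self, tool_url, user, course):
            """3. Hand the user off to an external tool via an 'LTI'-style signed launch."""
            return {"url": tool_url, "user": user, "course": course, "signed": True}

Everything else – quizzes, discussions, gradebooks, analytics – would live in the external tools launched in step 3, not in the LMS itself.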

The fact that many LMS vendors are rushing to enable the creation of app stores confuses me, because they seem to be building the pyre on which their companies will burn. Once users have the option of bypassing core services offered by the LMS and using better versions written by others, they will certainly begin taking advantage of it. (For example, a discussion tool written by a group or company that only does discussion tools will always be better than the discussion tool inside the LMS which is one of 35 features and gets no love or attention.) Users will quickly realize that they have no need for the clunky versions of core tools produced by LMS vendors, and since they are not using them they will demand to stop paying for them in short order. Statements akin to “Put your discussion board in the LTI App marketplace, and if it’s better than the others we’ll pay to use it” will make their way into RFPs. There will be nothing left of the product formerly known as the LMS beyond provisioning courses, authenticating users, and coordinating apps that speak a newer version of “LTI” – because that’s all schools will pay for.

This is a vision of the future LMS I can get behind – interoperability, variety, choice, smaller pieces somewhat loosely joined – though at some point it probably makes sense to stop referring to this thin interoperability layer as a learning management system.


The White House Promotes Open Education

Today President Obama announced that, in addition to the commitments already outlined in the US Open Government National Action Plan, the United States will take additional steps to make government more open, transparent, and accessible for all Americans. The announcement included the following commitments:

Promote Open Education to Increase Awareness and Engagement

Open education is the open sharing of digital learning materials, tools, and practices that ensures free access to and legal adoption of learning resources. There is a growing body of evidence that the use of open education resources improves the quality of teaching and learning, including by accelerating student comprehension and by fostering more opportunities for affordable cross-border and cross-cultural educational experiences. The United States is committed to open education and will:

  • Raise open education awareness and identify new partnerships. The U.S. Department of State, the U.S. Department of Education, and the Office of Science and Technology Policy will jointly host a workshop on challenges and opportunities in open education internationally with stakeholders from academia, industry, and government. The session will foster collaboration among OGP members and other interested governments and will produce best practices to inform good policies in open education.
  • Pilot new models for using open educational resources to support learning. The State Department will conduct three pilots overseas by December 2015 that use open educational resources to support learning in formal and informal learning contexts. The pilots’ results, including best practices, will be made publicly available for interested educators.
  • Launch an online skills academy. The Department of Labor (DOL), with cooperation from the Department of Education, will award $25 million through competitive grants to launch an online skills academy in 2015 that will offer open online courses of study, using technology to create high-quality, free, or low-cost pathways to degrees, certificates, and other employer-recognized credentials. This academy will help students prepare for in-demand careers. Courses will be free for all to access on an open learning platform, although limited costs may be incurred for students seeking college credit that can be counted toward a degree. Leveraging emerging public and private models, the investments will help students earn credentials online through participating accredited institutions, and expand the open access to curriculum designed to speed the time to credit and completion. The online skills academy will also leverage the burgeoning marketplace of free and open-licensed learning resources, including content developed through DOL’s community college grant program, to ensure that workers can get the education and training they need to advance their careers, particularly in key areas of the economy.

YES! This is a major victory for open education in the US. This win is the result of lots of hard work by many dedicated, talented people. And I’d like to think that our work has contributed to the White House’s recognition that “there is a growing body of evidence that the use of open education resources improves the quality of teaching and learning.” Feels good. I think we made a difference today.


The following is a pre-print of an essay set to appear in Bonk et al.’s forthcoming book MOOCs and Open Education around the World. It may undergo some additional editing before publication. Unlike the rest of the content on opencontent.org, this article is published under the Creative Commons Attribution ShareAlike license v4.0, as per my contract with Routledge. This essay remixes some material that was previously published on opencontent.org.

In this piece I briefly explore the damage done to the idea of “open” by MOOCs, advocate for a return to a strengthened idea of “open,” and describe an open education infrastructure on which the future of educational innovation depends.

MOOCs: One Step Forward, Two Steps Back for Open Education

MOOCs, as popularized by Udacity and Coursera, have done more harm to the cause of open education than anything else in the history of the movement. They have inflicted this harm by promoting and popularizing an abjectly impoverished understanding of the word “open.” To fully appreciate the damage they have imposed requires that I lightly sketch some historical context.

The openness of the Open University of the UK, first established in 1969 and admitting its first students in 1971, was an incredible innovation in its time. In this context, the adjective “open” described an enlightened policy of allowing essentially anyone to enroll in courses at the university – regardless of their prior academic achievement. For universities, which are typically characterized in metaphor as being comprised of towers, silos, and walled gardens, this opening of the gates to anyone and everyone represented an unprecedented leap forward in the history of higher education. For decades, “open” in the context of education primarily meant “open entry.”

Fast-forward 30 years. In 2001 MIT announced its OpenCourseWare initiative, providing additional meaning to the term “open” in the higher education context. MIT OCW would make the materials used in teaching its on campus courses available to the public, for free, under an “open license.” This open license provided individuals and organizations with a broad range of copyright-related permissions: anyone was free to make copies of the materials, make changes or improvements to the materials, and to redistribute them (in their original or modified forms) to others. All these permissions were granted without any payment or additional copyright clearance hurdles.

While there are dozens of universities around the world that have adopted an open entry policy, in the decade from 2001 – 2010 open education was dominated by individuals, organizations, and schools pursuing the idea of open in terms of open licensing. Hundreds of universities around the globe maintain opencourseware programs. The open access movement, which found its voice in the 2002 Budapest Open Access Initiative, works to apply open licenses to scholarly articles and other research outputs. Core learning technology infrastructure, including Learning Management Systems, Financial Management Systems, and Student Information Systems, is created and published under open licenses (e.g., Canvas, Moodle, Sakai, Kuali). Individuals have begun contributing significantly to the growing collection of openly licensed educational materials, like Sal Khan, who founded the Khan Academy. Organizations like the William and Flora Hewlett Foundation are pouring hundreds of millions of dollars into supporting an idea of open education grounded in the idea of open licensing. In fact, the Hewlett Foundation’s definition of “open educational resources” is the most widely cited:

OER are teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use and re-purposing by others. Open educational resources include full courses, course materials, modules, textbooks, streaming videos, tests, software, and any other tools, materials, or techniques used to support access to knowledge (Hewlett, 2014).

According to Creative Commons (2014), there were over 400 million openly licensed creative works published online as of 2010, and many of these can be used in support of learning.

Why is the conceptualization of “open” as “open licensing” so interesting, so crucial, and such an advance over the simple notion of open entry? In describing the power of open source software enabled by open licensing, Eric Raymond (2000) wrote, “Any tool should be useful in the expected way, but a truly great tool lends itself to uses you never expected.” Those never expected uses are possible because of the broad, free permissions granted by open licensing. Adam Thierer (2014) has described a principle he calls “permissionless innovation.” I have summarized the idea by saying that “openness facilitates the unexpected” (Wiley, 2013). However you characterize it, the need to ask for permission and pay for permission makes experimentation more costly. Increasing the cost of experimentation guarantees that less experimentation will happen. Less experimentation means, by definition, less discovery and innovation.

Imagine you’re planning to experiment with a new educational model. Now imagine two ways this experiment could be conducted. In the first model, you pay exorbitant fees to temporarily license (never own) digital content from Pearson, and you pay equivalent fees to temporarily license (never own) Blackboard to host and deliver the content. In a second model, you utilize freely available open educational resources delivered from inside a free, open source learning management system. The first experiment cannot occur without raising venture capital or other significant funding. The second experiment can be run with almost no funding whatsoever. If we wish to democratize innovation, as von Hippel (2005) has described it, we would do well to support and protect our ability to engage in the second model of experimentation. Open licenses provide and protect exactly that sort of experimental space.

Which brings us back to MOOCs. The horrific corruption perpetrated by Udacity, Coursera, and other copycat MOOCs is to pretend that the last forty years never happened. Their modus operandi has been to copy and paste the 1969 idea of open entry into online courses in 2014. The primary fallout of the brief, blindingly brilliant popularity of MOOCs was to persuade many people that, in the educational context, “open” means open entry to courses which are not only completely and fully copyrighted, but whose Terms of Use are more restrictive than those of the BBC or the New York Times. For example:

You may not take any Online Course offered by Coursera or use any Statement of Accomplishment as part of any tuition-based or for-credit certification or program for any college, university, or other academic institution without the express written permission from Coursera. Such use of an Online Course or Statement of Accomplishment is a violation of these Terms of Use.

The idea that someone, somewhere believes that open education means “open entry to fully copyrighted courses with draconian terms of use” is beyond tragic. Consequently, after a decade of progress has been reversed by MOOCs, we advocates of open education once again find ourselves fighting uphill to establish and advance the idea of “open.” The open we envision provides just as much access to educational opportunity as the 1960s vision championed by MOOCs, while simultaneously enabling a culture of democratized, permissionless innovation in education.

An “Open” Worth the Name

How, then, should we talk about “open?” What strengthened conception of open will promote both access and innovation? I believe we must ground our open thinking in the idea of open licenses. Specifically, we should advocate for open in the language of the 5Rs. “Open” should be used as an adjective to describe any copyrightable work that is licensed in a manner that provides users with free and perpetual permission to engage in the 5R activities:

  1. Retain – the right to make, own, and control copies of the work (e.g., download, duplicate, store, and manage)
  2. Reuse – the right to use the work in a wide range of ways (e.g., in a class, in a study group, on a website, in a video)
  3. Revise – the right to adapt, adjust, modify, or alter the work itself (e.g., translate it into another language)
  4. Remix – the right to combine the original or revised work with other open works to create something new (e.g., incorporate the work into a mashup)
  5. Redistribute – the right to share copies of the original work, your revisions, or your remixes with others (e.g., give a copy of the work to a friend)

These 5R permissions, together with a clear statement that they are provided for free and in perpetuity, are articulated in many of the Creative Commons licenses. When you download a video from Khan Academy, some lecture notes from MIT OpenCourseWare, an article from Wikipedia, or a textbook from OpenStax College – all of which use a Creative Commons license – you have free and perpetual permission to engage in the 5R activities with those materials. Because they are published under a Creative Commons license, you don’t need to call to ask for permission and you don’t need to pay a license fee. You can simply get on with the business of supporting your students’ learning. Or you can conduct some other kind of teaching and learning experiment – and you can do it for free, without needing additional permissions from a brace of copyright holders.

How would a change in the operational definition of “open” affect the large MOOC providers? If MOOC providers changed from “open means open entry” to “open means open licenses” what would the impact be? Specifically, if the videos, assessments, and other content in a Coursera or Udacity MOOC were openly licensed, would it reduce the “massive” access that people around the world have to the courses? No. In fact, it would drastically expand the access enjoyed by people around the world, as learners everywhere would be free to download, translate, and redistribute the MOOC content. MOOCs could become part of the innovation conversation.

Despite an incredible lift-off thrust comprised of hype and investment, MOOCs have failed to achieve escape velocity. Weighed down by a strange 1960s-meets-the-internet philosophy, MOOCs have started to fall back to earth under the pull of registration requirements, start dates and end dates, fees charged for credentials, and draconian terms of use. It reminds me of the old joke, “What do you call a MOOC where you have to register, wait for the start date in order to begin, get locked out of the class after the end date, have no permission to copy or reuse the course materials, and have to pay to get a credential?” “An online class.”

Despite all the hyperbole, it has become clear that MOOCs are nothing more than traditional online courses enhanced by open entry, and not the innovation so many had hoped for. Worse than that, because of their retrograde approach to “open,” MOOCs are guaranteed to be left by the wayside as future educational innovation happens because it is simply too expensive to run a meaningful number of experiments in the MOOC context.

Where will the experiments that define the future of teaching and learning be conducted, then? Many of them will be conducted on top of what I call the open education infrastructure.

Content As Infrastructure

The Wikipedia entry on infrastructure (Wikipedia, 2014) begins:

Infrastructure refers to the basic physical and organizational structures needed for the operation of a society or enterprise, or the services and facilities necessary for an economy to function. It can be generally defined as the set of interconnected structural elements that provide a framework supporting an entire structure of development…

The term typically refers to the technical structures that support a society, such as roads, bridges, water supply, sewers, electrical grids, telecommunications, and so forth, and can be defined as “the physical components of interrelated systems providing commodities and services essential to enable, sustain, or enhance societal living conditions.” Viewed functionally, infrastructure facilitates the production of goods and services.

What would constitute an education infrastructure? I don’t mean a technological infrastructure, like Learning Management Systems. I mean to ask, what types of components are included in the set of interconnected structural elements that provide the framework supporting education?

I can’t imagine a way to conduct a program of education without all four of the following components: competencies or learning outcomes, educational resources that support the achievement of those outcomes, assessments by which learners can demonstrate their achievement of those outcomes, and credentials that certify their mastery of those outcomes to third parties. There may be more components to the core education infrastructure than these four, but I would argue that these four clearly qualify as interconnected structural elements that provide the framework underlying every program of formal education.

Not everyone has the time, resources, talent, or inclination to completely recreate competency maps, textbooks, assessments, and credentialing models for every course they teach. As in the discussion of permissionless, democratized innovation above, it simply makes things faster, easier, cheaper, and better for everyone when there is high quality, openly available infrastructure already deployed that we can remix and experiment upon.

Historically, we have only applied the principle of openness to one of the four components of the education infrastructure I listed above: educational resources, and I have been arguing that “content is infrastructure” (Wiley, 2005) for a decade now. More recently, Mozilla has created and shared an open credentialing infrastructure through their open badges work (Mozilla, 2014). But little has been done to promote the cause of openness in the areas of competencies and assessments.

Open Competencies

I think one of the primary reasons competency-based education (CBE) programs have been so slow to develop in the US – even after the Department of Education made its federal financial aid policies friendlier to CBE programs – is the terrific amount of work necessary to develop a solid set of competencies. Again, not everyone has the time or expertise to do this work. Because it’s so hard, many institutions with CBE programs treat their competencies like a secret family recipe, hoarding them away and keeping them fully copyrighted (apparently without experiencing any cognitive dissonance while they promote the use of OER among their students). This behavior has seriously stymied growth and innovation in CBE in my view.

If an institution would openly license a complete set of competencies, that would give other institutions a foundation on which to build new programs, models, and other experiments. The open competencies could be revised and remixed according to the needs of local programs, and they could be added to, or subtracted from, to meet those needs as well. This act of sharing would also give the institution of origin an opportunity to benefit from remixes, revisions, and new competencies added to their original set by others. Furthermore, openly licensing more sophisticated sets of competencies provides a public, transparent, and concrete foundation around which to marshal empirical evidence and build supported arguments about the scoping and sequencing of what students should learn.

Open competencies are the core of the open education infrastructure because they provide the context that imbues resources, assessments, and credentials with meaning – from the perspective of the instructional designer, teacher, or program planner. (They are imbued with meaning for students through these and additional means.) You don’t know if a given resource is the “right” resource to use, or if an assessment is giving students an opportunity to demonstrate the “right” kind of mastery, without the competency as a referent. (For example, an extremely high quality, high fidelity, interactive chemistry lab simulation is the “wrong” content if students are supposed to be learning world history.) Likewise, a credential is essentially meaningless if a third party like an employer cannot refer to the skill or set of skills its possession supposedly certifies.

Open Assessments

For years, creators of open educational resources have declined to share their assessments in order to “keep them secure” so that students won’t cheat on exams, quizzes, and homework. This security mindset has kept assessments out of the commons.

In CBE programs, students often demonstrate their mastery of competencies through “performance assessments.” Unlike some traditional multiple-choice assessments, performance assessments require students to demonstrate mastery by performing a skill or producing something. Consequently, performance assessments are very difficult to cheat on. For example, even if you find out a week ahead of time that the end of unit exam will require you to make 8 out of 10 free throws, there’s really no way to cheat on the assessment. Either you will master the skill and be able to demonstrate that mastery or you won’t.

Because performance assessments are so difficult to cheat on, keeping them secure can be less of a concern, making it possible for performance assessments to be openly licensed and publicly shared. Once they are openly licensed, these assessments can be retained, revised, remixed, reused, and redistributed.

Another way of alleviating concerns around the security of assessment items is to create openly licensed assessment banks that contain hundreds or thousands of assessments – so many assessments that cheating becomes more difficult and time consuming than simply learning.

The Open Education Infrastructure

An open education infrastructure, which can support extremely rapid, low cost experimentation and innovation, must comprise at least these four parts:

  • Open Credentials
  • Open Assessments
  • Open Educational Resources
  • Open Competencies

This interconnected set of components provides a foundation that will greatly decrease the time, cost, and complexity of the search for more effective models of education. (It will provide related benefits for informal learning, as well.) From the bottom up, open competencies provide the overall blueprint and foundation, open educational resources provide a pathway to mastering the competencies, open assessments provide the opportunity to demonstrate mastery of the competencies, and open credentials, which point to both the competency statements and the results of performance assessments, certify to third parties that learners have in fact mastered the competency in question.
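As a rough illustration of how these four layers reference one another, here is a toy data model in Python. The names are invented for the sketch and are not part of any standard; the point is simply that a credential ultimately points back to competency statements and to the assessment evidence behind them:

    # Toy data model of the four-layer stack; invented names, not any standard's schema.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Competency:          # the foundation: what a learner should be able to do
        statement: str

    @dataclass
    class OpenResource:        # OER providing a pathway to mastering competencies
        title: str
        supports: List[Competency]

    @dataclass
    class Assessment:          # an opportunity to demonstrate mastery of a competency
        task: str
        measures: Competency

    @dataclass
    class Credential:          # certifies mastered competencies to third parties
        holder: str
        competencies: List[Competency]
        evidence: List[Assessment]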

When open licenses are applied up and down the entire stack – creating truly open credentials, open assessments, open educational resources, and open competencies, resulting in an open education infrastructure – each part of the stack can be altered, adapted, improved, customized, and otherwise made to fit local needs without the need to ask for permission or pay licensing fees. Local actors with local expertise are empowered to build on top of the infrastructure to solve local problems. Freely.

Creating an open education infrastructure unleashes the talent and passion of people who want to solve education problems but don’t have time to reinvent the wheel and rediscover fire in the process.

“Openness facilitates the unexpected.” We can’t possibly imagine all the incredible ways people and institutions will use the open education infrastructure to make incremental improvements or deploy novel innovations from out of left field. That’s exactly why we need to build it, and that’s why we need to commit to a strong conceptualization of open, grounded firmly in the 5R framework and open licenses.

References

Coursera. (2014). Terms of Use. https://www.coursera.org/about/terms

Creative Commons. (2014). Metrics. https://wiki.creativecommons.org/Metrics

Hewlett. (2014). Open Educational Resources. http://www.hewlett.org/programs/education/open-educational-resources

Mozilla. (2014). Open Badges. http://openbadges.org/

Raymond, E. (2000). The Cathedral and the Bazaar. http://www.catb.org/esr/writings/cathedral-bazaar/cathedral-bazaar/ar01s08.html

Thierer, A. (2014). Permissionless Innovation. http://mercatus.org/permissionless/permissionlessinnovation.html

Von Hippel, E. (2005). Democratizing Innovation. http://web.mit.edu/evhippel/www/democ1.htm

Wikipedia. (2014). Infrastructure. https://en.wikipedia.org/wiki/Infrastructure

Wiley, D. (2005). Content is Infrastructure. http://opencontent.org/blog/archives/215

Wiley, D. (2013). Where I’ve Been; Where I’m Going. http://opencontent.org/blog/archives/2723



http://opencontent.org/blog/page/3

Remembering Brent Lambert

I just learned that my colleague and friend Brent Lambert has passed on. As I’m reflecting on our relationship this morning, I want to share a few thoughts and feelings and stories.

I met Brent when he entered the PhD program at USU. He came to us with a Master’s degree (in CS), but no Bachelor’s. That always cracked me up about him – he was quirky like that. He loved making software and was extremely firm in his commitment to openness (and the other principles that guided his life). He had a passion for building things that would make the world a better place. Here’s an old blog post where I include his thinking on learning objects in a brief review of related thinking by folks like Wayne Hodgins, Stephen Downes, and Andy Gibbons.

My relationship with Brent was the beginning of many good things in my life. We worked together on a number of crazy projects that were each ideas WAY ahead of their time. There wasn’t an insanity I could propose that Brent couldn’t code. In 2003 we launched our first collaboration, an open source system for post-publication peer reviewed journals we called Pitch.

That same year we began work on two bigger projects. Open Learning Support was open source, online discussion software designed to allow learning communities to self-form, self-organize, and self-manage. This software was integrated with MIT OCW and Connexions for a time, providing the places for people to ask and answer questions about what they were learning.

And then there was eduCommons. This was definitely the most impactful work we did together, and also started in 2003. eduCommons is an OpenCourseWare Management System – server software that colleges and universities use to run their OCW publishing initiatives. This work we timed just about right. eduCommons came into its own about the same time the OCW Consortium did, and since MIT never did open source the platform they use to manage their OCW initiative, it was the best game in town for a number of years if you wanted to run and manage your own OCW. (Brent would have persuaded you that, even if MIT had opened their platform, eduCommons was still better.) By our count in 2007 or 2008, a full 1/3 of all OCWs in the world were running on eduCommons. Many still do. It’s quite a legacy for Brent, because while many of us worked on eduCommons, it was really his baby.

Brent was a fellow traveler. I remember very early on – maybe in 2004 – we were tapped by the Hewlett Foundation to head down to Rice to do a technical review of an early version of the Connexions platform. As always, Brent stuffed everything he needed for the trip into a single grey backpack. We flew down the evening before the review, headed to the hotel room, and began messing around with the platform. After some initial frustrations and minor injury to our geek pride, we vowed we would not sleep until we figured out how to author a module and then fork it. It ended up taking about 90 minutes.

We went from being a pair of troublemakers, to being a small team, like with Pitch (which we worked on with Corrine Ellsworth), to what we called the OSLO Group (Open, Sustainable Learning Opportunities Group), to COSL (the Center for Open and Sustainable Learning). Brent was still leading the eduCommons team inside COSL when I left USU in 2008.

I broke a lot of new ground with Brent. I wrote my very first Python with him. I joined my first SourceForge project with him. I was always asking him one question or another about why my code wouldn’t work, and he would always patiently correct and teach me.

Brent and his family moved down to Utah Valley a few years back. We never really managed to get together with all the things going on in the lives of two families with lots of kids. I always had it in the back of my mind that we should get together “soon,” as I expect he did. Now it will be a while until I’ll see him again, which makes me sad. But it’s great to think that Brent is on the other side, working just as energetically there to make it a better place.

I shed a few happy tears remembering these good old days with Brent. I’m sure I’ll shed more next time I’m stuck writing some Python, go to ask him about it, and remember that he’s temporarily out of reach. But I shed some bitter ones, too, thinking about Michelle and the kids who will miss him so much more than I will. My heart and my prayers go out to them, together with my reassurance that, despite it being a tragically, annoyingly long time, they will absolutely be together with him again one day.

Here’s to you, Brent. Thanks for everything. See you on the other side.


A Response to “OER Beyond Voluntarism”

Well, this has turned into a rather enjoyable conversation. To recap what has unfolded so far:

  • It began with Jose Ferreira inviting me to appear on a panel at the Knewton Symposium,
  • on the panel, I made the claim that in the near future 80 percent of general education courses would replace their commercial textbooks with OER,
  • after the conference, Jose responded to my claim by telling publishers why I was wrong,
  • I responded by explaining that the emergence of companies like Red Hat for OER would indeed make it happen, using the Learning Outcomes per Dollar metric as their principal tool of persuasion, and
  • Michael Feldstein argued that it depends.

Yesterday, Brian Jacobs of panOpen published an essay contributing to the conversation. While I agree that some in the field have yet to pick up on a few of the points he makes, I’m a little perplexed that he would choose to position these points as a response to writing by Michael, Jose, and me. By making these points in a response, he implies that we have yet to understand them. Take this bit for example:

Their comments, though, didn’t tackle what I’ve come to see as the core issue for the OER movement, a foundational assumption that has crimped its progress. The assumption holds that because open-source educational content is like open-source software…its application and uses should follow in a similar way. The short history of the two movements makes clear that this is not the case.

I’ve been accused of many things in my life, but never of missing the difference between open content and open source. As the person who coined the term “open content” sixteen years ago specifically for the purpose of differentiating it from open source, I’ve never had to defend against this particular allegation. Not sure what to say.

Or this:

The OER movement’s almost singular focus on cost can obscure the larger objective — actually getting more students through to graduation while ensuring that they’ve learned (and enjoyed learning) something along the way.

when I spent almost half of the post he is responding to laying out the Learning Outcomes per Dollar metric for empirically measuring the impact of OER use on students’ academic performance – and then demonstrating, with actual data from an OER adopter, the incredibly powerful ways that OER adoption impacts learning.

Perhaps the article isn’t a response to Jose, Michael, and me at all. Maybe Brian is just using the conversation as an opportunity to underline a few unrelated points he feels need making, and that’s fine. And these little tidbits aren’t what I actually wanted to write about, anyway. Sorry. What I really want to do is unpack and comment on the core argument of the essay. First I’ll disagree, and then I’ll agree.

Disagreeing

As important as [the OpenStax] project is, it doesn’t yet realize the promise of OER as disaggregated high-quality content created and modified from anywhere.

Overworked and underpaid instructors are looking to content and course technology to make their lives easier, not to take on the additional responsibility of managing their own content without financial recognition for that labor.

From these and other portions of the article, I believe Brian’s argument is based on two premises:

  • In order for students to get the full benefit of OER, their faculty need to be aggregating, revising, and remixing OER – really tailoring and customizing it to meet their specific needs
  • This is a lot of additional work for faculty, and they won’t do it unless they are provided with additional incentives

Arguing from these assumptions, he arrives at the following conclusion:

This can be done by charging students nominally for the OER courses they take or as a modest institutional materials fee. When there are no longer meaningful costs associated with the underlying content, it becomes possible to compensate faculty for the extra work while radically reducing costs to students… a system for distributed content development also needs to be accompanied by a system of distributed financial incentives.

So, just stating each step of the argument explicitly to make sure I’m getting it right (hopefully he’ll correct me in the comments if I’m getting it wrong):

  • if we charge students a little when faculty adopt OER,
  • we can use a portion of that revenue to incentivize faculty to do the work of curating disaggregated OER and engaging in the revising and remixing process,
  • (because if we don’t incentivize faculty by paying them, then most will never engage in these activities), and
  • if faculty aren’t aggregating, revising, and remixing disaggregated OER, students won’t get the full benefit of OER.

I largely agree with Brian’s premises, but disagree somewhat with where he takes the argument based on them. (As I’ll argue below, this disagreement is both healthy and a Good Thing.) Here’s where I think the primary differences in our thinking lie.

The “Full” Benefit of OER

First, while I agree in theory that students don’t get the full potential benefit of OER if their faculty don’t engage in the aggregate, revise, and remix process, it’s unclear to me how much benefit students miss out on when faculty simply adopt OER “as is” (though we’re studying this question now). For example, the overwhelming majority of faculty in the college algebra example from my previous post – where passing rates increased from 48% to 60% after faculty switched to OER – did zero aggregating, revising, or remixing. Maybe the change in pass rates would have been even higher if they had, but are we really going to pooh-pooh an increase of 12 real percentage points in the pass rate? If students are getting much of the potential benefit even when faculty don’t aggregate, revise, and remix, is it worth incurring the additional costs necessary to achieve 100% of the full benefit? This brings us directly back to the Learning Outcomes per Dollar discussion in my previous post. What’s the delta in learning we would place in the numerator? What’s the delta in cost we would place in the denominator?
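To make the numerator and denominator question concrete, here is a toy Learning Outcomes per Dollar comparison in Python. Only the 48% and 60% pass rates come from the example above; the $170 textbook price and $5 support fee are illustrative assumptions (borrowed from the support fee discussion below), not data from that institution:

    # Toy Learning Outcomes per Dollar comparison. Pass rates are from the college
    # algebra example above; the dollar figures are illustrative assumptions.
    def outcomes_per_dollar(pass_rate, cost_per_student):
        return pass_rate / cost_per_student

    commercial = outcomes_per_dollar(pass_rate=0.48, cost_per_student=170)  # assumed $170 textbook
    oer = outcomes_per_dollar(pass_rate=0.60, cost_per_student=5)           # assumed $5 support fee

    print(f"commercial: {commercial:.4f} passes per dollar")  # ~0.0028
    print(f"OER:        {oer:.4f} passes per dollar")         # ~0.1200

However you fill in the real numbers, both deltas belong in the same ratio.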

Why Don’t Faculty Remix?

Second, I disagree with the notion that not getting paid for their time and effort is the primary obstacle to faculty aggregating, revising, and remixing OER. I’ve trained hundreds of faculty in the past two years and have learned some interesting things along the way. One is that the faculty working in the institutions that serve our most at-risk students – those students who would likely benefit the most from OER – are the faculty with the least technical capability. In quite a few cases, these were faculty who needed support for technical tasks as “simple” as attaching a document to an email. Offering them $100 to remix some OER is not going to endow them with the skills – either technical or pedagogical – they need to do this effectively. That takes serious boots-on-the-ground training and support. It can be done, but it’s not a simple matter of offering a faculty stipend. (This also brings us directly back to the Learning Outcomes per Dollar discussion in my previous post.)

Builders, Adapters, and Adopters

Even in cases where faculty adoption of OER was supported by one-time grant funding (i.e., they were getting paid extra), our observation across dozens of campus visits and faculty trainings is that faculty generally fall into one of three categories: builders, adapters, and adopters. Builders have the time, interest, and skill to create, aggregate, revise, and remix OER. Adapters have the time, interest, and skill to make minor tweaks to OER that have been previously packaged in order to work “out of the box.” Adopters simply use OER designed to work right out of the box, just as they found them.

We think the distribution of faculty among these groups is something like 1% builders, 7% adapters, and 92% adopters. (As per our previous research on the number and types of changes faculty made to Flat World Knowledge’s open textbooks, “as with Duncan (2009), we found that the rates of revision and remix were relatively low. Only 7.5% of textbook adoptions over a two-year period were adoptions of custom books. This indicates that while the ability to revise and remix sounds exciting, the number of those who take advantage of this opportunity is relatively small.”) A strategy targeting the 1% – even if it grew to include the next 7% – is unlikely to have the broad impact we all hope OER will achieve. The strategy we’re looking for has to include the 92% without constraining the other 8%.

The Mythical Surplus

Another issue relating to paying faculty is that, as many of us have experienced, the offer of additional funding does not add hours to the day. Many of these faculty are already so overworked and behind on existing commitments that even with a little sweetener they can’t find the time to engage in aggregating, revising, and remixing OER. The entire notion of faculty who would remix if only they were paid assumes a professoriate with surplus time and skill who are looking to maximize their return on the expenditure of that surplus. Unfortunately, that is not the life experience of many faculty. While I freely admit that it’s a terribly hard trap to avoid falling into, this approach seems to disproportionately favor faculty at schools that are much better resourced than their community college cousins.

Incentives, Alignment, and Conflict

My final, and perhaps biggest, issue with paying faculty to adopt OER is the inherent misalignment of incentives it creates. For faculty who previously made their materials choices based primarily on what they thought was best for their students, we now throw money into the mix – “if you choose these materials, we’ll pay you!” And the incentive payment to the faculty member will inevitably be built into the cost which their students pay, raising the price for students in order to financially benefit faculty. Yuck. (Don’t most colleges have conflict of interest policies governing textbook adoptions that directly benefit faculty financially?)

Agreeing

I agree that there are costs associated with adopting OER – someone has to find, vet, properly attribute, load into the local platform, etc. the OER that will be used in classes. Sometimes a faculty member will have the time and skills to do this themselves. Sometimes an institution will provide these kinds of supporting services through the staff of their library and center for teaching and learning. Other institutions won’t have the internal capacity to provide these supports and will have to hire new people or partner with outside organizations for them.

Support Fees

In the latter cases, institutions have to find new sources of funding to pay for those new people or outside support services. There are many ways of doing this. Brian has described the “support fee” model. My experience has been that when you propose to a student “how would you feel if the school instituted a $5 or $10 course support fee in exchange for removing the $170 textbook from the syllabus?”, they happily ask “where do I sign up?” From the student perspective, the economics of this option are hard to argue against.

The INTRO Model

On Monday we’re submitting an article (for a special issue of EPAA) that introduces a new funding model we call the INTRO model – INcreased Tuition Revenue through OER. In this article we use actual enrollment, drop rate, tuition, policy, and other data from a large OER adopting institution to show that:

  • when faculty adopt OER, drop rates decrease significantly
  • when drop rates decrease, the institution refunds significantly less tuition
  • when the institution refunds less tuition, it has more funding to spend on things like supporting OER adoption among its faculty

In this particular example we demonstrate that, if the current OER pilot was expanded to all sections of the 20-some courses currently piloting OER, the institution could expect to retain over $100,000 a year in tuition that they’re currently refunding. Some of this new funding could be used to pay for services supporting faculty adoption of OER without charging students.
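The arithmetic behind the INTRO model is simple enough to sketch in a few lines of Python. Every figure below is a hypothetical placeholder, not data from the EPAA article, and the function name is mine:

    # A minimal sketch of the INTRO arithmetic. All figures are hypothetical
    # placeholders, not the enrollment or tuition data from the EPAA article.

    def tuition_retained(enrollments, old_drop_rate, new_drop_rate, refund_per_drop):
        """Estimate tuition an institution keeps when OER adoption lowers drop rates."""
        fewer_drops = enrollments * (old_drop_rate - new_drop_rate)
        return fewer_drops * refund_per_drop

    # Hypothetical example: 4,000 enrollments across the piloted courses,
    # a drop rate falling from 10% to 7%, and $900 of tuition refunded per drop.
    print(tuition_retained(4000, 0.10, 0.07, 900))  # roughly $108,000 under these assumptions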

I’m sure there are other models for funding OER adoption support services out there if we’re creative and open-minded enough to find them.

Parallel Experiments

And I am in total and complete agreement with this statement from Brian’s piece:

What’s needed are lots of entities — for-profit and nonprofit — to experiment with funding models.

YES! We need more experimentation happening, and we need it happening in parallel instead of serially. We can’t all stand around watching the Flat World Knowledge experiment, and only start trying something different when it becomes clear that their approach isn’t quite the right one. As Linus said, in what is possibly my favorite quote:

And don’t EVER make the mistake [of thinking] that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle. That’s giving your intelligence _much_ too much credit.

Even though I disagree with some of Brian’s conclusions (which is why I’m experimenting with a different business model), I absolutely want him out there experimenting with his particular business model. If I’m sufficiently humble, I’ll learn a thing or two from him before it’s all said and done. (If he’s sufficiently humble, Brian may learn something from me, too.) From this learning a new generation of models will emerge and be tested. They will be followed by another, further refined set of models. That’s how the field moves forward in its understanding of how to support OER adoption at scale, and it’s how at least 80% of general education courses will end up adopting OER in place of commercial textbooks.


OpenCon 2014

#OpenEd14 is getting close! For a wide range of reasons, this year’s 11th annual Open Education Conference looks like it will be the best ever. One thing contributing to the awesomeness of this year’s conference is other events organized around the same time in the same area.

One of these events is OpenCon 2014: The Student and Early Career Researcher Conference on Open Access, Open Education and Open Data, organized by SPARC and the Right to Research Coalition. As the name implies, this event is really focused on engaging students and early career individuals and helping them become effective advocates in the openness movement. The meeting will run from November 15-17 in Washington, D.C., and the program includes three days of talks, workshops, and in-the-field advocacy experience that leverages the conference’s location in the capital. Of course, a delegation of participants from OpenCon will also attend OpenEd.

Applications are still open until midnight tonight Pacific time – more than 1,600 people from over 120 countries have already applied. If you fall into the student / early career category, you should definitely apply.



http://opencontent.org/blog/page/4

I recently had the wonderful opportunity to participate on a panel about OER at the Knewton Education Symposium. Earlier this week, Knewton CEO Jose Ferreira blogged about ‘OER and the Future of Publishing’ for EdSurge, briefly mentioning the panel. I was surprised by his post, which goes out of its way to reassure publishers that OER will not break the textbook industry.

Much of the article is spent criticizing the low production values, lack of instructional design, and missing support that often characterize OER. The article argues that there is a potential role for publishers to play in each of these service categories, leveraging OER to lower their costs and improve their products. But it’s been over 15 years since the first openly licensed educational materials were published, and major publishers have yet to publish a single textbook based on pre-existing OER. Why?

Exclusivity, Publishing, and OER

The primary reason is that publishers are – quite rationally – committed to the business models that made them incredibly successful businesses. And the core of that model is exclusivity – the contractual right to be the only entity that can offer the print or digital manifestation of Professor Y’s expertise on subject X. Exclusivity is the bedrock of the publishing industry, and no publisher will ever meaningfully invest in building up the reputation and brand of a body of work which is openly licensed. Publisher B would simply sit on the sidelines while Publisher A exhausts its marketing budget persuading the world that its version of Professor Y’s open materials is the best in the field. Once Professor Y’s brand is firmly associated with high quality, Publisher B will release its own version of Professor Y’s open materials, free-riding on Publisher A’s marketing spend. Publisher A’s marketing efforts actually end up promoting Publisher B’s competing product in a very real way. No, publishers will never put OER at the core of their offerings, because open licensing – guaranteed nonexclusivity – is the antithesis of their entire industrial model. Some playing around in the supplementals market is the closest major publishers will ever come to engaging with OER.

New Models Enabled by OER

However, we are seeing the emergence of a new kind of organization, which is neither invested in preserving existing business models nor burdened with the huge content creation, distribution, and sales infrastructure that a large commercial publisher must support. (This sizable infrastructure, which once represented an insurmountable barrier to entry, is quickly becoming a millstone around the neck of big publishers facing the threat of OER.) The new breed of organization is only too happy to take the role of IBM or Red Hat and provide all the services necessary to make OER a viable alternative to commercial offerings. I had to chuckle a little reading the advice to publishers Jose provides in his post, because that list of services could almost have been copied and pasted from my company’s (Lumen Learning’s) website: iterative cycles of instructional design informed by data, integration services, faculty support, etc. I agree wholeheartedly that these are the kinds of services that must be offered to make OER a true competitor to commercial textbooks in the market – but I disagree with the idea that publishers will ever be willing to offer them. That realization is part of what led me to quit a tenured faculty job in a prestigious graduate program to co-found Lumen Learning.

All that said, the emergence of these organizations won’t spell the end of large textbook publishers as we know them. Instead, that distinction will go to the simplest possible metric by which we could measure the impact of the educational materials US students spend billions of dollars per year on: learning outcomes per dollar.

Learning Outcomes per Dollar

No educator would ever consciously make a choice that harmed student learning in order to save money. But what if you could save students significant amounts of money without doing them any academic harm? Going further, what if you could simultaneously save them significant money and improve their learning outcomes? Research on OER is showing, time and again, that this latter scenario is entirely possible. One brief example will demonstrate the point.

A recent article published in Educause Review describes Mercy College’s recent change from a popular math textbook and online practice system bundle provided by a major publisher (~$180 per student), to OER and an open source online practice system. Here are some of the results they reported after a successful pilot semester using OER in 6 sections of basic math:

  • At pilot’s end, Mercy’s Mathematics Department chair announced that, starting in fall 2012, all 27 sections (695 students) in basic mathematics would use [OER].
  • Between spring 2011 [no sections using OER] and fall 2012 [all sections using OER], the math pass rate increased from 48.40 percent to 68.90 percent.
  • Algebra courses dropped their previously used licenses and costly math textbooks and resources, saving students a total of $125,000 the first year.

By switching all sections of basic math to OER, Mercy College saved its students $125,000 in one year and changed their pass rate from 48 to 69 percent – a 44% improvement.

If you read the article carefully, you’ll see that Mercy actually received a fair amount of support in its implementation of OER, which was funded through a grant. So let’s be honest and put the full cost-related details on the table. Mercy, like many other schools, is still receiving the support it previously received for free through participation in the Kaleidoscope Open Course Initiative. Lumen Learning, whose personnel led the KOCI, now provides those same services to Mercy and other schools for $5 per enrollment.

So let’s do the learning outcomes per dollar math:

  • Popular commercial offering: 48.4% students passing / $180 textbook and online system cost per student = 0.27% students passing per required textbook dollar
  • OER offering: 68.9% students passing / $5 textbook and online system cost per student = 13.78% students passing per required textbook dollar

For the number I call the “OER Impact Factor,” we simply divide these two ratios with OER on top:

  • 13.78% students passing per required textbook dollar / 0.27% students passing per required textbook dollar = 51.03

This basic computation shows that, in Mercy’s basic math example, using OER led to an over 50x increase (i.e., a 5000% improvement) in percentage passing per dollar. No matter how you look at it, that’s a radical improvement.
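For readers who want to poke at the numbers themselves, here is the same arithmetic as a tiny Python script; the pass rates and costs are the Mercy College figures cited above:

    # Learning outcomes per dollar, using the Mercy College figures above.

    def passing_per_dollar(pass_rate_pct, required_cost_usd):
        """Percentage points of students passing per required textbook dollar."""
        return pass_rate_pct / required_cost_usd

    commercial = passing_per_dollar(48.4, 180)  # ~0.27
    oer = passing_per_dollar(68.9, 5)           # 13.78

    # "OER Impact Factor": the OER ratio divided by the commercial ratio.
    # Rounding the commercial ratio to 0.27 first (as the post does) gives ~51;
    # without that intermediate rounding the factor is about 51.2.
    print(oer / commercial)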

If similar performance data were available for two construction companies, and a state procurement officer awarded a contract to the vendor that produces demonstrably worse results while costing significantly more, that person would lose his job, if not worse. (As an aside, I’m not aware of any source where a taxpayer can find out what percentage of federal financial aid (for higher ed) or their state public education budget (for K-12) is spent on textbooks, making it impossible to even begin asking these kinds of questions at any scale.) While faculty and departments aren’t subject to exactly the same accountability pressures as state procurement officers, how long can they continue choosing commercial textbook options over OER as this body of research grows?

#winning

Jose ends his post by saying “Publishers who can’t beat OER deserve to go out of business,” and he’s absolutely right. But in this context, “beat” means something very different for OER than it does for publishers. For OER, “beat” means being selected by faculty or departments as the only required textbook listed on the syllabus (I call this a “displacing adoption”). Without a displacing adoption – that is, if OER are adopted in addition to required publisher materials – students may experience an improvement in learning outcomes but will definitely not see a decrease in the price of going to college. Hence, OER “beat” publishers only in the case of a displacing adoption. For publishers, the bar is much lower – to “beat” OER, publishers simply need to remain on the syllabus under the “required” heading.

How are OER supposed to clear this higher bar, particularly given the head start publishers have? OER have only recently started to catch up with publishers in many of the areas where publishers have enjoyed historical advantages, like packaging and distribution (cf. the amazing work being done by OpenStax, BCCampus OpenEd, Lumen Learning, and others). But OER have been beating publishers on price and learning outcomes for several years now, and proponents of OER would be wise to keep the conversation laser-focused on these two selection criteria. In a fortunate coincidence for us, I believe these are the two criteria that matter most.

OER offerings are always going to win on price – no publisher is ever going to offer their content, hosting platform, analytics, and faculty-facing services in the same zip code as $5 per student. (And when we see the emergence of completely adaptive offerings based on OER – which we will – even if they are more expensive than $5 per student they will still be significantly less expensive than publishers’ adaptive offerings.) Even if OER only manage to produce the same learning results as commercial textbooks (a “no significant difference” research result), they still win on price. “How would you feel about getting the same outcomes for 95% off?” All OER have to do is not produce worse learning results than commercial offerings.

So the best hope for publishers is in creating offerings that genuinely promote significantly better learning outcomes. (I can’t describe how happy I am to have typed that last sentence.) The best opportunity for publishers to soundly defeat OER is through offerings that result in learning outcomes so superior to OER that their increased price is justified. Would you switch from a $5 offering that resulted in a 65% passing rate to a $100 offering that resulted in a 67% passing rate? Would you switch to a $225 offering that resulted in a 70% passing rate? There is obviously some performance threshold at which a rational actor would choose to pay 20 or 40 times more, but it’s not immediately apparent to me where it is.

However, if OER can beat publishers on both price and learning outcomes, as we’re seeing them do, then OER deserve to be selected by faculty and departments over traditional commercial offerings in displacing adoptions.

I was the member of the panel Jose quoted as saying that ‘80% of all general education courses taught in the US will transition to OER in the next 5 years,’ and I honestly believe that’s true. The combined forces of the innovator’s dilemma, the emergence of new, Red Hat-like organizations supporting the ecosystem around OER, the learning outcomes per dollar metric, and the growing national frustration over the cost of higher education all seem to point clearly in this direction.


Aaron Wolf is contributing to a nice thread in the comments under my description of the recently revised definition of the “open” in “open content”. I’ve revised my ShareAlike example to distribute blame evenly across Wikipedia and MIT OCW based on his comments. You can see the current version of the definition at http://opencontent.org/definition/.

I want to address an accusation of Aaron’s here. He mentions other “definitions of Open that bother working to be precise and not vague,” in which category he includes the definitions from the Open Knowledge Foundation, the Free Cultural Works moderators, etc., in apparent contrast to my definition. I have a number of problems with all these definitions. I’ll address the OKF definition here just to provide a specific example.

Being ‘Precise and Not Vague’

First, the OKF definition misses the critical distinction between revising and remixing, lumping these both into the category of “modifications and derivative works.” The distinction between revising and remixing is critical because, among other things, one invokes the specter of license incompatibility while the other does not. People need to understand, plan, and manage against this important difference when they work with open content. You might argue that the difference is implied in the OKF definition, but that’s not “precise.”

Second, the OKF definition uses the language of “access” and not the language of “ownership.” In a world where things are moving increasingly toward streaming services where people can always access but never own anything, this is potentially confusing. Again, you can argue that ownership is implied in the definition, but that’s not “precise.”

Third, the OKF definition qualifies works as open based on their “manner of distribution.” After opening with this statement, 10 of the 11 clarifying bullet points begin with either the phrase “The license” or “The rights.” (The one bullet that does not begin this way could be rewritten in a clearer manner if it did.) Obviously, content qualifies as open based on the rights granted to you in its license and not based on its manner of distribution. Again, you can argue that, given the repeating chorus of rights and license language, this is implied in the definition, but that’s not “precise.”

The 5Rs in the new definition deal with each of these issues much more precisely.

Inheriting a Bright Line from the DFSG

I think the primary problem with many of these definitions is that they take the Debian Free Software Guidelines and try to coerce a document written for software to apply to content. This always results in a poor fit that feels forced. Content is different from software in meaningful ways and deserves its own treatment created specifically around its special affordances. (The OKF definition is particularly forced, as it takes a document written for software and tries, in a single derivative work, to coerce it into applying to both content and data, which are also meaningfully different from each other.)

I imagine that when Aaron says ‘definitions like the OKF definition are more “precise,”‘ what he really means to say is that these definitions draw a bright line with regard to which restrictions licensors can place on uses of content (Attribution and ShareAlike) and which ones they can’t (Noncommercial) if they want to be able to call that content “open.” I specifically refuse to draw a line of this kind in defining the open in open content.

There is a continuum of restrictions in the many licenses used for content (BY, BY NC, BY SA, BY NC SA, etc.), and I don’t find drawing an arbitrary line somewhere along that continuum to be a useful exercise. On the contrary, I find it a counterproductive exercise. Drawing this line allows people to believe that choosing a license just barely on the open side of the line (e.g., BY SA) is “good enough” and that there’s no need to consider being more open. In fact, when the continuum is collapsed into two discrete categories – open or not – the phrase “more open” doesn’t even have a meaning any longer. According to the bright line definitions, BY SA is just as open as BY – they both qualify as “open.”

By destroying the continuum of openness, the “bright line of restrictions” approach robs people of the opportunity to ask themselves questions like “should I be more open?” or “how can I be more open?” We should be doing everything we can to encourage people to ponder on those questions. We should help everyone be as open as possible, not simply “open enough.” That’s one of the main reasons why the “open” in “open content” is defined the way it is.


Earlier this week I read the Wikipedia entry on open content. Suffice it to say I was somewhat disappointed by the way the editors of the page interpreted my writings defining the “open” in open content. I think their interpretation was plausible and legitimate, but it is certainly not the message I intended people to take away after reading the definition. So, the fault for my unhappiness is mine for not having been clearer in my writing.

Consequently, I have refined and clarified the definition, which lives at http://opencontent.org/definition/, including a new heading for the section on license requirements and restrictions, and a new section on technical decisions and ALMS analysis. I present the revised definition and commentary below for quick reference. I’d be very interested in your reactions and feedback.

Hopefully the Wikipedians will update the entry soon…

Defining the “Open” in Open Content

The term “open content” describes any copyrightable work (traditionally excluding software, which is described by other terms like “open source”) that is licensed in a manner that provides users with free and perpetual permission to engage in the 5R activities:

  1. Retain – the right to make, own, and control copies of the content (e.g., download, duplicate, store, and manage)
  2. Reuse – the right to use the content in a wide range of ways (e.g., in a class, in a study group, on a website, in a video)
  3. Revise – the right to adapt, adjust, modify, or alter the content itself (e.g., translate the content into another language)
  4. Remix – the right to combine the original or revised content with other open content to create something new (e.g., incorporate the content into a mashup)
  5. Redistribute – the right to share copies of the original content, your revisions, or your remixes with others (e.g., give a copy of the content to a friend)

Legal Requirements and Restrictions
Make Open Content Less Open

While a free and perpetual grant of the 5R permissions by means of an “open license” qualifies a creative work to be described as open content, many open licenses place requirements (e.g., mandating that derivative works adopt a certain license) and restrictions (e.g., prohibiting “commercial” use) on users as a condition of the grant of the 5R permissions. The inclusion of requirements and restrictions in open licenses makes open content less open than it would be without them.

There is disagreement in the community about which requirements and restrictions should never, sometimes, or always be included in open licenses. Creative Commons, the most important provider of open licenses for content, offers licenses that prohibit commercial use. While some in the community believe there are important use cases where the noncommercial restriction is desirable, many in the community eschew the restriction. Wikipedia, one of the most important collections of open content, requires all derivative works to adopt a specific license. While they clearly believe this additional requirement promotes their particular use case, it makes Wikipedia content incompatible with content from other important open content collections, such as MIT OpenCourseWare.

Generally speaking, while the choice by open content publishers to use licenses that include requirements and restrictions can optimize their ability to accomplish their own local goals, the choice typically harms the global goals of the broader open content community.

Poor Technical Choices
Make Open Content Less Open

While open licenses provide users with legal permission to engage in the 5R activities, many open content publishers make technical choices that interfere with a user’s ability to engage in those same activities. The ALMS Framework provides a way of thinking about those technical choices and understanding the degree to which they enable or impede a user’s ability to engage in the 5R activities permitted by open licenses. Specifically, the ALMS Framework encourages us to ask questions in four categories:

  1. Access to Editing Tools: Is the open content published in a format that can only be revised or remixed using tools that are extremely expensive (e.g., 3DS MAX)? Is the open content published in an exotic format that can only be revised or remixed using tools that run on an obscure or discontinued platform (e.g., OS/2)? Is the open content published in a format that can be revised or remixed using tools that are freely available and run on all major platforms (e.g., OpenOffice)?
  2. Level of Expertise Required: Is the open content published in a format that requires a significant amount of technical expertise to revise or remix (e.g., Blender)? Is the open content published in a format that requires only a minimal level of technical expertise to revise or remix (e.g., Word)?
  3. Meaningfully Editable: Is the open content published in a manner that makes its content essentially impossible to revise or remix (e.g., a scanned image of a handwritten document)? Is the open content published in a manner making its content easy to revise or remix (e.g., a text file)?
  4. Self-Sourced: Is the format preferred for consuming the open content the same format preferred for revising or remixing the open content (e.g., HTML)? Is the format preferred for consuming the open content different from the format preferred for revising or remixing the open content (e.g., Flash FLA vs. SWF)?

Using the ALMS Framework as a guide, open content publishers can make technical choices that enable the greatest number of people possible to engage in the 5R activities. This is not an argument for “dumbing down” all open content to plain text. Rather it is an invitation to open content publishers to be thoughtful in the technical choices they make – whether they are publishing text, images, audio, video, simulations, or other media.
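To make the framework a little more concrete, here is one way an open content publisher might turn the four ALMS questions into a quick self-audit of a publication format. The yes/no scoring below is my own illustration for this post, not part of the ALMS Framework itself:

    # One way to operationalize the four ALMS questions as a quick self-audit
    # of a publication format. The scoring is illustrative only.

    alms_review = {
        "access_to_editing_tools": True,   # editable with free, cross-platform tools?
        "low_expertise_required": True,    # revisable without specialist skills?
        "meaningfully_editable": True,     # real source text/media, not a scan or export?
        "self_sourced": False,             # consumption format == editing format?
    }

    score = sum(alms_review.values())
    print(f"ALMS self-audit: {score}/4 technical choices favor the 5R activities")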



http://opencontent.org/blog/page/5

There are several things I’ve read / heard recently that have provoked a response in me but I’ve been negligent in responding publicly. Dumping some of those thoughts out here.

Personalization

Audrey Watters provides the best summary of a recent conversation on personalization in education. A lot of the conversation is around what personalization means and, given any specific definition, should we even be attempting to personalize learning. Obviously, the answer to the latter question depends on how you address the former.

For me, personalization comes down to being interesting. You have successfully personalized learning when a learner finds it genuinely interesting. Providing me with an adaptive, customized pathway through educational materials that bore me out of my mind is not personalized learning. It may be better than forcing me through the same pathway that everyone else takes, but I wouldn’t call it personalized.

In my imagination I have this notion of “Netflix Hell” related to personalized learning (did I hear this example from someone else?). Imagine if your only option for watching movies was to log in to Netflix and watch the movies it recommended to you, in the order it recommended them. Who wants that? Who would pay for that? This is essentially where the current “vision” of personalization is taking us. But that vision aims too low – we need to help students find their learning interesting. If making learning interesting is what we mean by personalizing learning, we should absolutely be doing that.

Monopolies

On a different note, Fred Wilson wrote a great post recently about Platform Monopolies that does a terrific job of making an argument for OER. You should read the whole post, but this quote summarizes it nicely:

So, as an investor, when you see a dominant market power emerge, you should start asking yourself “what will undo that market power?” And you should start investing in that.

If higher education textbook publishers have not emerged as a dominant market power, I don’t know who has. And of course I think OER are what will undo that market power. As high quality OER continue to expand into additional subject matter areas, and as efficacy research continues to show that learners using OER learn just as much or more than students using publisher materials, this will likely mean trouble for publishers. Very quickly, what today are their competitive advantages – their huge authoring, publishing, and sales machineries – will transform into gigantic liabilities.


Despite thoughtful disagreement about the term “infrastructure” from people I greatly respect, I continue to find the term extraordinarily useful in my own thinking about how we improve education. As interest in competency-based education continues to grow, we have an incredible opportunity to expand and to open the core pieces of the education infrastructure. But before I go further, a few words about “infrastructure” to make sure we’re all on the same page.

The Wikipedia entry on infrastructure begins:

Infrastructure refers to the basic physical and organizational structures needed for the operation of a society or enterprise, or the services and facilities necessary for an economy to function. It can be generally defined as the set of interconnected structural elements that provide a framework supporting an entire structure of development…

The term typically refers to the technical structures that support a society, such as roads, bridges, water supply, sewers, electrical grids, telecommunications, and so forth, and can be defined as “the physical components of interrelated systems providing commodities and services essential to enable, sustain, or enhance societal living conditions.” Viewed functionally, infrastructure facilitates the production of goods and services.

What would constitute an education infrastructure? What types of components are included in the set of interconnected structural elements that provide the framework supporting education?

I can’t imagine a way to conduct a program of education without all four of the following components: competencies or learning outcomes, educational resources that support the achievement of those outcomes, assessments by which learners can demonstrate their achievement of those outcomes, and credentials that certify their mastery of those outcomes to third parties. Certainly there are more components to the core education infrastructure than these four, but I would argue that these four clearly qualify as interconnected structural elements that provide the framework underlying every program of formal education.

Why Bother With an Open Education Infrastructure?

Recently I’ve had the opportunity to spend time thinking about practical ways of spreading the influence of openness across the entire education infrastructure. But why continue focusing on infrastructure at all? I want to make it as simple, fast, and inexpensive as possible for people and institutions to experiment with new models of education – much in the same way the Reclaim Hosting folks are deploying open educational technology infrastructure to make it fast, cheap, and easy for folks to experiment with the technologies underlying new models. Not everyone has the time, resources, talent, or inclination to completely recreate competency maps, textbooks, assessments, and credentialing models for every course they teach. Similarly on the technology side, not everyone has the time or inclination to code up a new blogging platform from scratch every time they want to post an article online. It simply makes things faster, easier, cheaper, and better for everyone when there is high-quality, openly available infrastructure already deployed that we can remix and build upon.

Opening the Education Infrastructure

Historically, we have only applied the principle of openness to one of the four components of the education infrastructure I listed above: educational resources. If you’re not familiar with the 5Rs model of thinking about open educational resources (OER), give that summary a quick read. I have been arguing that “content is infrastructure” for about a decade now. More recently, Mozilla has created and shared an open credentialing infrastructure through their open badges work. But little has been done to promote the cause of openness in the areas of competencies and assessments.

Open Competencies

I think one of the primary reasons competency-based education (CBE) programs have been so slow to develop in the US – even after the Department of Education made its federal financial aid policies friendlier to CBE programs – is the terrific amount of work necessary to develop a solid set of competencies. Again, not everyone has the time or expertise to do this work. It’s really hard. And because it’s so hard, many institutions with CBE programs treat their competencies like a secret family recipe, hoarding them away and keeping them fully copyrighted (apparently without experiencing any cognitive dissonance while they promote the use of OER among their students). This behavior has seriously stymied growth and innovation in CBE in my view.

If an institution openly licensed a complete set of competencies, it would give other institutions a foundation on which to build new programs, models, and other experiments. The open competencies could be revised and remixed according to the needs of local programs, and added to or subtracted from to meet those needs as well. This act of sharing would also give the institution of origin an opportunity to benefit from the remixes, revisions, and new competencies that others add to the original set.

Furthermore, openly licensing more sophisticated sets of competencies, like the quantitative domain maps I wrote about a few weeks ago, provides a public, transparent, and concrete foundation around which to marshal empirical evidence and build supported arguments about the scoping and sequencing of what students should learn.

Open competencies are the core of the open education infrastructure because they provide the context that imbues resources, assessments, and credentials with meaning – from the perspective of the instructional designer, teacher, or program planner. (They are imbued with meaning for students through additional means as well.) You don’t know if a given resource is the “right” resource to use, or if an assessment is giving students an opportunity to demonstrate the “right” kind of mastery, without the competency as a referent. (For example, an extremely high quality, high fidelity, interactive chemistry lab simulation is the “wrong” content if students are supposed to be learning world history.) Likewise, a credential is essentially meaningless if a third party like an employer cannot refer to the skill or set of skills its possession supposedly certifies.

Open Assessments

For years, creators of open educational resources have declined to share their assessments in order to “keep them secure” so that students won’t cheat on exams, quizzes, and homework. This security mindset has kept assessments almost entirely out of the open content commons.

In CBE programs, students often demonstrate their mastery of competencies through “performance assessments.” Unlike some traditional multiple choice assessments, performance assessments require students to demonstrate mastery by performing a skill or producing something. Consequently, performance assessments are very difficult to cheat on. For example, even if you find out a week ahead of time that the end of unit exam will require you to make 8 out of 10 free throws, there’s really no way to cheat on the assessment. Either you will master the skill and be able to demonstrate that mastery or you won’t.

Because performance assessments are so difficult to cheat on, keeping them secure should not be a concern, making it possible for performance assessments to be openly licensed and publicly shared. Once they are openly licensed, these assessments can be retained, revised, remixed, reused, and redistributed.

Another way of alleviating concerns around the security of assessment items is to create openly licensed assessment banks that contain hundreds or thousands of assessments – so many assessments that cheating becomes more difficult and time consuming than simply learning.

An Open Infrastructure Stack for Education

Open Credentials
Open Assessments
Open Educational Resources
Open Competencies

This interconnected set of components provides a foundation that will greatly decrease the time, cost, and complexity of the search for innovative and effective new models of education. (It will provide related benefits for informal learning as well.) From the bottom up, open competencies provide the overall blueprint and foundation, open educational resources provide a pathway to mastering the competencies, open assessments provide the opportunity to demonstrate mastery of the competencies, and open credentials – which point to both the competency statements and the results of performance assessments – certify to third parties that learners have in fact mastered the competencies in question.
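To make those interconnections concrete, here is a small sketch of how a credential might point back down the stack. Every URL and field name below is invented for illustration; this is not drawn from the Open Badges specification or any other existing standard:

    # A sketch of how the pieces of the stack might reference one another.
    # All URLs and field names are invented for illustration.

    credential = {
        "type": "open_credential",
        "certifies_competency": "https://example.edu/competencies/algebra/linear-equations",
        "evidence": [
            {
                "assessment": "https://example.edu/assessments/linear-equations/performance-task-3",
                "result": "mastered",
            }
        ],
        "supporting_oer": [
            "https://example.edu/oer/algebra/linear-equations-module",
        ],
    }

    # A third party can follow these references to see exactly which openly
    # licensed competency was mastered and how that mastery was demonstrated.
    print(credential["certifies_competency"])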

When open licenses are applied up and down the entire stack – creating truly open credentials, open assessments, open educational resources, and open competencies, resulting in an open education infrastructure – each part of the stack can be altered, adapted, improved, customized, and otherwise made to fit local needs without the need to ask for permission or pay licensing fees. Local actors with local expertise are empowered to build on top of the infrastructure to solve local problems. Freely.

And that’s why I keep talking about infrastructure. We can’t solve other people’s problems for them, but we can make it infinitely easier for them to solve their own problems. Providing an open education infrastructure unleashes the talent and passion of people who want to solve education problems but don’t have time to reinvent the wheel and rediscover fire in the process.

Both Lumen as an organization and I as an individual are strongly committed to developing and deploying this infrastructure and being some of the actors who build on top of it. We’ve just begun work on a major CBE project, and I’m ecstatic to be working with institutional partners who understand the power of open and share our vision and commitment to making the open education infrastructure a reality. Of course we won’t build out the missing pieces of the entire infrastructure during one project, but we are going to move the ball significantly down the field.

Always remember, “openness facilitates the unexpected.” We can’t possibly imagine all the incredible ways people and institutions will use the open education infrastructure to incrementally improve or completely reinvent themselves. And that’s exactly why we need to build it.


The ideas expressed in the Reclaim Your Domain and IndieWebCamp work continue to inform my thinking about the 5th R (retain) and the notion that students should be able to “Own Your Content, Own Your Data” when it comes to online learning.

A few weeks ago I ran across Known which fascinated me but looked to be too immature to use yet. Then Jim described Tim Owens’ experiments with Known. That gave me enough confidence to dig into the code myself and see if I couldn’t get it running.

But what is Known and why is it so interesting? Known is a publication platform that uses the “POSSE” publication model, where POSSE stands for “Publish (on your) Own Site, Syndicate Elsewhere”. You can post photos, status updates, checkins, etc., to your own site and have them syndicated out to other sites if you like (e.g., push your checkins to Foursquare or your status updates to Twitter or Facebook).

The POSSE model is just beautiful. It represents everything empowering about the Reclaim and Retain work. In fact, the more I wrapped my head around it, the more excited I got.

As a first step, I took a computer here at the house and put a clean Ubuntu install on it and set it up on the home network. Then I configured dyndns to point a new domain – http://davidwiley.social/ at the box. Finally, I installed Known on the box together with the plugins for Facebook, Twitter, and Foursquare (haven’t got the Flickr plugin working yet.) Now I’m publishing all my photos, status updates, checkins, etc. to an open source system, on my own domain, running on an open source OS, on my own hardware, on my own network, and pushing some of that content out to the silos where my friends are probably expecting to find it. “Own your content, own your data” indeed.

There’s something unspeakably gratifying about owning every link in the chain of publication of your own content. The feeling of demoting the social silos like Facebook to the role of syndication endpoints may be even more gratifying. And did I mention – (friends with Known installs + RSS + Feedly) = (decentralized Facebook replacement)? What is that bell I hear tolling?

It puts the old joke about Blackboard and Facebook in a new context:

Q: What would happen if Facebook worked like Blackboard?

A: Every 15 weeks Facebook would delete all your photos and status updates and unfriend all your friends.

The question immediately arises – when will we be able to POSSE into our formal learning environments? Could it be done today? For example, could we write a Known plugin that would let us POSSE into Canvas? Knowing what I do of their API, I think we could.
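As a sketch of what such a plugin might do, the snippet below pushes a post that already lives on a student’s own domain into Canvas as an assignment submission. The endpoint and parameter names reflect my reading of the Canvas REST API (the “Submit an assignment” call in the Submissions API) and should be verified against Instructure’s documentation; a real Known plugin would of course live in Known’s PHP plugin system rather than in a standalone Python script:

    # POSSE into Canvas: after publishing on your own site, syndicate a link
    # to the post into the LMS as an assignment submission.
    # Endpoint and parameter names are my reading of the Canvas REST API
    # (Submissions: "Submit an assignment") and should be verified.

    import requests

    CANVAS = "https://canvas.example.edu"          # hypothetical Canvas instance
    TOKEN = "replace-with-a-canvas-access-token"   # per-user API token

    def posse_to_canvas(course_id, assignment_id, post_url):
        """Submit a URL on the student's own domain as their assignment submission."""
        resp = requests.post(
            f"{CANVAS}/api/v1/courses/{course_id}/assignments/{assignment_id}/submissions",
            headers={"Authorization": f"Bearer {TOKEN}"},
            data={
                "submission[submission_type]": "online_url",
                "submission[url]": post_url,
            },
        )
        resp.raise_for_status()
        return resp.json()

    # e.g. posse_to_canvas(101, 2048, "http://davidwiley.social/2014/homework-3")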

How would that change students’ relationships with their courses and institutions? Maybe this is already where the Reclaim folks are going, and I’m only just catching up, but give each student (1) their own domain, (2) a Known install, and (3) the ability to POSSE into the LMS – and just think about the implications. What does “submitting” homework mean now? What does an e-portfolio mean now? How do assessments need to change when there are worked examples of assignments everywhere? And where was I ever going to point the Evidence metadata in an open badge before students had this?

And why ignore faculty? Just make each faculty member’s Known installation speak LTI (that’s your Blackboard plugin) and what happens to faculty ownership, licensing, and control of their content? Hmmm… Known speaking LTI… To paraphrase Elton John, “Goodbye, xpLOR, though I never really knew you at all.”

Perhaps I’m overly excited. But I don’t think so. Empowering people to truly own their content and own their data, on their own domain, with POSSE capabilities, will change things. Perhaps we will finally reach the point where people quit using jailbreak as a verb. I haven’t really addressed it here, but I’ll explore the relationship between POSSE and “open” more in a future blog post. I just have to unexplode my brain first.



http://opencontent.org/blog/page/6

I’ve just started working on a major competency-based education (CBE) initiative with Lumen (specifics coming soon), which has helped me see that the principles of open education are, generally speaking, nowhere to be found in the competency-based education space. To be clear – many institutions are using OER in their CBE programs, but almost every institution doing CBE seems to hoard their competencies like the family recipe for a secret sauce. You know what this made me think…

As part of our new project, Lumen will be creating openly licensed competency maps – not just openly licensed lists of competencies, or even the imperceptibly more nuanced indented lists of competencies. We’ll be openly licensing full-on, multi-dimensional maps of each competency space, complete with membership information about which competencies fall along which dimensions of expertise (based first on a theoretical model, and then continuously improved over time using statistical models driven by empirical data), together with difficulty estimates of each competency in each dimension (again, based initially on a theoretical model, and then continuously improved over time using an IRT-based model driven by empirical data). These continuously improved, openly licensed competency maps will provide much deeper insight into the multiple trajectories from novice to competence in each domain, together with a characterization and ordering of the smaller competencies inside each dimension of competence.
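To make the difficulty-estimation idea concrete, here is a deliberately crude first pass in Python: a Rasch-style approximation that treats each competency’s difficulty as the log-odds of an incorrect response. This is only a starting point compared to the full IRT modeling described above, and the response matrix is invented for the example:

    # A crude, first-pass Rasch-style difficulty estimate for each competency.
    # The real project would fit a proper IRT model; this only illustrates the idea.

    import numpy as np

    # rows = learners, columns = assessed competencies (1 = demonstrated mastery)
    responses = np.array([
        [1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 0, 0],
    ])

    p_mastered = responses.mean(axis=0)                  # proportion mastering each competency
    difficulty = np.log((1 - p_mastered) / p_mastered)   # higher = harder
    difficulty -= difficulty.mean()                      # center the scale at zero

    for i, d in enumerate(difficulty):
        print(f"competency {i}: difficulty {d:+.2f}")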

If you don’t know my work on Quantitative Domain Mapping, you can see the technique applied to first semester music theory in this working paper from 2001, which also draws out some of the pedagogical implicatons of discovering that the relationships between the smaller competencies in your domain are not actually what you thought they were. (If you’re really interested, the QDM line of my work actually began in my dissertation, where it’s described in pages 59-67.) My thinking has of course evolved over the years, but it’s exciting to be picking up this strand of work again. The synergies between DQM and openness are amazing and full of promise.

I believe work of this nature – that is, bringing the principles of openness, sharing, and continuous improvement – into our work on the competencies themselves is some of the most important work lying ahead of the field in the next five years. It is utterly absent now, and we need to integrate openness into our work on competencies while this body of work is still relatively young and flexible.


Making an Impact

If you’re interested in learning how to make a sustainable difference in the world, you absolutely must study and meditate on the caps lock wisdom of FAKE GRIMLOCK, the Robot Dinosaur. No matter how often I read him, I find his writing inspiring, hysterical, and worth pondering at length. His halting style of writing practically begs you to stop and reflect on what exactly he’s roaring at you. And you should.


The Incompleteness of Connectivism

Stephen has written a terrific post on connectivism as a learning theory. This is one of the briefest – and consequently, best – statements I’ve read on the subject.

Let me begin by saying that I’m a fan of connectivism. Personally, I’m inclined to be persuaded by the connectivist account as Stephen, George, and others have articulated it. But – while I haven’t read every piece written on the topic – those I have read contain a gaping hole which I feel must be addressed before the theory can be considered complete and, therefore, a legitimate alternative to longer established learning theories.

Stephen explains:

When I say of connectivism that ‘learning is the formation of connections in a network’ I mean this quite literally. The sort of connections I refer to are between entities (or, more formally, ‘nodes’)… In particular, I define a connection as follows (other accounts may vary): “A connection exists between two entities when a change of state in one entity can cause or result in a change of state in the second entity.”

In Stephen’s account, connections are defined as a kind of relationship between entities. However, I have never read a connectivist account of where entities come from, or a connectivist description of their nature. And defining an undefined word exclusively in terms of a second undefined word kicks the semantic can down the road. And building a learning theory on a term with such a definition seems “risky.”

But as I said above, this is not a critique of what has been written about connectivism – I’ve found that writing to be quite persuasive. This is simply a statement about what remains to be considered and written about before connectivism can be considered sufficiently complete.



http://opencontent.org/blog/page/7

Back in December Michael Feldstein wrote a terrific post about Pearson’s new initiative around “efficacy.” There has been a great thread of comments attached to his (as always) excellent piece of writing. I’ve been wanting to add my thoughts on the topic for a while. I’m finally getting around to it.

The Conversation Can’t Be About Efficacy (Only)

Many of you know I am hugely inspired by Bloom’s work on the “2 sigma problem.” In many ways, Bloom’s work is the last word in instructional efficacy – for three decades now there has been no mystery whatsoever about the most effective way to teach. Bloom and his group showed conclusively that the average student who:

  1. received their instruction via individual (or very small group) tutoring, and
  2. whose tutors took a mastery-based approach to instruction

performed two standard deviations better than the average student who received traditional classroom-based instruction (hence the “2 sigma” name). Another way of saying this is that the average student in the tutored group outperformed 98% of students in the traditional classroom group. So if Pearson or anyone else wanted to evangelize and propagate truly effective education, they just need to train tutors on mastery-based approaches and provide one for every student in the world.
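(The “98%” figure follows directly from the normal distribution: a score two standard deviations above the mean sits at roughly the 98th percentile, as the one-liner below confirms.)

    # Where the ~98% figure comes from: the normal CDF evaluated at +2 sigma.

    from math import erf, sqrt

    def normal_cdf(z):
        return 0.5 * (1 + erf(z / sqrt(2)))

    print(round(normal_cdf(2) * 100, 1))  # -> 97.7, i.e. roughly 98% of the comparison group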

But there’s a problem with this approach, and here is one of the rare moments when the heady realm of educational research actually reaches down and touches the lowly earth. Bloom recognizes that his discovery of this incredibly effective way to support student learning is of only academic interest because, even though it shows the amazing potential of the average student to learn significantly more than s/he does in a typical classroom environment, there is no way to make it work in the real world:

“The tutoring process demonstrates that most of the students do have the potential to reach this high level of learning. An important task of research and instruction is to seek ways of accomplishing this under more practical and realistic conditions than one-to-one tutoring, which is too costly for most societies to bear on a large scale. This, then, is the ‘2 sigma’ problem.” (p. 6; emphasis in original)

In today’s jargon, this kind of tutoring doesn’t scale. And this is the reason we refer to Bloom’s work as the “2 sigma problem.” The problem isn’t that we don’t know how to drastically increase learning. The two-part problem is that we don’t know how to drastically increase learning while holding cost constant. Many people have sought to create and publish “grand challenges” in education, but to my mind none will ever be more elegant than Bloom’s from 30 years ago:

“If the research on the 2 sigma problem yields practical methods – which the average teacher or school faculty can learn in a brief period of time and use with little more cost or time than conventional instruction – it would be an educational contribution of the greatest magnitude.” (p. 6; emphasis in original)

So the conversation can’t focus on efficacy only – if there were no other constraints, we actually know how to do “effective.” But there are other constraints to consider, and to limit our discussions to efficacy is to remain in the ethereal imaginary realm where cost doesn’t matter. And cost matters greatly.

Analogous Problems in Big Pharma and Big Publishing

In this sense, the problem isn’t unique to pedagogies (e.g., tutoring). Imagine, for example, a Big Pharma company that discovers a cure for cancer. They announce to the world with much fanfare that “the three most common cancers are cured!” and that the medication is available worldwide immediately. However, the pills cost $10,000 every six months and must be taken for four years to effect the cure. What percentage of people in the world with these three cancers will actually be cured? Far less than 1%. There’s a significant difference between the “effectiveness” of a product in a clinical study, where you guarantee that every participant will take your pill, and the real world where insane (immoral?) pricing means that only the extremely wealthy are able to take your pill. Big Pharma’s claims about having “cured” anything seem tone deaf at best when the cure is priced out of the reach of normal people.

Textbooks are to faculty as medicines are to doctors – that is, faculty prescribe the textbooks, but students (like patients) are the ones who have to pay for them. And when a Big Publisher releases an exciting report about the efficacy of a new textbook or product like MyMathLab, that study will have been conducted in a controlled lab setting where Pearson guarantees that every student is using the product. “Great!” a faculty member reading the report might think, “we’ve cured math cancer! I’ll adopt this product!” But if MyMathLab is so expensive that the majority of students in a course can’t afford to buy it, we’re back to not having cured anything. Big Publishers’ claims about “highly effective materials” seem completely out of touch to students (and their parents) who can’t afford them.

Again, we can’t talk exclusively about efficacy. When we try to, we lapse back into the tree falling in a forest – if a huge proportion of the population can’t get access to some company’s textbook / MyLab / whatever, can we honestly claim that it’s highly effective? Access is critical. There can be no efficacy without access. And when access conditions in the research lab do not mirror access conditions in the real world, efficacy studies tell us nothing about the actual efficacy of a product. We have to add a consideration of students’ ability to actually access and use (and as I have argued elsewhere, own a copy of) the product to discussions about efficacy.

Efficiency and the Golden Ratio

If we want to actually change the experience of students in the real world, rather than talking about efficacy we need to talk about the relationship between efficacy and cost – efficiency. And to me, the best way to talk about efficiency is using a measure I’ve been calling the “golden ratio.” This is a measurement that puts efficacy in the numerator and cost in the denominator, and though I’ve been talking about “standard deviations per dollar” for five or so years now, my own conception of the measure continues to evolve, and I want to develop it further in this article. But suffice it to say that any conversation of efficacy that ignores cost is purely academic in the worst sense of the word (see definition 2).

Let’s begin with something obvious but important: by definition, an average investment typically results in average learning. In other words, by definition, paying the costs typically associated with putting a teacher at the front of a room of students generally results in the amount of learning we typically see. Now, the golden ratio doesn’t account for these baseline costs or this baseline amount of learning – the ratio compares changes in costs to changes in learning. Let’s return to the Bloom 2 sigma work as an example.

Bloom has already provided us with a numerator – when students each receive their instruction from a tutor (instead of as part of a larger class), there is a two standard deviation gain in learning over students who receive traditional classroom-based instruction. However, Bloom doesn’t give us a denominator – he doesn’t tell us what that tutoring would really cost. If we assume that a reasonable full-time tutor costs about the same as the average full-time teacher gets paid, and subtract the cost of the classroom teacher pro-rated across all students, then this Bloom-style tutoring adds about $50,000 per student to the annual cost of instruction. In the golden ratio (rg) way of thinking, we might ask “is an additional 2 standard deviations of learning worth an additional $50,000 per student? Does rg = 2 / 50,000 = 0.00004 standard deviations per dollar seem like a good deal?” Of course, whether it’s worth it or not is completely irrelevant because there is no way the overwhelming majority of organizations could afford it.

The Golden Ratio and Publisher Textbooks

I’m persuaded that one of the reasons we see lower rates of student success in higher education, especially among at-risk students, is that they cannot afford access to the core instructional materials intended to support learning in their courses. In a recent survey of over 15,000 students, 23% of students report that they frequently do not buy required textbooks due to cost, and 64% of students report skipping required textbooks at some point due to cost. Textbooks and related services are, quite simply, immorally expensive (US $1.5B adjusted operating profit in 2012 for a single publisher?), and the disappearing ink strategies publishers and schools recommend aren’t legitimate responses to the problem.

Now, I’m not saying that I believe that a better designed textbook can make a 2 sigma difference in student learning – because I don’t – but textbooks are in the same two-part problem category as Bloom’s 2 sigma problem. There is a potential benefit that textbooks, online services, and other products can provide (a numerator), and there are associated costs that we ask students to bear (the denominator). How do we maximize the numerator while holding the denominator steady? Unfortunately, we almost NEVER talk about these educational products in these terms. In fact, in higher education we almost never talk about changes in learning beyond changes to the metric “percent completing with a C or better.” But I think we can still use this more crude numerator to calculate some golden ratios that are both interesting and useful. For the remainder of this post let’s define rg as pass rate (in percent) divided by the cost of required textbooks per student (in dollars), and I’ll use “textbook” as a shorthand for all kinds of materials, online services, etc.

As an example, what is the pass rate for college algebra courses where faculty assign only a Pearson textbook, and what is the pass rate for other sections of the same course where they assign the Pearson textbook plus MyMathLab? And how much extra does MyMathLab cost? Even if we could force every student in a course to purchase the product (so the real world would mirror the lab), would a 7% increase in pass rate be worth an additional $80 per student? Would a 3% increase? Would a 35% increase?

But more interestingly, in the real world where we can’t replicate controlled lab settings and increased costs will lead to decreased student access, will the additional number of students who might pass because of their use of MyMathLab exceed the number of students who will skip the textbook + MyMathLab bundle altogether due to cost and suffer academically as a consequence? What will the actual net impact on learning and pass rates be when a faculty member adopts a $170 bundle like this? As amazing as it seems, the field appears to have neither the vocabulary nor the conceptual frame to even carry on this conversation.

The Golden Ratio and OER

One of the reasons I continue to be so interested in Open Educational Resources generally, and open textbooks specifically, is that they can be made to meet and even exceed Bloom’s practical impact requirements – they’re things “the average teacher or school faculty can learn in a brief period of time and use with little more cost or time than conventional instruction.” Not only can faculty learn to use an open textbook instead of a commercial textbook in a very brief period of time, open textbooks are significantly less expensive for students. So open textbooks can actually attack both parts of Bloom’s challenge – they can improve outcomes (by increasing access to all students, even if the quality of the instructional design is similar) and decrease cost. While open textbooks may not be able to improve learning by two standard deviations for the same cost today, they absolutely can result in better than typical learning at significantly lower than typical cost.

For example, beginning in 2011 we helped a college in the northeast move their College Algebra course away from a $180 MyMathLab bundle to an open textbook, open videos, and a hosted and supported version of MyOpenMath – an open source platform for providing online, interactive homework practice. In Spring Semester 2011, when every section of the course used the $180 bundle, 48.4% of students passed the course. In Fall Semester 2013, after all sections of the course had transitioned to the OER and open source practice system (which Lumen Learning hosts and supports for $5 per student, paid by institutions and not students, for institutions who don’t want to host it themselves), the percentage of students passing the course grew to 68.9%.

So for a scenario like this one, the two ratios would be:

  • Old model: rg = (48.4% pass rate) / ($180 required textbook cost) = 0.27 percent passing per required textbook dollar
  • New model: rg = (68.9% pass rate) / ($5 required textbook cost) = 13.78 percent passing per required textbook dollar

The golden ratio provides a simple, intuitive way to talk about the overall impact of an educational product. It also provides a similarly straightforward way to compare the overall impact of two products. Given all the hype around learning analytics and the sophisticated analysis of big data in education, it seems amazing to me that we can’t get this basic, basic, basic level of data from vendors – the same vendors who are all megaphones and mailing lists about the advanced capabilities of their innovative new data-driven systems. (I wonder why they don’t provide it to us?)

We can also calculate an “OER impact factor” which I’ll designate w (omega for open) – the overall effect of switching from publisher materials to OER – by dividing the golden ratio for OER by the golden ratio for the previously used publisher materials:

  • w = 13.78 / 0.27 = 51.03

I think this would be an extremely interesting metric for open initiatives to explore and report.
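For readers who want to plug in their own numbers, here is a minimal sketch of the two calculations in Python. The function names are mine, not anything official; the figures are simply the ones from the College Algebra example above.

```python
# Minimal sketch of the golden ratio (rg) and OER impact factor (w) calculations.
# rg = pass rate (percent) / required textbook cost per student (dollars)
# w  = rg for the OER model / rg for the previously used publisher model

def golden_ratio(pass_rate_percent: float, textbook_cost_dollars: float) -> float:
    """Percent of students passing per required-textbook dollar."""
    return pass_rate_percent / textbook_cost_dollars

def oer_impact_factor(rg_oer: float, rg_publisher: float) -> float:
    """Overall effect of switching from publisher materials to OER."""
    return rg_oer / rg_publisher

# Figures from the College Algebra example above.
rg_old = golden_ratio(48.4, 180)       # ~0.27
rg_new = golden_ratio(68.9, 5)         # 13.78
w = oer_impact_factor(rg_new, rg_old)  # ~51 (the post divides the rounded rg values,
                                       # which is why it reports 51.03)

print(f"Old model rg: {rg_old:.2f}")
print(f"New model rg: {rg_new:.2f}")
print(f"OER impact factor w: {w:.2f}")
```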

rg, w, and the Future of the Efficacy Conversation

Obviously these models could be more nuanced. It’s not clear to me whether the additional clarity we might achieve through increasing their sophistication would be worth the potential drop in understandability. There’s an appealing simplicity to comparing percentage pass rate to required textbook cost. I’ll continue thinking about how the models might be improved (you may have some ideas as well!). And while it will take some time for me to wrap my head around the implications of these models, I think we can already draw out some interesting initial implications. Here are two examples:

  • You can never have an rg larger than 1 when the required textbook cost exceeds $100, because a pass rate can never exceed 100 percent. There’s something beautiful about that.
  • Placing cost in the denominator of rg – which by definition can never be zero – acknowledges that there is always a cost (even if it is very, very tiny) associated with using OER. It also accounts for those costs whether students pay for their materials or institutions do.

It also occurs to me that in K-12 settings, where institutions pay for textbooks and other materials instead of students, these measures might contribute to ongoing conversations about the efficient use of public funds.

I’m looking forward to thinking more about these measures and employing them in my own research (there’s no better way to understand their utility!). I think w, the OER impact factor, will be particularly useful in talking about the true impact of OER adoptions. There are some issues around the interpretability of specific values for rg and w, and I’ll need to explore the space of possible values and provide some guidance about how to interpret those, too. But overall, even if these simple models don’t end up being used by anyone other than me, I hope they inspire a more realistic discussion of efficacy across the field. There’s no efficacy without access, and access is largely a function of cost. You can’t talk (responsibly) about efficacy without simultaneously addressing cost. Bloom pointed this out 30 years ago, and rg provides one simple way of doing it. Maybe you’ll devise a better one!

Postscript. The power of open goes far beyond cost savings, a point I continue making with my work and advocacy around the 5Rs (more). However, raising that issue during this discussion of rg and w felt like it would just muddy the primary argument about the need to consider cost in addition to efficacy.

{ 11 comments }

Rewiki Makes Me Remember…

Watching Mike’s screencast of the rewiki prototype led me down memory lane to a tool we built back in the day called Send2Wiki. Here’s a summary from the extensions page at MediaWiki:

  • Provides a bookmarklet that makes it easy to send web pages to a wiki.
  • Converts web page HTML to wiki format (using html2wiki by David J. Iberri).
  • Strips chrome from web pages during the conversion process.
  • Displays information about sent articles in the MediaWiki footer.
  • Optionally translates web page to another language (using Google’s Language Tools).
  • Preserves links by converting relative links to absolute ones.
  • Autodetects license information such as Creative Commons and GFDL licenses.
  • Lets the user specify a license for the new wiki page.
  • Sends PDFs to the wiki by first converting them to HTML (using PDFTOHTML based on xpdf 2.02 by Derek Noonburg).
  • Creates didilies describing the conversion.

Basically, you set up the extension on your MediaWiki and then installed the bookmarklet in your browser. Then you could push the content from any page you’re looking at into your MediaWiki with the click of a button and a few options:

send2wiki

It looks like Mike’s rewiki work is drifting this direction. Maybe someone will get some benefit / reuse out of this old code after all!

That last bullet reminds me of my favorite tool the old COSL team ever built – scrumdidilyupmtious. Inspired by the social bookmarking tool delicious, this was a social relationship-mapping tool. Instead of helping you bookmark a single site, it helped you capture relationships between two sites. It was implemented as a browser extension and server:

didily

In the case above, you would just type “owns” or “purchased” in the pink box and then hit save. You could then visit the didily site and see “google.com owns writely.com” and all the other relationships you had expressed. More interestingly, you could search the site for a specific URL and get back all the relationships expressed about the URL by everyone, with results coming as RSS, RDF, or HTML.

Ah, we did some cool things back in the day… I still think both these ideas have legs. Hopefully some of it will prove useful!

{ 3 comments }

Clarifying the 5th R

There have been a number of responses to my decision to introduce a 5th R – “Retain” – to my 4Rs framework. Bill, Darren, and Mike have responded, among others. Some parts of the responses lead me to believe that I wasn’t entirely clear in my initial statement, so let me try to clear a few things up.

The original 4Rs were not an attempt to create a new group of permissions that open content licenses needed to support. Many open content licenses, from the CC to the GFDL to the OPL, already granted the rights to reuse, revise, remix, and redistribute long before I created the 4Rs framework. I created the 4Rs framework specifically for the purpose of helping people understand and remember the key rights that open content licenses grant them.

The right to Retain – i.e., make, own, and control – copies of openly licensed content has always been a right granted by open content licenses. Generally speaking, it is impossible to revise, remix, or redistribute an openly licensed work unless you possess a copy of the work. As Mike pointed out, the right to retain is strongly implied in open licenses, but never called out directly. Consequently, it has never been addressed directly in the discourse around open.

In adding “Retain” and arriving at the 5Rs, I maintain my original purpose for creating the 4Rs framework: to help people understand and remember the key rights that open content licenses grant them. It was becoming increasingly clear to me that both producers and users of open content were unaware of, or forgetting to consider, this critical right to Retain.

As I said above, at least 3 of the original 4Rs are impossible to do without the right to Retain. This makes Retain a fundamental or foundational right, and yet it is completely ignored in the discourse around open. This is why I felt the need to call attention to Retain, and this is why I now place it at the head of the list of 5Rs – retain, reuse, revise, remix, and redistribute.

Thinking along these lines – about how the systems we design can proactively enable people to exercise their right to Retain – has already proven extremely useful to me personally. (As I wrote about recently, we’re in the middle of designing a new OER transclusion system at Lumen.) If the right to Retain is a fundamental right, we should be building systems that specifically enable it. When you specifically add “Enable users to easily download copies of the OER in our system” to your feature list, you make different kinds of design choices. (Unfortunately, you can see all around you that many of the designers of OER systems either failed to think about how to enable users to exercise their right to Retain, or have purposely taken specific actions to prevent users from exercising it.)

Postscript. The name of my blog is “Iterating Toward Openness.” This name is meant to very publicly demonstrate my shortcomings as a thinker about and practitioner of “open.” I didn’t understand everything I needed to about open when I kicked off the open content work in 1998, and I still don’t understand enough about it today. My goal is to be constantly (if incrementally) refining and improving my understanding and appreciation of open. The change from 4Rs to 5Rs reflects one such improvement in my understanding. Who knows – maybe another seven years from now I’ll add a 6th R.



http://opencontent.org/blog/page/8

via Mike Caulfield

I recently received the excellent news that I will receive another year of support as a Shuttleworth Fellow. These fellowships are extremely generous and I’m incredibly grateful for the foundation’s vote of confidence in the work I’m doing supporting widespread OER adoption through Lumen Learning. As many of you know, Shuttleworth Fellows also have the opportunity to pitch the Foundation for project funding. The foundation has also chosen to support our project proposal this year, and I’m extremely excited to start sharing the idea we’re working on with the community.

Over the past year Lumen has made great strides in promoting OER adoption, delivering OER and related services to thousands of students at dozens of institutions. However, we’ve struggled to find a highly scalable manner of doing so – for a very specific reason. We started out hosting our Open Course Frameworks in the Canvas learning management system because it’s both the best LMS out there (in my opinion) and is openly licensed. But over and over again we heard from faculty that they don’t want to send students out of their school’s official learning management system and into a second system from which OER are delivered, because this multiplicity of LMSs confuses students (and faculty). Instead, students and faculty want OER to be delivered within their institution’s learning management system. In order to meet this demand we spent much of the past year helping rebuild the same OER-based courses over and over again within several partner schools’ learning management systems. Obviously this process cannot scale to support thousands of schools and millions of students – which is the scale at which OER needs to operate in order to make a significant impact on the affordability and quality of education.

Beginning in January 2014 we launched a prototype of a new approach designed specifically to provide OER and related services to multiple schools in multiple learning management systems. In this new model, we host OER centrally and transclude it directly into multiple LMSs via the Learning Tools Interoperability (LTI) standard. With this transclusion technique, we’re able to host, manage, and improve courses in a single platform and make the content “magically” appear directly inside multiple schools’ learning management systems. The prototype is currently in use by multiple schools in multiple LMSs and has been extremely successful, providing us with a path to drastically improve our ability to scale the impact of OER by making them significantly easier to use.
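To make the transclusion mechanics a little more concrete, here is a minimal sketch of what an LTI tool provider endpoint can look like. To be clear, this is my own illustration and not Lumen’s actual code: the route, the OER_PAGES mapping, and the custom_page parameter are hypothetical, and a real implementation must also verify the OAuth signature the LMS sends with each launch, which I’ve omitted for brevity.

```python
# Minimal sketch of an LTI 1.1-style tool provider that serves centrally hosted
# OER into an LMS. Assumes Flask is installed; OAuth signature verification of
# the launch request is intentionally omitted here.
from flask import Flask, request, abort

app = Flask(__name__)

# Hypothetical mapping from LTI placements to centrally hosted OER pages.
OER_PAGES = {
    "algebra-unit-1": "<h1>Unit 1: Linear Equations</h1><p>Openly licensed content…</p>",
}

@app.route("/lti/launch", methods=["POST"])
def lti_launch():
    # Basic LTI launches arrive as form POSTs with standard parameter names.
    if request.form.get("lti_message_type") != "basic-lti-launch-request":
        abort(400, "Not an LTI launch request")

    # custom_page is a hypothetical custom parameter configured in the LMS to
    # say which piece of OER this placement should display.
    page_id = request.form.get("custom_page", "")
    html = OER_PAGES.get(page_id)
    if html is None:
        abort(404, "Unknown OER page")

    # Returning HTML here is what makes the content appear "inside" the LMS:
    # the LMS renders the tool's response in an iframe on the course page.
    return html

if __name__ == "__main__":
    app.run(port=5000)
```

The key design point is that each LMS only stores a launch URL; the content itself lives in one central place, so improving it there immediately benefits every course, in every LMS, that points at it.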

So, for our Shuttleworth funded project we are going to create an openly licensed, high quality, highly scalable version of this functionality. What we’re calling the “Candela” platform will make OER easily usable in all LMSs (as well as environments supporting other learning world views, like PLE tools) that support LTI.

We’re building the Candela platform on top of WordPress, and expect that this will facilitate a huge number of synergies both anticipated and unanticipated. We’re already looking at integrating a few key tools and plugins that will drastically improve the end user experience – tools like Open Embeddable Assessments for in situ formative assessment and Hypothesis for highlighting and annotating. Types and Views look like promising ways to support CC licensing and inline attribution, as well as fine-grained alignment of OER with learning outcomes. And Candela will give us an excellent context for experimenting with the idea that learners should be able to own their learning content and learning data (full-course content export that preserves your notes and highlights, anyone?). And given all the energy and momentum in the open ed community around WordPress I can’t imagine what kinds of unforeseen random goodness will come in the future.

Finally, I’m super excited to announce that Lumen is working with FunnyMonkey on Candela development. I’ve been hoping for a chance to work with Bill for years now, and at last we have one.

Overall, I’m extremely excited about this work and the degree to which it will enable us to make OER easy for faculty to adopt and adapt and easy for students to learn from and own forever. I’d love to hear your feedback on our approach… Thoughts? What are we missing? How should we be working together?

{ 3 comments }

The Access Compromise and the 5th R

It’s been seven years since I introduced the 4Rs framework for thinking about the bundle of permissions that define an open educational resource, or OER. The framework of permitted activities – reuse, revise, remix, and redistribute – has gained some traction in the field, and I’m happy that people have found it useful. The 4Rs play a critical role in my own thinking about OER, and my operational definition of OER now includes two main criteria: (1) free and unfettered access to the resource, and (2) whatever copyright permissions are necessary for users to engage in the 4R activities. But while the framework has served the field well – and has shaped my own thinking, too – I believe the time has come to expand it.

A year ago I wrote a piece on adaptive instructional systems, and how publishers are moving away from selling content to leasing access to services as a way of responding to the threat to their business models posed by open educational resources. I called it an “attack on personal property”:

When you own a copy, the publisher completely loses control over it. When you subscribe to content through a digital service (like an adaptive learning service), the publisher achieves complete and perfect control over you and your use of their content.

Over the last year my thinking about the attack on personal property has slowly expanded and generalized to include not just publishers, but our own campuses as well. Last month I wrote about “disappearing ink,” a way of characterizing the way that post-secondary institutions are trying to increase the affordability of required textbooks by decreasing student access to them. Specifically, campuses have initiated a number of programs like textbook buyback, textbook rental, digital subscription programs, and DRM-laden ebook programs, each of which results in students completely losing access to their required textbooks at the end of term. The more I’ve pondered the disappearing ink strategy, the more it has bothered me. I can understand commercial publishers acting in a way that favors business over learning, but not our campuses.

The Access Compromise

Earlier this week I had the opportunity to speak to a group of librarians at the annual SPARC conference. As I prepared for that talk, and after a great conversation with Nicole Allen of SPARC, I began thinking about this broader problem from the library perspective. I slowly came to see that libraries represent a compromise made centuries ago under a different set of circumstances.

There was a time before the invention of the printing press when books were unfathomably expensive – costing a full year’s wages or more for a single volume. In this historical context where ownership of books by normal people was utterly impossible – unimaginable, even – we compromised. We said, let’s gather books together in a single place and provide access to them. That access was limited to the privileged at first, but over time we have slowly but surely worked to democratize access to books through libraries.

Foregoing the idea of ownership and instead promoting the idea of access made sense in a world where books were incredibly scarce and new copies were too expensive for anyone but royalty to commission. However, in a world where books, journal articles, and other educational resources can be copied and distributed instantly and at essentially no cost, the “access compromise” doesn’t seem like such a bargain anymore.

Unfortunately, in the higher education textbook market we see this historical story playing in reverse. Books that were once affordable enough to be owned by students have climbed in price to a point where we find our own institutions trying to persuade students to make the access compromise. That should have been the trigger. It’s past time to turn the higher education textbook conversation away from access and back to personal ownership and individual control of learning content.

The 5th R

Which brings us back to OER. There is no possible short- or medium-term future in which commercial publishers do what is economically and technically necessary to make it possible for students to actually own their learning content. This means that any advances toward ownership will have to come from the field of open education.

Unfortunately, we in the field of open education have completely bought into the access compromise. There’s not a single definition of OER I’m aware of – including my own – that speaks directly to issues of ownership. Yes, ownership is sort of implied in the “reuse” R, and is legally permitted by open licenses. But for all of their willingness to share access to open educational resources, how many OER publishers go out of their way to make it easy for you to grab a copy of their OER that you can own and control forever? How many OER publishers enable access but go out of their way to frustrate copying? How many more OER publishers seem to have never given a second thought to users wanting to own copies, seeing no need to offer anything beyond access?

This leads me to feel that the time has come to add a 5th R to my framework – “retain.” Hopefully this 5th R will elevate the ownership conversation in the open education community, allowing us to talk about it explicitly and begin the work necessary to support and enable it directly.

The 5Rs of Openness

– Retain – the right to make, own, and control copies of the content
– Reuse – the right to use the content in a wide range of ways (e.g., in a class, in a study group, on a website, in a video)
– Revise – the right to adapt, adjust, modify, or alter the content itself (e.g., translate the content into another language)
– Remix – the right to combine the original or revised content with other open content to create something new (e.g., incorporate the content into a mashup)
– Redistribute – the right to share copies of the original content, your revisions, or your remixes with others (e.g., give a copy of the content to a friend)

{ 16 comments }

Long before an upstart Harry headed to Hogwarts, Sparrowhawk went to the School of Roke in Ursula K. Le Guin’s A Wizard of Earthsea. As part of his schooling, Sparrowhawk:

was sent with seven other boys across Roke Island to the farthest north-most cape, where stands the Isolate Tower. There by himself lived the Master Namer, who was called by a name that had no meaning in any language, Kurremkarmerruk. No farm or dwelling lay within miles of the tower. Grim it stood above the northern cliffs, grey were the clouds over the seas of winter, endless the lists and ranks and rounds of names that the Namer’s eight pupils must learn. Amongst them in the tower’s high room Kurremkarmerruk sat on the high seat, writing down lists of names that must be learned before the ink faded at midnight leaving the parchment blank again.

I find it deeply unsettling that publishers, startups, and college and university bookstores have turned to “disappearing ink” as the core of their textbook affordability strategies. Whether we’re talking about textbook buyback programs, textbook rental programs, relying on textbooks from the library’s collection, subscribing to ebook services, or even borrowing textbooks from friends, each and every one of these approaches to improving textbook affordability does so by stripping students of their core educational resources at the end of term (or sooner).

I’ve written about the disappearing ink problem before in the narrow context of adaptive educational systems, but as I’ve pondered Sparrowhawk’s plight I’ve come to understand that this problem of students losing access to their core educational resources (1) is not new to the world of digital educational materials, (2) is propagated by publishers in almost all of their digital content and not just in their adaptive platforms, and (3) is something that our institutions are actually promoting through their various textbook affordability programs.

(I leave it as an exercise to the reader to ponder the many messages our institutions send when they tell students they only need to keep the textbooks from their “important classes.”)

Over the last several weeks this has led me to think about a benefit of open educational resources I had probably under-appreciated before. When the core instructional materials for courses are open educational resources (OER), we can provide more than free and open access to course materials – we can provide free copies of course materials to students. In a world where links break, services get retired, and organizations change business models, we should be doing more than providing students with free and unfettered access to OER – we should also be offering them easy-to-download and easy-to-use copies of OER. Copies they can own, and keep forever.

As it rattled around inside my tiny brain, the simple thought that “students should own their learning content” jarred another thought loose. If students can own their learning content, why can’t they own the history of their interactions with that content? In other words, why can’t they own a copy of the raw analytics data generated as they used that learning content? There are no technical or legal reasons students cannot own a copy of these data, only stupid, kludgey, protectionist, “business model” reasons. Among the myriad reasons this would be A Good Idea, if students had access to their own learning data the potential for an explosion of “personal learning analytics” tools would be incredible.

I’m increasingly persuaded that to truly empower learners and learning, we need to shift away from the culture of leasing content and hoarding data to a culture where learners are easily able to own copies of their learning content and learning data. It’s not enough for them to have free and unfettered access, we must enable students to own their own personal copies of them. Ownership matters, desperately.

Look to hear a lot more from me on the topic of “learners owning their learning content and learning data” as the year progresses.

PS. Maybe these are thoughts you’ve already had before. Terrific! Would you leave a link in the comments so I can read your thinking about issues of student ownership of learning content and learning data? I’m far less interested in being “first” to have the idea than I am interested in implementing these ideas in ways that benefit students.



http://opencontent.org/blog/page/9

On OER and College Bookstores

Occasionally an initial concern for institutions considering a major OER initiative is, “What will happen to the revenue the college has traditionally received from the bookstore?”

In 2011, NACS (the National Association of College Stores) released this infographic showing where the money students spend on textbooks goes (click the image to link to the original):

The Textbook Dollar

According to NACS, the average college bookstore’s pre-tax income on textbooks is 3.7% of the price of the book. In other words, when a student spends $150 on a biology textbook, the college “makes” $5.55.

According to NACS, the publisher gets 77.4% of the price paid by the student. Another 10.7% of the price of the book is tied up in bookstore personnel costs supporting (to use their examples) ordering, receiving, shelving, refunding, sending extras back to the publisher, cashiers, and customer service. A further 1% of the price of the textbook is consumed by physically shipping the book from the publisher to the store. The final 7.2% of the price of the textbook goes to bookstore overhead.

Now, OER are born digital, and are completely free for students to access and use. There’s no publisher’s percentage, no ordering, no shipping, no receiving, no shelving, no cashiers, no refunds, no sending extra books back to a publisher, etc. But there’s also no revenue returned to the college by the bookstore. So while OER are great for students, they’re the natural born enemy of the bookstore, right?

Actually, I think there’s an opportunity for very productive collaboration between campus-wide OER initiatives and bookstores. Specifically, I think there’s a huge opportunity for bookstores to offer optional print-on-demand to students when faculty adopt OER in place of commercial textbooks.

For example, a single copy of a 500 page open biology textbook ordered print-on-demand from CreateSpace costs $6.85. Let’s round up and say it costs the book store $10 to provide a printed copy of this same title through local means.

Let’s assume that the personnel costs associated with providing print-on-demand are the same as the personnel costs of ordering, shipping, receiving, shelving, refunding, and sending books back (they’re likely not – they’re likely lower – but let’s pretend they’re the same). Let’s also assume that the overhead rate the store needs to charge per book remains the same. We would then need to add 7.2% + 10.7% to the cost of our open biology textbook, so $10 + $0.72 + $1.07 makes a new grand total of $11.79 for the book. Let’s be generous and round up to $12.

Finally, instead of the 3.7% pre-tax profit, let’s say that the college absolutely has to have the same $5.55 profit on this book that they used to enjoy on the $150 book, so that the college doesn’t “lose money” because of their OER initiative. Actually, let’s be generous and round up again – let’s give the college $6 in profit. $12 + $6 = $18.

Here’s the insane part: the college bookstore actually makes more pre-tax profit on the $18 print-on-demand open textbook than they do on the $150 publisher biology book, while earning the same per-book percentages for overhead and personnel.

Now, I’m sure it’s slightly more complicated than this back of the envelope sketch, but the point is that creative bookstores who want to proactively partner with local OER initiatives can provide a real service to students (i.e., optional, aggressively priced print-on-demand versions of OER) and provide the same amount of (or more) revenue back to their college.
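For anyone who wants to experiment with their own print cost, overhead rate, or target profit, here is the same back-of-the-envelope arithmetic as a few lines of Python. The percentages are the NACS figures cited above; the print-on-demand inputs are the rounded assumptions from my example, not real bookstore data.

```python
# Back-of-the-envelope comparison of bookstore economics for a $150 publisher
# textbook versus an ~$18 print-on-demand open textbook. The overhead and
# personnel percentages are the NACS figures cited above, applied (as in the
# example) to the $10 local print cost; everything else is rough rounding.

# Publisher textbook
publisher_price = 150.00
store_pretax_income = 0.037 * publisher_price   # 3.7% of price -> $5.55

# Print-on-demand open textbook
print_cost = 10.00                 # rounded up from ~$6.85 at CreateSpace
overhead = 0.072 * print_cost      # same 7.2% overhead rate -> $0.72
personnel = 0.107 * print_cost     # same 10.7% personnel rate -> $1.07
pod_cost = print_cost + overhead + personnel   # $11.79, rounded up to $12
pod_price = 12.00 + 6.00           # add a generous $6 profit -> $18

print(f"Store income on the $150 textbook: ${store_pretax_income:.2f}")
print(f"Print-on-demand cost to the store:  ${pod_cost:.2f}")
print(f"Student price with $6 profit:       ${pod_price:.2f}")
```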

{ 2 comments }

Lumen Learning / OpenStax Partnership

I’m SUPER excited today to announce a new partnership between OpenStax College, which makes great open textbooks, and Lumen Learning, which provides a wide range of services to institutions that want help adopting OER successfully and sustainably. For all the details, check the full announcement.

Lumen also released a new video today that explains what we do and why we do it, and also includes perspectives from several of our partner institutions about how they’re using OER and what it’s like to work with Lumen.

{ 0 comments }

I recently needed to quickly create a map of higher education institutions Lumen is working with, and consequently needed LAT and LONG info for dozens of schools. Rather than do that all by hand, I created this little recipe for automatically retrieving coordinates given a school’s name using the Google Maps API and Google Spreadsheets. Here’s a demonstration of the recipe using a list of all the higher education institutions where I’ve taught:

https://docs.google.com/spreadsheet/ccc?key=0Aq0aF_AqiIz9dFRvRVd2Y0JVbHh6WllqcGVCS2VnbWc&usp=sharing

I fully realize I’m no Tony Hirst, but thought this was interesting enough to share.
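If you’d rather script the lookup than do it in a spreadsheet, here is a minimal sketch of the same idea in Python using the Google Maps Geocoding web service. The endpoint and response fields are as I understand that API; you’d need to supply your own API key, and the two school names are only illustrations.

```python
# Minimal sketch: look up latitude/longitude for a list of schools using the
# Google Maps Geocoding API. Assumes the `requests` package is installed and
# that GOOGLE_MAPS_API_KEY is set in your environment.
import os
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"
API_KEY = os.environ["GOOGLE_MAPS_API_KEY"]

schools = [
    "Brigham Young University",
    "University of Utah",
]

for school in schools:
    resp = requests.get(GEOCODE_URL, params={"address": school, "key": API_KEY})
    results = resp.json().get("results", [])
    if results:
        loc = results[0]["geometry"]["location"]
        print(f"{school}\t{loc['lat']}\t{loc['lng']}")
    else:
        print(f"{school}\tnot found")
```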



http://opencontent.org/blog/page/10

Taking a Leap of Faith

Exactly a year ago today I published a post about some exciting changes in my professional life. I had just applied for a 12-month unpaid leave of absence from BYU. My goal was to spend the time away focused on supporting and scaling the adoption of open educational resources (OER) in formal education. Specifically, I wanted to help institutions that serve at-risk students – like community colleges – use OER to eliminate textbook costs and improve student success. Kim Thanos and I had formed Lumen Learning in October for exactly this purpose. Then I got the incredible news that I’d received a Shuttleworth Fellowship. And then my leave was approved. Thus began a year of awesomeness.

Institutions and individuals have been creating and sharing OER for over a decade, and millions of people have used these materials in informal settings. As we’ve seen from OCW visitor surveys to MOOC demography, people who seek out free, online, informal learning opportunities tend to be people who already have a significant amount of formal education. And while I’m excited for the additional growth these people are able to achieve, one of my primary interests in OER has always been their capacity to decrease the cost and increase the quality of formal education for at-risk students. In a world of over half a billion OER, why are students still dropping out of community colleges because they can’t afford $170 textbooks?

Because faculty aren’t adopting OER for their classes. Plain and simple.

There are incredible collections of OER (e.g., OpenStax), there are incredible indices of OER supported by rich, professionally curated metadata (e.g., OER Commons), there are even great tools for creating OER mashups (e.g., OpenTapestry). But, in their own way, each of these efforts is underpinned by an “if we build it they will come” philosophy. If we just make the content sufficiently high quality, if we just make it easy enough to find, if we just make it easy enough to remix, faculty will adopt OER in their classrooms. Don’t get me wrong – there are some faculty who have the necessary time, prerequisite skills, and hacker ethic to do it themselves (I would like to believe that I’m one of them). But people with this particular configuration of opportunity, means, and motive are the overwhelming minority of higher education faculty. By the end of 2012 it had become clear that if OER adoption was ever going to happen at any scale, someone needed to get on a plane, go to campus, and train people. So that’s what the Lumen team did in 2012.

It’s grueling, blissful work. The travel is relentless (I flew 99 legs in 2013). Waking up and not being able to remember where you are is depressing. Time away from family is an incredible sacrifice. But along the way I also rediscovered how much I love working with faculty – seeing the excitement on their faces, hearing it in their voices. I had forgotten how rewarding it is. And then you get to hear the stories about the amazing ways students are affected…

Sometimes a half-day meeting with campus leadership and a one and a half day faculty workshop is enough. For bigger schools with a well-staffed Center for Teaching and Learning, some initial training is often all they need. Some schools are even able to take advantage of our open course frameworks without ever working with us directly. And that’s awesome – it’s right in line with our “RedHat for OER” philosophy. The open course frameworks are openly licensed and we don’t – and will never – charge for content. To my mind, every school that adopts OER without working with Lumen is another win for students that didn’t cost me an airplane ride.

But for thousands of smaller schools, a single campus visit and faculty workshop isn’t nearly enough to help them use OER successfully to support student learning. These faculty need ongoing support, both before and during the semester. They need a mix of different kinds of support – technical support, licensing support, and pedagogical support. And it turns out that the particular configuration of training and support that faculty actually need is fairly different from what we had imagined during our early whiteboard sessions. So in addition to traveling nonstop, we’ve also been learning nonstop.

Over this past year it’s become pretty clear why no one was offering this level of support to schools and faculty. It’s HARD! I don’t know if it would be possible to do if you didn’t really love students, love faculty, and believe in the transformative power of education and of open. (Did I mention the Lumen team are incredible?) But it’s paid off – over the last 12 months we’ve made huge progress at expanding OER adoption:

  • In December 2012 we were supporting open education initiatives at eight institutions. Today that number is over 30.
  • In fall semester 2013, Lumen-supported open courses saved students more than $700,000.
  • Preliminary data indicate that pass rates in OER-based courses are at least equal to, and in some cases much higher than, pass rates in courses using expensive publisher textbooks.
  • On campuses where we work, students are asking academic leaders to offer more courses using OER.
  • An exciting pattern is emerging: One conversation about open education leads to a pilot of 1-3 courses using OER, and that pilot then leads to all sections of those courses switching to OER the following semester.
  • Tidewater Community College, which we helped develop a completely OER-based Associate Degree in Business Administration, has just been named a finalist for the prestigious Bellwether Award for that work.

From my perspective down here, on the ground, I can see the momentum around OER adoption growing with my own eyes. It’s actually impacting students’ lives. It’s extremely exciting. And I couldn’t possibly walk away from it – not now that the ball is finally rolling.

I’ve made the incredibly hard decision to leave my full-time, tenured faculty position at BYU. For the foreseeable future, I’m going to focus my professional time and energy on providing on-the-ground support for OER adoptions. I’m so grateful to be able to continue this work with Kim and the rest of the Lumen Learning team – they are simply awesome, and I wouldn’t even be trying without them. And I’m extremely grateful to be able to continue the work with a second year of Fellowship funding from the Shuttleworth Foundation, whose support has been one of the keys to our success.

I’m not leaving academia altogether, however. I’m very excited to have accepted an appointment as Scholar in Residence at the University of Utah, in the Teaching and Learning Technologies group, where I’ll be able to continue the research end of my work on using openness to increase the quality and affordability of education. I’m also hoping to continue my relationship with BYU as an adjunct. These arrangements allow me to achieve the right mix of research and teaching necessary to support the success of my broader OER adoption work. Without the research component, any claims of success in helping faculty use OER effectively would feel like empty hype. And without the ability to teach my Intro to Open Education course occasionally, I can’t evangelize, identify, and prepare the additional people the field of open education needs so desperately. The small number of people in the world with deep expertise in open education just isn’t sufficient to get the job done.

This process of professional reconfiguration has been long, challenging, and invigorating. I decided early on to reject the model of being the professor with a side project who’s never on campus. That’s neither fair to the work, which is so much more than a side project, nor to my students and colleagues, who deserve the full attention of a fulltime person. The question then became, how can I structure my professional life in a way that maximizes my ability to do the work that needs doing? I’m sure that the configuration will continue to evolve, but for now it feels like I’ve found a combination that allows me to be as effective as I possibly can.

All that said, this is a terrifying leap to make. But I continue to feel a deep, abiding sense of responsibility and commitment to this work – a strong sense that there’s more here that needs doing, and that I’m somehow peculiarly prepared to do it. I’m constantly humbled and grateful that I even get to be part of it. These feelings give me the courage to follow the work wherever it leads, terrifying or otherwise. And did I mention how challenging, exciting, and fun it is? Or how much good it feels like we’re accomplishing?

So I’m taking a leap of faith. As Churchill said, “It is no use saying, ‘We are doing our best.’ You have got to succeed in doing what is necessary.” We have to do what’s necessary to improve the affordability and quality of education for millions of students. And now, working with the team at Lumen Learning, I will.

{ 18 comments }

Great news for Tidewater Community College, one of Lumen Learning‘s first partner schools:

Tidewater Community College’s textbook-free degree in business, which was launched as a pilot program with the Fall 2013 semester, is a finalist for a national Bellwether Award, given annually by the Community College Futures Assembly.

Among more than 400 applicants in three categories, TCC was selected as one of 10 finalists in the “instructional programs and services” category. All finalists will present their programs Jan. 27 at the Community College Futures Assembly in Orlando, Fla., and winners will be announced the next day at the group’s annual meeting.

TCC launched its “Z Degree” – “z” for zero textbook cost – to ease the pain of soaring textbook costs for college students. It partnered with Lumen Learning, a Portland, Ore.-based company that helps educational institutions integrate open educational resources into their curricula.

Read more from the TCC press release, TCC NO-TEXTBOOK DEGREE A FINALIST FOR NATIONAL AWARD.

{ 0 comments }

This week I’m participating in a conversation about badges over on the Department of Education’s LINCS website. I believe badges are potentially a key piece of infrastructure necessary to support truly open, distributed learning, but I’m frequently disappointed by the level of thoughtfulness of the discourse around badges. There’s much to learn about badges by looking to the history of other technologies, as I’ve tried to point out in my answers to the first two question prompts. [click to continue…]



http://opencontent.org/blog/page/11

This article was originally written by Steven Seidenberg and published on the site Intellectual Property Watch. IP Watch requires you to create an account to read their CC BY-NC-ND licensed articles. This annoyed me, so I am reposting the article here.

The US Supreme Court yesterday let stand an important appellate court ruling on copyright law, giving a boost to artists who repurpose others’ works and to supporters of fair use rights. This decision, however, upset many copyright owners, who fear it will allow their works to be used without payment and without their consent.

The Supreme Court didn’t decide the case on its merits. Instead, the court simply refused to review the Second Circuit Court of Appeal’s decision in Cariou v. Prince.
[click to continue…]

{ 0 comments }

Hypothesis Integration

I’m currently in Edinburgh at the semi-annual Shuttleworth Foundation Gathering. One of the other Fellows, Dan Whaley, is working on a killer open source annotation and highlighting tool called Hypothesis. You should absolutely check it out.

I’ve enabled Hypothesis on my blog now (via the companion WordPress plugin!). If you want to make comments on specific words or phrases in my posts (instead of making a comment on the entire post), just highlight a word or phrase and then click on the pen icon that pops up. I’ll be keen to see what – if anything – you do with this new capability. Please annotate posts on their permalink pages rather than annotating them on the front page.

If you want to see all the comments people have made around the site, check out https://hypothes.is/stream/#?uri=opencontent.org. An RSS feed for the stream is on the roadmap, and I’ll incorporate that into the site as soon as it’s available.

{ 1 comment }

Tom Reeves on Things and Problems

I’m at AECT this week, the annual meeting of the professional association for academic educational technologists and instructional designers. This is my 15th year attending the conference, and (with the exception of the Open Education Conference) this is my favorite conference each year. These are “my people,” and so I was much more nervous than usual when invited to give a keynote address here.

Ali Carr-Chellman, Tom Reeves, and I participated yesterday in “AECTx,” a keynote session in which we each gave 18-minute talks. Without coordinating ahead of time, each of our talks focused on using educational technology and educational research to solve large societal problems. I was particularly taken with the clarity of Tom’s formulation. Two slides near the end of his presentation admonished us that we need to:

Stop Focusing Research on Things

  • Learning Analytics
  • Mobile Learning
  • Online Learning
  • 3D Printing
  • Games and Gamification
  • Wearable Technology
  • The Internet of Things
  • Machine Learning
  • Virtual Assistants
  • Immersive Learning

Start Focusing Research on Problems

  • Poverty
  • Primary education
  • Racism
  • Sexism
  • Child Abuse
  • Crime
  • Lack of literacy
  • Poor motivation
  • Hopelessness
  • Obesity

Tom joked about “research” on the impacts of iPads on learning, and other thing-focused research. While he excluded them (perhaps as a professional courtesy to me as his co-presenter), “open educational resources” very clearly belong on the “Things” list. Openness is, and always will be, a means rather than an end. The moment we allow the means to become the end, we sacrifice the true end on the altar of zealotry.

Of course, I would argue that “Affordability,” “Access,” and “Completion” belong on the “Problems” list. Those are the problems I’m working on, and it’s good to be reminded and recentered from time to time. I think we all need that.

Tom ended his presentation with this quote from Charles Desforges:

“The status of research deemed educational would have to be judged, first in terms of its disciplined quality and secondly in terms of its impact. Poor discipline is no discipline. And excellent research without impact is not educational.”

I’ve long been inspired by Tom’s pleas for “socially responsible research” in education, and it was great to hear that message straight from the source. I thought the whole AECTx was a terrifically inspiring session.



http://opencontent.org/blog/page/12

What is Open Pedagogy?

Hundreds of thousands of words have been written about open educational resources, but precious little has been written about how OER – or openness more generally – changes the practice of education. Substituting OER for expensive commercial resources definitely saves money and increases access to core instructional materials. Increasing access to core instructional materials will necessarily make significant improvements in learning outcomes for students who otherwise wouldn’t have had access to the materials (e.g., couldn’t afford to purchase their textbooks). If the percentage of those students in a given population is large enough, their improvement in learning may even be detectable when comparing learning in the population before OER adoption with learning in the population after OER adoption. Saving significant amounts of money and doing no harm to learning outcomes (or even slightly improving learning outcomes) is clearly a win. However, there are much bigger victories to be won with openness.

Using OER the same way we used commercial textbooks misses the point. It’s like driving an airplane down the road. Yes, the airplane has wheels and is capable of driving down the road (provided the road is wide enough). But the point of an airplane is to fly at hundreds of miles per hour – not to drive. Driving an airplane around, simply because driving is how we always traveled in the past, squanders the huge potential of the airplane. So what is the analogous additional potential of open educational resources, compared to commercial textbooks and other commercial resources? OER are:

  • Free to access
  • Free to reuse
  • Free to revise
  • Free to remix
  • Free to redistribute

The question becomes, then, what is the relationship between these additional capabilities and what we know about effective teaching and learning? How can we extend, revise, and remix our pedagogy based on these additional capabilities? There are many, many potential answers to this question. Here’s one example.

Killing the Disposable Assignment

If you’ve heard me speak in the last several months, you’ve probably heard me rail against “disposable assignments.” These are assignments that students complain about doing and faculty complain about grading. They’re assignments that add no value to the world – after a student spends three hours creating it, a teacher spends 30 minutes grading it, and then the student throws it away. Not only do these assignments add no value to the world, they actually suck value out of the world. Talk about an incredible waste of time and brain power (and a potentially huge source of cognitive surplus)!

What if we changed these “disposable assignments” into activities which actually added value to the world? Then students and faculty might feel different about the time and effort they invested in them. I have seen time and again that they do feel different about the efforts they make under these circumstances.

But which effective practices specifically might we remix in order to kill the disposable assignment? I love John Hattie’s book Visible Learning as a source for finding effective practices. In the book Hattie compiles findings across over 800 meta-analyses of 50,000 studies of 80,000,000 students to arrive at average effect sizes for over 130 influences on learning, including student influences, teacher influences, teaching influences, and school influences. Here are a few that resonate with me, together with their effect sizes as estimated by Hattie, a brief description, and page numbers from the first edition:

Teacher Student Relationships = 0.72
“Developing relationships requires skills by the teacher – such as the skills of listening, empathy, caring, and having positive regard for others…. Teachers should learn to facilitate students’ development by demonstrating that they care for the learning of each student as a person and empathizing with students.” Pp. 118-119.

Teacher Clarity = 0.75
Clarity – as rated by students (not other teachers) – in “organization, explanation, examples and guided practice, and assessment of student learning.” P. 126.

Worked Examples = 0.57
“Worked examples reduce the cognitive load for students such that they concentrate on the processes that lead to the correct answer and not just providing an answer.” P. 172.

Organizing and Transforming = 0.85
“Overt or covert rearrangement of instructional materials to improve learning. (e.g., making an outline before writing a paper)…. The types of strategies included in this category (such as summarizing and paraphrasing) promote a more active approach to learning tasks.” Pp. 190-191.

Feedback = 0.73
Pp. 173-178.

Reciprocal Teaching = 0.74
“The emphasis is on teachers enabling their students to learn and use cognitive strategies such as summarizing, questioning, clarifying, and predicting…. The effects were highest when there was explicit teaching of cognitive strategies before beginning reciprocal teaching.” P. 204.

An Example of Open Pedagogy

When you can assume that all the materials you’re using in and with your class are open educational resources, here’s one way to remix the effective practices listed above with OER in order to provide you and your students with opportunities to spend your time and effort on work that makes the world a better place instead of wasting it on disposable assignments.

  • Begin by establishing relationships of trust with students. You’re about to ask them to do something they’ve probably never tried before. They won’t follow you if they don’t trust you.
  • Provide a clear description of the assignment – students will revise and remix the core instructional materials of the class (which are OER) with other OER and with their own original work in order to create a small tutorial (in any medium) on a topic that students in the course generally struggle with. They will then use their tutorial to teach the topic to one of their peers. The best tutorials will be integrated into the official OER collection or open textbook for use by other students starting next semester.
  • In addition to a clear description of the assignment, you should also provide a detailed description of how the assignment will be graded and / or examples of high quality student work.
  • Show a variety of worked examples. If this is the first time you’re using this valuable assignment, use the OER that you’ve compiled to support student learning as your examples. Talk students through the process of selecting existing resources and remixing them into something that specifically supports their learning. If you have existing student work that you can show, even better.
  • Invite students to engage in the remix activity (aka organizing and transforming) with an eye toward their upcoming peer tutoring interactions (using strategies like summarizing, questioning, and clarifying in the design of their remix).
  • Provide constructive feedback to students on their remix and invite them to revise their tutorials.
  • Once the revisions are complete, invite students to engage in the reciprocal teaching experience. After reciprocal teaching, invite the students to make a final round of revisions based on their partner’s experience with the materials.
  • After your review, publicly congratulate the students whose tutorials will be integrated into the official course materials for next semester.

This assignment clearly leverages the reuse, revise, remix, redistribute permissions of open educational resources in order to enable students to extend and improve the official instructional materials required for the course. Because students know their work will be used both by their peers and potentially by future generations of students, they invest in this work at a different level. Because the assignment encourages them to work in any medium they prefer, students pick something they’ll enjoy, which leads them to invest at a different level. Because any one of these remixes might end up helping next semester’s students finally grasp the concept that has proven so difficult in the past, faculty are willing to invest in feedback and encouragement at a different level.

Examples of Student Work in the Context of Open Pedagogy

I’ve been iterating over a version of this approach for several years now. While nothing is universally effective, it tends to result in insanely awesome student work. An early version of this assignment back in 2007 brought you Kennedy and Nixon debating the merits of blogs and wikis, Rick Noblenski: Blasting Caps Expert and Wiki Advocate, and a father and son confrontation over District Policies Regarding Blogs and Wikis.

Later versions of this assignment brought you versions of the open textbook Project Management for Instructional Designers, which now includes multiple video case studies; completely rewritten examples in-text; alignment with the Project Management Professional certification exam; an expanded glossary; and downloadable HTML, PDF, ePub, MOBI, and MP3 versions of the book (among other improvements). The book is also used as the official course text at at least one other university.

Of course I’m not the only one experimenting with these kinds of assignments – Murder, Madness, and Mayhem: Latin American Literature in Translation is another one of my favorites (see this essay for a description).

Defining Open Pedagogy

What makes this assignment an instance of open pedagogy instead of just another something we require students to do? As described, the assignment is impossible without the permissions granted by open licenses. This is the ultimate test of whether or not a particular approach or technique can rightly be called “open pedagogy” – is it possible without the free access and 4R permissions characteristic of open educational resources? If the answer is yes, then you may have an effective educational practice but you don’t have an instance of open pedagogy. Open pedagogy is that set of teaching and learning practices only possible in the context of the free access and 4R permissions characteristic of open educational resources.

{ 7 comments }

On Quality and OER

As I travel the country (and the world) telling people about open educational resources, open textbooks, etc., I frequently receive questions about the quality of openly licensed instructional materials. I’ve answered this question enough that I thought it might be time to actually write something on the topic.

A Tiny Thought Experiment

Imagine you had a favorite textbook (hey – it’s a thought experiment). Now imagine receiving a letter informing you that the author has passed away and left you all the copyrights to the book. You immediately walk across the room and pull your copy off the shelf and open to the copyright page. You carefully cross out the words “All Rights Reserved” and replace them with the words “Some Rights Reserved – this book is licensed CC BY.” Have you changed the quality of the book in any way? No. Simply changing the text on the copyright page does not change the rest of the book in any way.

Consequently, we learn that quality is not necessarily a function of copyright status. We are forced to admit that it is possible for openly licensed materials to be “high quality.” We are also forced to admit that taking poor quality instructional materials and putting an open license on them does not improve their quality, either.

No Monopoly on Quality

Because quality is not necessarily a function of copyright status, neither traditionally copyrighted educational materials nor openly licensed educational materials can exclusively claim to be “high quality.” There are terrific commercial textbooks and there are terrific OER. There are also terrible commercial textbooks and terrible OER. Local experts must vet the quality of whatever resources they choose to adopt, and cannot abdicate this responsibility to publishing houses or anyone else.

Accuracy and OER

Some people are unable to believe that any process other than traditional peer review, licensing, and publication can result in content that is highly accurate. If you were to create a kind of content wild west, where anyone could publish anything and anyone could edit anything published by anyone else, this would obviously result in horrifyingly inaccurate content when compared to content produced via the traditional process.

Except that it doesn’t.

In 2005 Nature conducted an experiment in which they directly compared the accuracy of Wikipedia articles with the accuracy of traditionally reviewed, licensed, and published articles in Encyclopaedia Britannica.

They explain,

We chose fifty entries from the websites of Wikipedia and Encyclopaedia Britannica on subjects that represented a broad range of scientific disciplines. Only entries that were approximately the same length in both encyclopaedias were selected. In a small number of cases some material, such as reference lists, was removed to bring the length of the entries closer together.

Each pair of entries was sent to a relevant expert for peer review. The reviewers, who were not told which article came from which encyclopaedia, were asked to look for three types of inaccuracy: factual errors, critical omissions and misleading statements. 42 useable reviews were returned. The reviews were then examined by Nature’s news team and the total number of errors estimated for each article.

In doing so, we sometimes disregarded items that our reviewers had identified as errors or critical omissions. In particular, as we were interested in testing the entries from the point of view of ‘typical encyclopaedia users’, we felt that experts in the field might sometimes cite omissions as critical when in fact they probably weren’t – at least for a general understanding of the topic. Likewise, the ‘errors’ identified sometimes strayed into merely being badly phrased – so we ignored these unless they significantly hindered understanding.

The results?

Only eight serious errors, such as misinterpretations of important concepts, were detected in the pairs of articles reviewed, four from each encyclopaedia. But reviewers also found many factual errors, omissions or misleading statements: 162 and 123 in Wikipedia and Britannica, respectively.

With 42 usable reviews returned to Nature, this means the average article in both encyclopaedias contained 4 / 42 ≈ 0.1 serious errors, while Wikipedia averaged 162 / 42 ≈ 3.9 smaller errors per article and Britannica averaged 123 / 42 ≈ 2.9.
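
For readers who want to double-check the arithmetic, here is a minimal sketch in Python (the figures are the ones Nature reported above; the script and its variable names are just mine for illustration):

```python
# Figures reported in Nature's 2005 comparison (42 usable expert reviews).
usable_reviews = 42
serious_errors = {"Wikipedia": 4, "Britannica": 4}
smaller_errors = {"Wikipedia": 162, "Britannica": 123}

for source in ("Wikipedia", "Britannica"):
    serious_per_article = serious_errors[source] / usable_reviews
    smaller_per_article = smaller_errors[source] / usable_reviews
    print(f"{source}: {serious_per_article:.2f} serious errors and "
          f"{smaller_per_article:.1f} smaller errors per article")

# Prints roughly:
# Wikipedia: 0.10 serious errors and 3.9 smaller errors per article
# Britannica: 0.10 serious errors and 2.9 smaller errors per article
```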

In other words, alternative authoring and review processes used to create openly licensed resources like Wikipedia can result in content that is just as accurate as the traditional peer review, publication, and licensing processes used to create works like Encyclopedia Britannica.

Distracting People from the Issue at the Core of Quality

Beyond issues of accuracy, when publishers, their press releases, and the media who reprint them say “quality” with regard to textbooks and OER, they actually mean “presentation and graphic design” – is the layout beautiful, are the images high resolution, are the headings used and formatted consistently, is the book printed in full color?

But this is not what we should mean when we talk about quality. There can be one and only one measure of the quality of educational resources, no matter how they are licensed:

  • How much do students learn when using the materials?

There are two ways of thinking about this definition of quality.

  • One is to realize that no matter how beautiful and internally consistent their presentation may be, educational materials are low quality if students who are assigned to use them learn little or nothing.
  • The other way to think about it is this: no matter how ugly or inconsistent they appear to be, educational materials are high quality if students who are assigned to use them learn what the instructor intended them to learn.

Really. For educational materials, the degree to which they support learning is the only meaning of quality we should care about.

Publishers put forth the beauty = quality argument because they have the capacity to invest incredible amounts of money in graphic design and artwork that visually differentiate their textbooks from OER. But when learning outcomes are the measure we care about, we see over and over again that many OER are equal in quality to commercial textbooks. (That is, over and over again we see OER resulting in at least the same amount of learning as commercial textbooks.)

We should never give in to the temptation to focus on vanity metrics like number of pages or full color photos simply because they’re easy to measure. We have to maintain a relentless focus on the one metric that matters most – learning.

{ 8 comments }

This month is the one year anniversary of Lumen Learning, the “RedHat for OER” I founded with Kim Thanos in October, 2012. It’s been an incredible first year, and we’ve learned a million lessons along the way – and we continue to learn more about what it takes to support OER adoption at scale every day.

We’ve pulled together a summary of what’s happening with our post-secondary work for fall semester 2013 in a press release posted on the Lumen site, which begins:

Twenty institutions have partnered with Lumen to offer open content options for high demand, high enrollment courses that serve more than 6,000 students in total. Because these students are no longer required to purchase commercial textbooks or course materials, cost savings are estimated at approximately $700,000.

(This is our post-secondary impact for fall 2013, and doesn’t include winter term 2013 or our secondary work with the Utah open textbooks, which is now statewide in math and science.)

The release also provides some details regarding the impact on student learning outcomes of the OER adoptions we’re supporting, information about a math pilot we’re running in winter semester 2014, and a description of our newly redesigned services and support model which we’ve branded “Candela.”

Thank you to everyone who is supporting our efforts to increase the quality and lower the cost of education – our institutional partners; the small but passionate Lumen team; the many of you who say and write positive things about our work; the Shuttleworth Foundation who have provided direct financial support for our work; and the broader community of OER funders and organizations who make our work possible, including the Hewlett Foundation, NGLC, the Gates Foundation, the Saylor Foundation, OpenStax, CK-12, Boundless, CMU OLI, and dozens of other individuals and orgs.

It feels like we’re really making a difference in people’s lives – we’ve saved students around $1M this year, and I firmly believe we can do 5x – 7x that next year.

Happy anniversary, Lumen!



http://opencontent.org/blog/page/13

Honored, Humbled, and Excited

Creative Commons has announced my appointment as CC Education Fellow. I’m honored, humbled, and excited to be formally affiliated with CC, and am looking forward to continuing to passionately (and hopefully, effectively) advocate for openness as a way to decrease the cost and increase the quality of education.

{ 3 comments }

Why the “Open” Education Alliance Matters

I expressed my frustration yesterday about the infuriatingly inaccurate name of the “Open” Education Alliance. Despite the obvious problems with the name, this new initiative demonstrates a critical move I described in a 2011 post, Or Equivalent:

The high-level vision of the project is this: Many job descriptions include a requirement like “BA or BS in EE/CS/CE or equivalent experience.” We want to create a collection of badges that a top employer, like Google, will publicly recognize as “equivalent experience.” This goes straight for the jugular, demonstrating that badges are a viable alternative to formal university education.

While the “Open” Education Alliance relies on Udacity’s own proprietary credentials rather than Open Badges (wow! yet another way the Alliance isn’t open), the corporate partners clearly have the same goal in mind – an end run around formal degrees. Google, AT&T, Intuit, Autodesk, nvidia, and others will design their own courses to teach and test exactly what they want their future employees to know. Udacity will offer the courses for free to hundreds of thousands of people. Some will pass with flying colors, demonstrating that they have the core skills the employers are looking for. This demonstration will qualify them for the “or equivalent” loophole in the companies’ job descriptions and WHAMO – you get hired into a premium paying job at a very reputable company without a college degree. And it scales.

Despite its seriously annoying name, the Alliance announcement is much bigger than we thought. Keep watching for more details. This is the first time in years that we’ve seen a serious increase in the water flowing through the crack in the dam of formal degrees.

{ 0 comments }

The “Open” Education Alliance

“Open” is a word with a wide range of meanings. The Oxford Dictionaries Online lists no fewer than 20 meanings. Consequently, we should not be surprised when we encounter the word used in a variety of ways. However, when “open” is used together with other words – as in the case of “open educational resources” – “open” can become part of a term of art and gain a very specific meaning within particular communities of use.

When used within the education community, it is broadly understood that “open” means (1) free and unfettered access and (2) liberal copyright permissions like those articulated in the Creative Commons Attribution license. For example, the US Department of Education provides a definition of “open educational resources” in its National Educational Technology Plan (p. 56):

Open educational resources (OER) are teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits sharing, accessing, repurposing — including for commercial purposes — and collaborating with others. These resources are an important element of an infrastructure for learning.

While there are dozens of definitions of “open educational resources” which emphasize and highlight different nuances, they all agree on the common features of (1) free and unfettered access and (2) liberal copyright permissions like those articulated in the Creative Commons Attribution license. (If you’re interested, Wiley, Bliss and McEwen examine these definitions in more detail.) And when “open” is combined with other words to create other terms of art in the education context, like “open access” or “open data,” the word “open” retains this core meaning.

As “open” gained popularity and momentum within the education community, it became inevitable that people would begin to either accidentally misuse the term or deliberately misappropriate it. I’ve called out examples of openwashing in the past, but Audrey Watters provides a clear, tweet-sized definition:

Openwashing: n., having an appearance of open-source and open-licensing for marketing purposes, while continuing proprietary practices.

So it is with a mixture of sadness and frustration that I saw what Brian Lamb called a “Bold Innovation in Openwashing.” What was it that led Audrey to declare that “open education is now meaningless”?

The Open Education Alliance – a new “industry-wide alliance of employers and educators” that will “bridge the gap between the skills employers need and what traditional universities teach.”

With a name like that, it’s got to be good, right? Ponder the name for a moment – the Open Education Alliance – and then try to guess what the Alliance does. Go ahead, I’ll wait…

Perhaps you guessed:

  • Promote and utilize open educational resources

or, perhaps you guessed:

  • Promote and utilize open access to scholarly research

or, perhaps you guessed:

  • Promote and utilize open data to empower learners to improve their learning through analytics

or, perhaps you guessed:

  • Promote and utilize open source software systems

Think you got it? In the famous words of Willy Wonka, “Wrong, Sir! WRONG! … You get nothing!” I suppose it was a bit unfair to ask, however, because it was a trick question – there is nothing whatsoever “open” about the “Open” Education Alliance.

If permitting anyone and everyone to watch their videos and read their materials for free – without granting 4R permissions like those granted by the CC BY license – makes Udacity and their partners deserving of the moniker “Open Education Alliance,” then CNN and the BBC should be the “Open News Alliance” and Pandora and Spotify should be the “Open Music Alliance.” While each of these services is useful and valuable, clearly none of them can be accurately described as “open” – including the Open Education Alliance.

It’s time to call these fake open initiatives out for what they really are. It is time for us to stand up for and protect the idea and name that are so critically important to improving the affordability, quality, and equity of education around the world.

If you need a handy, slightly derogatory term to use in describing fake open initiatives, I highly recommend the term “fauxpen”:

Faux in French means “false” or “fake.” So fauxpen means “fake open.”

Examples of how to use this term appropriately would include “Fauxpen Education Alliance.”

Of course it’s not just enough for us to verbally stand up for the term and everything it represents to us. If “open” is successfully defined out from under us it will be because the mis-definers did something worthy of remembering with the term while perhaps we did not. What have we done with the term “open”? Which initiatives that use “open” properly would you suggest the whole world read about instead of the Fauxpen Education Alliance? Which initiative have you participated in that shows the world the real power of “open”?

We can’t just be complainers, we have to be doers. Of course, the odds are that if you’re reading this post, you’re already hard at work on an incredible – but under-recognized – open initiative. Thanks for all your great – and largely unsung – work. Let’s continue to increase the awesomeness of what we’re doing. Eventually, if our work speaks for itself, the world will catch on…



http://opencontent.org/blog/page/14

Stephen provides an “I told you so” link to this post, A Troubling Result From Publishing Open Access Articles With CC-BY. He continues the claim he has been making for some time that these “problems” would not occur if authors published under a CC BY-NC-SA license instead of the CC BY license.

A careful reading of the post he links to, however, shows that this is completely wrong. The problems described in the post are the result of two issues:

  • Reusers of CC BY licensed research articles are not obeying the terms of the open license, and
  • There is some confusion regarding who should pursue legal action against those who are not obeying the terms of the license.

Tell me, now, how would choosing a different CC license solve either of these issues? How does adding the NC or SA clauses magically either (1) correct user behavior or (2) identify who should pursue legal remedies against those misbehaving users? Put simply: it doesn’t. Reusers of CC BY-NC-SA licensed articles would likely still violate the terms of the license, and individual rights holders still wouldn’t know where to turn for a legal remedy.

There’s a certain inertia to bad behavior. Unpunished, it does not tend to change. Applying additional rules which will also go unenforced (e.g., choosing a more restrictive license) will certainly not change behavior.

If it is true that, as Christina writes, “it’s too much to ask for individual authors to take legal action,” and if you believe that legal remedies are the only effective remedies, then those authors who are truly disturbed by the problems associated with misbehaving reusers need to turn over their copyrights to an organization big enough to pursue license violations. Using a more restrictive license certainly does not solve the fundamental problem.

However, there are a range of extra-legal actions that individuals could initiate that might also impact these bad behaviors. Social media campaigns against violators, for example, might go a long way toward improving the behavior of bad actors. Come on people – get creative. But whatever you do, don’t go placing additional restrictions on your research articles when those restrictions will only negatively impact the behavior of good actors and will not positively impact the behavior of bad actors. That’s a net loss for everyone.

{ 6 comments }

What’s the difference between OCWs and MOOCs? At the end of the day, it may be nothing more than managing expectations.

Let’s take Physics for example.

Here’s the MIT OCW Physics course from 1999. It includes videos, lecture notes and other readings, assignments and exams with solutions, and a recommendation that you buy a commercial textbook. There is a study group that learners can join. There does not appear to be any way to interact with the instructor. The course uses a very traditional pedagogy and is openly licensed.

Here’s the Coursera / Georgia Tech Physics course from 2013. It includes videos, assignments and exams, and includes a recommendation that you buy a commercial textbook. There appears to be a study group that learners can join. There does not appear to be any way to interact with the instructor. The course uses an inquiry-based pedagogy and does not appear to be openly licensed.

This OCW collection and this MOOC have a LOT in common. While they differ in pedagogy and licensing, from the public perspective maybe the most important difference between these two big collections of freely accessible online resources – and the two genres of OCW and MOOC more generally – is market positioning and expectation management:

MIT OCW has always positioned itself as primarily teacher-facing. The collections of materials are intended to support faculty at other institutions in teaching similar classes or engaging in professional development. When independent learners manage to benefit from MIT OCW, this is a happy coincidence – a secondary benefit of the primary mission of supporting faculty around the world. Since MIT OCW is teacher-facing, of course there is no faculty member there to support students. Only the very bright and extremely self-motivated can benefit, but that’s ok since serving students isn’t their actual mission.

The commercial MOOC providers have positioned themselves as primarily student-facing. Their collections of materials are intended to support student learning, and their Terms of Service explicitly prohibit faculty around the world from using their materials in the courses they teach (there will be no secondary benefits). Since they are student-facing, the lack of a faculty member there to support students is keenly felt. The idea that only the very bright and extremely self-motivated can benefit from these MOOCs, which is what appears to be happening, is problematic since serving learners is their stated mission.

We’re seeing a huge anti-MOOC backlash now, but never saw an anti-OCW backlash. Why? Perhaps because even though to the public mind they’re doing essentially the same things – publishing large collections of curated, high quality, freely available course content – OCW managed the public’s expectations better.

{ 3 comments }

I am, ostensibly, on vacation. But if I don’t get this thought out of my brain it will continue to torment my cross-country driving.

What exactly is most unique / special about MOOCs? Let’s unpack the acronym back to front:

– Courses. Well, we’ve had these for a few hundred years. At least. Many of these are not MOOCs.

– Online courses. Well, we’ve had these for decades. At least. Many of these are not MOOCs.

– Open online courses. Well, we’ve had these for several years now, too. Many of these are not MOOCs.

– Massive. Hmm. This seems new. Ish.

I think in our ever-stumbling hurry to do what we’ve always done with new technology, we’re missing a genuine opportunity to see something new in the “massive” part of MOOCs. Back in 2004 I wrote:

“Is there a form of teaching which is indigenous to the online environment?”

“Massively Multiplayer Online Role Playing Games (MMORPGs) provide an interesting opportunity to research this question. These games frequently include Guilds and other organizations which allow players to group and cooperate. One of the primary functions of these groups is to train new players, including enculturation, how to slay certain types of beasts, operate certain types of weapons or spacecraft, etc. In informal conversations, it has been my experience that people playing these games have never belonged to guilds in the “real” world, never killed dragons in the “real” world, never flown an X-Wing in the “real” world, etc. They were taught these skills and continue to teach these skills to newcomers online. They have never taught these skills to another person in the “real” world, they have learned to teach these skills online. I would argue, therefore, that the type of teaching and learning occurring in MMORPG guilds is one example of the type of native online teaching we want to find.”

Relatedly, I’ve also thought for some time (but frustratingly can’t find it quickly in my archives): Our traditional pedagogies scale poorly beyond 30 or so people because they were developed in the context of teaching 30 or so people. I think it’s safe to assume that, in the same way that our pedagogies-for-30-people degrade as the number of students goes up, pedagogies-for-1000s-of-people degrade as the number of students goes down. Pedagogies for 1000s of people probably function so poorly in the context of 30 people that we’ve never even really tried them before. In other words, we’ve never taught 100,000 people at a time before, and consequently we’ve never developed pedagogies for teaching this many people at once – the last few years just show us trying to shoe-horn pedagogies-for-30 into MOOCs and then publishing articles about the astonishing drop rates.

MOOCs provide an extremely rare opportunity to completely rethink pedagogy, from the ground up, for a completely new context and configuration. However, until someone gets serious about this line of thinking and looks for legitimate inspiration outside of classroom-based pedagogies-for-30, it’s going to be hard times.

This seems to be an appropriate time to say, “we have to think outside the box.”



http://opencontent.org/blog/page/15

The Deseret News, a local Utah newspaper, today published a story titled Study: Majority of U.S. charter schools perform equal or worse than traditional schools, accompanied by the following infographic:

Infographic

What’s wrong with this story? While the information conveyed by the headline is, strictly speaking, an accurate reflection of the data, the DesNews is using the headline to seriously mislead the public. Let’s explore an alternate, accurate headline the DesNews could have run to see how they’re misinforming the public with this story.

While the story’s headline is accurate given the data shown in the infographic, the opposite headline is also true. The story could just as accurately have been titled:

“Study: Majority of U.S. charter schools perform equal or better than traditional schools”

How can both these statements be true? The answer is in the statistics. 56% of the charter schools in the study are not significantly different from other public schools in their local market when it comes to student performance in reading, and 40% are not significantly different when it comes to student performance in math. To say “majority” in a headline, we only have to get to 51%. The “not significantly different” schools alone already put reading above 51%, and math crosses that threshold as soon as either of the remaining groups is added. So by including the 19% and 31% of charter schools that were significantly worse in reading and math, we get totals of 75% and 71% for schools with “equal or worse performance” in reading or math. So the DesNews’ headline is accurate.

However, the math works the other direction as well. By including the 25% and 29% of charter schools that were significantly better in reading and math, we get the totals of 81% and 69% for schools with “equal or better performance” in reading or math. So the opposite, positive headline would also have been accurate.
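
To make the symmetry concrete, here is a minimal sketch (plain Python, plugging in the percentages reported in the infographic) showing that the same three numbers support either headline, depending on which tail you fold into the “equal or …” total:

```python
# Percentages from the infographic: share of charter schools performing
# significantly worse than, no differently than, or significantly better
# than other public schools in their local market.
results = {
    "reading": {"worse": 19, "same": 56, "better": 25},
    "math":    {"worse": 31, "same": 40, "better": 29},
}

for subject, pct in results.items():
    equal_or_worse = pct["same"] + pct["worse"]
    equal_or_better = pct["same"] + pct["better"]
    print(f"{subject}: {equal_or_worse}% equal or worse, "
          f"{equal_or_better}% equal or better")

# Prints:
# reading: 75% equal or worse, 81% equal or better
# math: 71% equal or worse, 69% equal or better
```

Both totals exceed 51% in both subjects, so “majority” is technically defensible either way.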

When there are two equally accurate – yet opposite – ways of interpreting data, the choice one makes clearly reveals one’s bias. It’s unclear whether the anti-charter school bias in the DesNews story belongs to the reporter or to the paper’s editors. Regardless of the source of the bias, the choice – and in this case, it is clearly a conscious choice – to portray the data in a negative way is disappointing.

{ 1 comment }

Publishers Taking Notice of OER

EdWeek has a nice writeup of the recent conference organized by the Association of Educational Publishers and the Association of American Publishers’ school division. The article, Ed Publishers Adjust to Changing Market, New Resources, includes this interesting bit about publishers’ current thinking about OER:

Publishers are also trying to gauge how the rise of free K-12 educational materials, often called “open-education resources,” will affect their businesses.

Many education publishers today are assuming that school districts and other buyers of curriculum and other products will recognize the value of products that publishers have poured money into developing, and will be willing to pay more for products the industry believes is of higher quality. It’s a shaky assumption, Goff said.

“I wouldn’t bank on it,” he said. “There has to be an answer [to open-education resources] that goes beyond, ‘our stuff is better than their stuff.’ ”

After his presentation, Goff explained that publishers are likely to take an “if you can’t beat-‘em-join-‘em” approach to open education resources. That would mean they would either create their own free materials, or partner with others designing those materials, and attempt to make money by offering to curate or organize them in ways that would make them more useful to educators.

I want to comment specifically on the bit about “products the industry believes are of higher quality.” It’s exactly this kind of unsupported claim that prompted my research group to begin focusing on comparing the amount students learn from OER with the amount they learn from prevailing publisher materials. After showing no difference in learning outcomes in 2010-2011, the Utah Open Textbook initiative data from 2011-2012 are telling a different story. Those data show a small – but statistically significant – positive effect. Students who used open textbooks as their primary materials during the year performed better on the state’s standardized tests than students who did not.

Little by little we’re influencing education for the better. When the publishing industry has to deal with the “problems” OER are creating at their annual meeting, we’re on the right track. Keep up the great work!

{ 0 comments }

The Post Flickr World – TroveBox

One of the many benefits of my Shuttleworth Fellowship is getting to hang out with other Shuttleworth Fellows twice a year at a meeting called The Gathering. They are an insanely bright, motivated, talented group of people. Take, for example, Marcin Jakubowski who is building the Global Village Construction Set (GVCS), “an open technological platform that allows for the easy fabrication of the 50 different Industrial Machines that it takes to build a small civilization with modern comforts.” I’m still a little in awe of the scope of work he has taken on…

Another of the fellows, Jaisen Mathai, is working on an open source photo management platform called TroveBox. This really terrific looking photo management platform can use almost anything for its backend storage – including Amazon S3 and Dropbox. Given the way that the Googles and Yahoo!s of the world are behaving lately, I was extremely excited to see a high quality, open source front end set of photo management tools that lets me store photos wherever I want. I connected my account to S3 and imported all my photos in about 15 minutes (note: their automated Flickr importer requires a Pro subscription). Of course I could have imported my Flickr photos by hand for free, but I was more than happy to pay to get my 2000+ photos plus all their metadata moved in 15 minutes.

This is my first move in a more deliberate process influenced by Jim Groom’s and others’ continued thinking and writing about taking back control of our digital personal identities with A Domain of One’s Own and Syndication-Oriented Architectures. I love the idea of an open source front end I can run myself if need be, and multiple options for backend storage that are VERY easy to switch between.

Check out the platform by poking around my account here: https://opencontent.trovebox.com/photos/list.



http://opencontent.org/blog/page/16

More on MOOCs and Being Awesome Instead

I’m grateful for your responses to my recent post Be Awesome Instead. In reading your comments, tweets, and other blog posts responding to the post, I was a bit concerned that some readers may have gotten the impression that I was saying it was ok to “Be Awesome Instead” of being open. That was absolutely not the point I was making. Being open – truly open – is absolutely critical for reasons I will describe below. The point I was trying to make in my post is that we should be awesome instead of being whiny; we should be contributors rather than naysayers.

At the end of that post I said I would share some thoughts on how the popularity of MOOCs can be used to move the open agenda forward, and this post makes good on that promise. In order to do that, I’ll have to briefly outline the open agenda as I believe it pertains to open educational resources (others are far better positioned than I to consider the open agenda in other contexts like open data, open governments, etc.). That will require a brief definition of OER. Rather than beginning each sentence in the next several paragraphs with “I believe,” “To my mind,” and “In my opinion,” I’ll just caveat all these here and now by saying this is my own personal view of the agenda for OER. Yours may vary.

Defining Open Educational Resources. There are two defining characteristics of an open educational resource. Any creative work with these two properties can be considered an OER:

  1. Access to the resource is free and unfettered. That is, the resource can be accessed without the user being required to pay, provide personal information, or jump through any other hoops as a prerequisite to access.
  2. All users have free 4R permissions with regard to the resource. That is, either by virtue of open licenses or the work being in the public domain, anyone and everyone has the legal permissions necessary to reuse, revise, remix, and redistribute the resource.

Some Preliminary Context on the OER Agenda. Universal access to free, high quality education is important for reasons so grandiose that to mention them risks trivializing them. These reasons include nothing less than the happiness and prosperity of individuals and families, and the possibility of civilized society. OER have an important part to play in the “universal,” “free,” and “high quality” aspects of this aspiration. This relationship is the driving motivation underlying my personal commitment to OER.

For a number of years I have felt that the overwhelming majority of educational researchers are focused on the “high quality” problem, to the virtual exclusion of the “universal” and “free” problem from the discourse. This is another factor in my decision to focus my professional work on OER.

The OER Agenda in the Short Term – “Universal” and “Free”. This relationship is very simple – because the adoption of OER can drastically reduce the cost of education, the adoption of OER can drastically expand access to education. Deployed effectively, OER move us (perhaps asymptotically, for extra-educational reasons) toward universal and free. The “universal” and “free” problem has been the primary concern of the open education movement during the first 15 years of its existence, and I feel like we are making reasonable progress on this problem.

The Open Agenda in the Medium Term – “High Quality”. The idea of “high quality” only has meaning locally – it does not have meaning globally. If educational materials are expressed in a language I don’t speak, or use examples that are foreign to me, then regardless of the “accuracy” of their expression they will be low quality to me. By definition, high quality means personalized for me.

There is simply no way to scale the centralized creation of educational materials personalized for everyone in the world (cf. the 15 years of learning objects hype and investment, which feels very similar to the current MOOC mania). Perhaps the only way to accomplish the amount of personalization necessary to achieve high quality at scale is to enable decentralized personalization to be performed locally by peers, teachers, parents, and others. And given the absolute madness of international copyright law there is no rights and royalties regime under which this personalization could possibly happen. The only practicable solution is to provide free, universal access to content, assessments, and other resources that includes free 4Rs permissions that empower local actors to engage in localization and redistribution.

The Open Agenda in the Long Term – Infrastructure and the Unexpected. My two favorite sayings with regard to OER continue to be “openness facilitates the unexpected” and “content is infrastructure,” both of which I have been saying for almost a decade. Once there is a high quality content infrastructure freely and universally available, there is absolutely no way to predict the incredible advances that will occur. To quote Linus Torvalds, “don’t ever make the mistake that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle. That’s giving your intelligence much too much credit.” Who could have predicted what would happen as a high quality communications infrastructure (the internet) became increasingly universally and freely available?

I don’t work on OER because I believe I can see the endgame. I work on OER because I want to enable an endgame beyond imagining. I work on OER to create the infrastructure which people will leverage to – somehow – achieve universal access to free, high quality education.

Where Do MOOCs Fit In?

For a complex tangle of political reasons, “the people in power” are currently paying a tremendous amount of attention to issues relating to access to education, and the role of the cost of education in regulating that access. MOOCs have popularized and significantly advanced the conversation regarding both universal and free. The general public is beginning to believe that technology may have the near-term potential to provide a genuine solution to the problem of making education both universal and free. We can take advantage of the space MOOCs have created in the public conversation to introduce and advance the idea of truly open educational resources to people who are unfamiliar with it.

The comparison I made above between MOOCs and learning objects was a carefully chosen one. I believe that MOOCs will run – are already running – up against the reusability paradox. I believe people will eventually come to realize the pedagogical restrictions that are inseparably connected with the copyright and Terms of Use restrictions of MOOCs. As with the learning object mania of yesteryear, diehards will stick around but the rest of the world will move on as the experiment fails. If we message correctly before that happens, we can create a general understanding that much of what is frustrating about MOOCs to faculty, students, and others would be solved by the simple application of an open license (the same way an open license can resolve the reusability paradox).

MOOCs have carried the ball a significant way down the field toward universal access to free, high quality education. We should be grateful for the work they’ve done on behalf of that goal. The primary risk we have to guard against now is someone hanging out the “Mission Accomplished” banner. MOOCs are not openly licensed, and consequently will struggle with issues of quality and will never become part of the educational infrastructure that enables truly breakthrough advances. MOOCs get us one step closer to the goal, but we need to continue advocating for true openness in order to create the space in which those advances can happen.

It’s almost as if we’re actually, slowly, iterating toward openness.

{ 2 comments }

I’ve been fairly quiet recently about Lumen Learning, the “RedHat for OER” I founded earlier this year with Kim Thanos. Lumen (for short) is where I’m spending my Shuttleworth Fellowship time, with the goal of drastically increasing the use of OER in formal educational settings in order to lower the cost and improve the quality of education.

Today Lumen released its first six Open Course Frameworks. Open Course Frameworks are an idea I am very excited about, because they greatly simplify the process of adopting OER for the average teacher or institution. Open Course Frameworks are:

  • curated collections of OER,
  • mapped to learning outcomes,
  • openly licensed with detailed attribution,
  • organized in a way that looks and feels like an online course,
  • published on open source platforms, and
  • compatible with Lumen’s ImprovOER continuous quality improvement service (which we are publicly showing for the first time at InstructureCon in a few weeks).

In keeping with Lumen’s focus on supporting the most at-risk students, our first set of Open Course Frameworks is a developmental education sequence, comprised of:

The first four courses are published in the open source Canvas platform by Instructure. The math courses are available in the open source MyOpenMath platform. Both platforms make it easy for you to make your own copy of a course that you can extensively customize (or not) and then teach for free. And of course, because the courses are openly licensed you can pull the materials out and teach them elsewhere, too.

Recent surveys have shown that faculty and administration believe that open educational resources can save students money and potentially improve student success. But the same surveys show that the biggest barriers to OER adoption are the time and effort it takes faculty to find resources, vet them for quality, and align them with course outcomes. OCFs solve these problems.

Lumen is adamant that these Open Course Frameworks are now and always will be freely available. We do not – and will not ever – charge for access to these materials. Lumen acts as a steward of the OCFs as a service to the education community, in much the same way an open source software project works. In fact, the OCFs we published today were developed collaboratively with faculty members from the nine different institutions participating in the Kaleidoscope Open Course Initiative. We will be releasing updated versions of the OCFs over time as additional faculty use and help improve them. And these six are just the beginning – we will release 25 more OCFs over the coming year, with the next five coming in July.

I’d love your feedback on this idea and your help spreading the word….

{ 4 comments }

Be Awesome Instead

Cole Camplese, for whom I have great respect, recently wrote a wonderful essay about the negative response to MOOCs from many voices in the open ed space:

Just a couple of years ago we were all trying so hard to get people to accept the idea that open access to learning was a great thing. Hell, some of the best conversations I’ve ever had in this field have centered around the ideals of openness, but now that the MOOC thing has happened the same people who built rallying calls for more open access to learning are now rejecting this movement. Why? Because it is driven by corporations trying to make money? Because it isn’t really open? Because the press isn’t giving a few people the credit they believe they deserve?

I’ve had a few great conversations with Cole in the past, and as I read his piece my planet-sized ego quietly suggested that I was one of the people Cole is disappointed in. And it’s true that I went through a brief phase of disappointment that I was written out of the history of MOOCs. But I feel like I successfully got over that years ago – before the so-called xMOOCs ever hit the scene.

I try very hard to apply the motto “There’s no limit to the amount of good you can accomplish if you don’t care who gets the credit” when I can think clearly enough to do so. Would I rather spend my time making a difference in the world, or making sure people understood my role in the early history of MOOCs? It’s a stupid, embarrassingly self-aggrandizing question to even have to ask. That humbled me for a while.

And then, just as I was overcoming these petty feelings of being ignored, the xMOOCs emerged. At this point, I moved into my “righteous indignation” phase. No, MOOCs are not open – not in the same sense that I’ve been fighting to help people understand that word for the last 15 years. With the xMOOCs, almost literally overnight, the primary effort of my professional career seemed to be undercut. The term “open” entered the popular mind meaning something very different, something severely watered down from the meaning I (and others, but this post is about me) had been working so diligently to establish. My feelings were hurt yet again.

The immaturity of those feelings was thrown back in my face by Cole’s post. For the last 48 hours, the question that has haunted me has been:

Why do those who used to push forward now push back?

And I find that I must ask myself the terrible question again. Would I rather spend my time making a difference in the world, or spending my time railing against MOOCs because they aren’t really “open”? And I find myself humbled again. And the little voice inside me says, “suck it up, Wiley. Yes, MOOCs have overrun the popular imagination. Yes, they are founded upon a severely impoverished definition of ‘open.’ So what are you going to do about it? Complain? Really? How about spending your time figuring out how to leverage MOOCs to move the ‘open’ agenda forward, rather than spending your time whining about how MOOCs have derailed it?”

So, I ask, how can the popularity of MOOCs be used to move the open agenda forward?

Whether you were referring to me or not, Cole, I owe you a sincere thank you for this terrific piece of writing. As my kids say, “some people just need a high five, in the head, with a folding chair.” Your essay was the chair to the head for me. Time to follow Neil Patrick Harris’ excellent advice:


Time to be awesome instead.

I’ll share some thoughts on how the popularity of MOOCs can be used to move the open agenda forward later this evening.



http://opencontent.org/blog/page/17

MOOCs and Regifting

Jim Groom briefly but insightfully runs the numbers on the Georgia Tech / Udacity deal:

Apart from all sorts of misgivings about Georgia Tech’s MOOCish Master’s program in Computer Science, I want to take a moment to do the math. You charge $7000 a year tuition with the idea you’ll have a 2-year cohort of 10,000 students. If you add that up, you get $140 million. That’s massive, especially when you’re only hiring eight new faculty to educate those 10,000 students. Follow the money, this is no joke, the profits are huge even after you split 40% of the kitty with Udacity.
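
Jim’s arithmetic is easy to verify. Here is a minimal sketch (assuming, as his post does, $7,000 per year in tuition, a two-year program, and 10,000 students, and reading “split 40% of the kitty” as Udacity’s cut):

```python
# Back-of-the-envelope numbers from Jim Groom's post (all assumptions).
tuition_per_year = 7_000   # dollars
program_years = 2
students = 10_000
udacity_share = 0.40

gross = tuition_per_year * program_years * students
to_udacity = gross * udacity_share
to_georgia_tech = gross - to_udacity

print(f"Gross tuition revenue:  ${gross:,}")              # $140,000,000
print(f"Udacity's 40% share:    ${to_udacity:,.0f}")      # $56,000,000
print(f"Georgia Tech's share:   ${to_georgia_tech:,.0f}") # $84,000,000
```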

Those are some truly staggering numbers. And just as easy as that, MOOCs are simply the “new” elearning – purportedly less expensive than on-campus instruction, purportedly just as effective, and with the promise of thousands of new students flocking in from around the world driving unimaginable levels of new revenue.

It’s like a national regifting of the 1990s hype around elearning with a giant MOOC-colored bow on top.

{ 9 comments }

Redefining MOOC

If you haven’t read Audrey Watters’ coverage of the Coursera / Chegg deal, I highly recommend it. The short version is, DRM’ed commercial content is making its way into MOOCs, and this stands to make all involved – including the professors – quite wealthy.

While I completely and fully support recent calls to “reclaim open“, I think the term MOOC is irretrievably out of the barn. Consequently, perhaps the only way left to put an end to the openwashing of the big for-profit MOOC providers is to redefine the term MOOC in the popular mind. I propose that, whenever you hear the acronym MOOC, you think:

“Massively Obfuscated Opportunities for Cash”

True, the obfuscation is less massive and more transparent each day. But now that DRM is here, we can no longer call these things open. We need to call them what they are. As Audrey wrote,

What was a promise for free-range, connected, open-ended learning online, MOOCs are becoming something else altogether. Locked-down. DRM’d. Publisher and profit friendly. Offered via a closed portal, not via the open Web.

They have become Massively Obfuscated Opportunities for Cash.

{ 4 comments }

The Chronicle has published an extremely articulate and well thought-through letter written by professors in the philosophy department at San Jose State University in response to their being encouraged to “adopt” an edX course on Justice. I’ve embedded the letter below, which I strongly encourage you to read in full.

The one section of the letter that absolutely breaks my heart is the top of page 4:

Good quality online courses and blended courses (to which we have no objections) do not save money, but purchased-pre-packaged ones do, and a lot. With prepackaged MOOCs and blended courses, faculty are ultimately not needed.

Oh, MOOCs. How thoroughly, completely, and profoundly you have failed us.

The SJSU faculty’s last statement is true if and only if one underlying assumption is met – that the content of the pre-packaged course is traditionally, fully copyrighted. So with regard to this particular edX course, whose YouTube videos all say “Standard YouTube License” for example, the SJSU criticism is accurate. This fully copyrighted, pre-packaged MOOC is clearly meant to run as is, and is not meant to be taken apart, adapted, localized, and customized by local faculty. If edX intended for those things to happen, they would take down their silly registration barrier and put a proper license on the course.

(Don’t even get me started on how edX oh-so-deceivingly puts “Some Rights Reserved” in their footer without ever specifying which rights those are. “Some Rights Reserved” is, obviously, a nod to Creative Commons licenses – but the site does not use one. Check their Terms. When you don’t use a Creative Commons license, why try to hoodwink us into thinking you’re “one of the good guys” by putting that language in the footer of EVERY page?!? And this is how the one NON-profit in the space behaves. No wonder people are suspicious…)

If entities like edX and Coursera and Udacity would simply be open – meaning, use an open license for their materials – the concerns of SJSU faculty and others could be assuaged. Rather than pre-packaged, teach-as-you-receive-it collections of material meant to undermine faculty, openly licensed course frameworks empower faculty to tweak and customize and modify while still saving money. I’ve said it before and I’ll say it again. You can have your cake and eat it, too, when you use open licenses. The either/or presented by the SJSU faculty is only true when purchased-pre-packaged courses are copyrighted – like the edX course is.

Come on, MOOCs. There’s no innovation in allowing open enrollment. The OU/UK has had that for decades. There’s not even innovation left in open licensing – we’ve been doing that for over a decade, too. What exactly is it you’re doing that we’re supposed to be so impressed by?

(Grab the letter as a PDF or as plain text.)



http://opencontent.org/blog/page/18

More on Utah Open Textbooks

The Salt Lake Tribune has published a great article on Utah’s transition to open textbooks. But perhaps the most enlightening part of the article isn’t in the article at all – it’s this comment:

The books are open source, meaning that the person who wrote the book is doing it for the goodness of mankind and expects no compensation. I know that’s hard to believe, but I’m a teacher and have been working on some of the science books mentioned. Other than the State Office covering the price of my substitute for two days I haven’t been paid a thing (same for the other 20-30 teachers on the project). The books are now done and FREE for the world to use. The best part about these books is a year from now after using them in our classrooms we’ll get back together (USOE covering our subs) and fix the issues we have found and make them even better to again be posted for the world to use for FREE.

Now THAT’s what I’m talking about.

{ 0 comments }

Utah Open Science Textbooks for 2013-2014

The Utah State Office of Education has posted their open science textbooks for grades 7 – 12 for the coming school year. Here are some of the highlights:

  • Based on the CK-12 Foundation‘s open science textbooks
  • Customized specifically for Utah students by Utah teachers
  • Each book’s Table of Contents is the Utah Science Core Standards
  • Professionally designed
  • Print copies available from Amazon’s CreateSpace for an average cost of $5 per book (for schools that need a print option)

And here are the links to the free and open PDF versions of the books:

and the print versions available from CreateSpace:

Seeing the USOE launch the initiative statewide for this coming fall is the extremely gratifying culmination of years of collaborative work and research between the USOE, the Nebo school district, BYU, and Lumen. My research team (the Open Education Group) is currently finalizing an article analyzing data from last year’s expanded pilot, which shows statistically significant gains in student performance on the state’s end-of-year standardized tests for students using the $5 books.

Later this week, or perhaps early next, I’ll publish our process guide for creating and adopting open science textbooks statewide. We’ve learned many important lessons along the way…

{ 0 comments }

Giving Too Much Credit

Stephen comments on the “Great Rebranding” of MOOCs:

MOOCs were not designed to serve the missions of the elite colleges and universities. They were designed to undermine them, and make those missions obsolete…. There has been a great rebranding and co-option of the concept of the MOOC over the last couple of years. The near-instant response from the elites, almost unprecedented in my experience, is a recognition of the deeply subversive intent and design of the original MOOCs (which they would like very much to erase from history).

In summary, Stephen sees the rapid adoption of MOOCs among prestigious universities as a proactive attempt to co-opt their potentially subversive nature.

I think this is giving these schools WAY too much credit. As we saw with OpenCourseWare a decade ago, there is a HUGE amount of public relations benefit from being involved in these initiatives. In the early 2000s, every single school that launched an OCW initiative garnered an incredible amount of press and praise – until the new car smell wore off. If you were one of the first schools out of the chute, you were showered with media coverage. But after OCW “got old,” additional OCW launches received no press coverage whatsoever.

Coursera has done an incredibly effective job harnessing this Presidential passion for press. Coursera – ‘the platform for offering “open” courses’ – has been very noisy about the fact that they only work with prestigious universities. What school doesn’t want to join the Stanford / Tecnológico de Monterrey / Princeton / École Polytechnique Fédérale de Lausanne club? For the cost of offering one class in a new format, a President can officially put his or her institution in the same category as these “prestigious” schools. What Board of Trustees doesn’t want that?

Don’t confuse a lust for fame with forethought. The current mania around MOOCs has nothing to do with strategic neutralization of a potential threat to higher education’s business model and everything to do with needing to be in the New York Times. Assuming the former gives way too much credit where it isn’t due – twice. First, to the leadership of schools who have jumped speedily on the MOOC bandwagon. And second, to the creators of the MOOC approach who by implication have supposedly devised a method so brilliant as to be capable of destroying formal higher education (which, apparently, is to be lauded).



http://opencontent.org/blog/page/19

Last week I had the incredible opportunity to spend about three hours talking with Gary Lopez, founder of the Monterey Institute for Technology and Education (or MITE, pronounced “mighty”), who is one of my favorite people in the OER movement and someone for whom I have boundless respect. Just a day later I was fortunate to participate in another amazing conversation involving MITE’s Ahrash Bissell as well as several other members of the OER community.

Among the wide range of issues we discussed, one topic that came up in both conversations was the observation that many “inside” the OER community seem to think of MITE as “outside” the OER community, despite the fact that they publish much of their content under Creative Commons licenses.

Why is that? I want to explore this a little. MITE is a case worth talking about (1) because of the very high quality of the multimedia and other content they produce, (2) because of the incredible adoption and usage of their content (literally millions of users – many of whom are teachers using the materials in their classrooms and consequently represent 30 or so additional users each), and (3) because MITE is one of an extremely small number of OER producing organizations that can be called sustainable.

MITE in 60 Seconds

MITE spends significant resources creating a relatively small number of very high quality courses, which they license under Creative Commons licenses. In this regard, you might think of MITE as the Carnegie Mellon University Open Learning Initiative of the secondary education space. An additional significant amount of non-development work goes into making the resources easy to find and use. For example, in addition to creating alignments with Common Core and state content standards, MITE aligns their content with popular textbooks. Using their Textbook Correlations tool a teacher or student can do a search like “show me content relevant to pages 103-111 of Glencoe’s 2010 Algebra 1 textbook.” It’s an amazing interface that makes it very easy for parents or students to find and use supplemental materials, for example.

MITE has settled on a sustainability model that requires you to become a paying member of their NROC Network if your planned uses of their materials include having them “downloaded en masse, stored on institutional servers, or otherwise incorporated into institutional resources (including learning-management or student-information systems) or distributed directly via institutional channels.” But, you might ask, how can they require network membership before permitting these uses if their content is CC licensed? Answer: they take a unique perspective on the NC clause in order to do it.

How Does It Work?

To expand the previous quote from MITE’s Terms of Use just a bit, MITE considers it “commercial use when the materials are downloaded en masse, stored on institutional servers, or otherwise incorporated into institutional resources (including learning-management or student-information systems) or distributed directly via institutional channels.” MITE uses this definition of NC to require anyone who wants to make these kinds of uses of their materials to join their membership network.

We now interrupt your regularly scheduled article with a brief foray into license technicalities. Skip down two paragraphs if this kind of stuff puts you to sleep.

There is one place where the MITE Terms of Use could be greatly improved. A little background first: the language in the BY-NC-SA license which constitutes the Noncommercial provision begins, “You may not exercise any of the rights granted to You in Section 3 above in any manner that…” In other words, the NC clause is triggered by what the user does, or the kind of use that is made of NC licensed materials. The clause is not triggered by the type of entity making use of the NC licensed materials. Unfortunately for MITE, they take the uses they consider commercial (downloading en masse, storing on institutional servers, etc.), bundle them up, and label the bundle “Institutional Use.” By doing so, they confuse types of use (which NC addresses) with types of user (which it does not).

To expand the previous quote once more, the Terms of Use actually say “Institutional use is deemed to be commercial use when the materials are downloaded en masse, stored on institutional servers…” By appearing to attach their definition of commercial to type of user (“Institutional Use”), they appear to place it beyond the reach of the NC trigger language in the license. MITE really needs to get a new name for the bundle of uses they consider to be commercial, and that name needs to be descriptive of the uses themselves and not even smell like they’re related to the type of user who might make them.

So MITE has created a unique definition of commercial use. Of course, folks in the know about the inner workings of the CC licenses understand that this “defining NC” language in the Terms of Use document is probably not binding on end users because of Section 8.e. of the BY-NC-SA license, which reads “This License constitutes the entire agreement between the parties with respect to the Work licensed here. There are no understandings, agreements or representations with respect to the Work not specified here.” Either way, including this statement of their interpretation of NC does cause school districts, state offices, and others to pick up the phone and call MITE, so the desired effect is achieved regardless.

But back to the main point… even MIT OCW persists in offering its own unique definition of what constitutes commercial, and consequently unallowable, uses. So why does following MIT OCW’s lead make people think of MITE as outsiders? What’s up?

What Makes MITE Different?

A number of years ago, Creative Commons commissioned and published a report called Defining Noncommercial. This report demonstrated the wide range of attitudes regarding the meaning of “Noncommercial” held by both the general population of internet users and “Creative Commons Friends and Family,” a separate group of survey respondents that actually knew something about the Creative Commons licenses.

Unfortunately, the report itself presents only the results from the general population in any detail. However, the raw data on the CCFF group’s responses are available for analysis by anyone willing to download Open Office. The CCFF group’s responses are particularly important for our conversation because the OER community knows and cares about the Creative Commons licenses, and consequently its attitudes will be much more accurately reflected in the CCFF data.

Both groups of survey respondents were asked to rate the following scenario on a scale of 1 – 100, where 1 means “Definitely a Noncommercial Use” and 100 means “Definitely a Commercial Use:”

“Your work would be used for course materials in a school — a not-for-profit organization that does not charge tuition”

For people who view themselves as “creators” in the general population (n = 263), the mean response was 33.5 (median = 10, std dev = 36.8). For users in the general population (n=280), the mean response was 44 (median = 38, std dev = 40.9). So on average the general public believes that use by public schools is more Noncommercial than Commercial.

What you won’t see in the report are the following, much more striking data: For creators in the CCFF group (n = 1078), the mean response was 13.8 (median = 1, std dev = 25.4). For users in the CCFF group (n = 137), the mean response was 18.5 (median = 1, std dev = 31.6).
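
For anyone who wants to check these numbers themselves, here is a small sketch of how such summary statistics could be recomputed from the raw survey spreadsheet once it is exported to CSV. The file name and column name are hypothetical placeholders, not the actual names in the published data.

```python
# A sketch of recomputing the summary statistics from the raw survey
# spreadsheet once it is exported to CSV. The file name and column name
# are hypothetical placeholders; adjust them to match the actual export.

import csv
import statistics

def summarize(path, column):
    """Return n, mean, median, and (sample) standard deviation for one survey item."""
    values = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            cell = row.get(column, "").strip()
            if cell:                      # skip blank / unanswered responses
                values.append(float(cell))
    return {
        "n": len(values),
        "mean": round(statistics.mean(values), 1),
        "median": statistics.median(values),
        "std_dev": round(statistics.stdev(values), 1),
    }

# Example (hypothetical file and column names):
# print(summarize("ccff_creators.csv", "school_no_tuition_scenario"))
```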

Here then, finally, is the issue that makes some people in the OER community look at MITE as being somehow different or outside the “mainstream” community. While the median survey response for both creators and users in the CCFF group regarding a scenario like ‘a public school using NC licensed material’ was 1 on a scale of 1 – 100, indicating that the typical member of the CC community feels that this kind of use is clearly “Definitely a Noncommercial Use,” MITE feels very strongly that use of their “course materials in a school — a not-for-profit organization that does not charge tuition” is Definitely a Commercial Use.

Why Be Different?

If you spend any time reflecting on the question, “Why would MITE take an approach that differs so significantly from the rest of the Creative Commons / OER community?” the answer is actually very straightforward. MITE’s materials are designed specifically for use in secondary schools. If secondary schools – MITE’s primary audience – can use all of MITE’s content without contributing anything back, there is literally nowhere for MITE to go for a sustainability model beyond being “just another OER publisher living from grant to grant.” So while sys admins, teachers, students, parents, and others are free to do anything else they like with MITE materials, they can’t engage in activities like loading them into LMSs (unless they join the network) – because these activities go straight to the heart of what “commercial” means to MITE.

MITE’s focus on middle and high school is critical. There is currently no real “direct to student” path to sustainability for OER producers who focus on K-12. At the secondary level, curriculum purchases are made by a district or school. These adoption decisions are centralized, lucrative, and the competition over them is ruthless. On the other hand, higher education is almost the complete opposite – adoptions are made by individual faculty and textbook purchases are (or aren’t) made by students themselves. This creates an opportunity for an organization focused on OER in higher ed to build relationships directly with the students using their OER, and try to follow a “direct to student” path to sustainability by offering proprietary supplemental materials and services. Due to the quirks of the K-12 adoption process, there is no similar path to sustainability for a publisher of secondary OER.

MITE’s approach to NC is so different from other members of the OER community because their goals are so different from others in the OER community. What other organizations focus primarily on producing original middle and high school content and licensing it as OER? CK-12 does, but they’re a foundation with an endowment to fall back on. Khan Academy also produces original secondary level content, but they’re currently supported by a huge array of foundations, and appear to have no sustaining model in sight beyond hat-in-hand.

MITE, which is also a non-profit, is a grand experiment in trying to figure out how to be a “sustainable producer of high quality middle and high school OER.” Since no one else in the OER community is really trying to do that, it makes sense that MITE’s approach would be significantly different from everyone else’s. Indeed, since so few initiatives in the OER community are actually sustainable, MITE must be doing something different.

Give MITE a Hug When You Can

MITE is absolutely a member of the OER community. A really innovative and interesting one, at that. If they seem a little far away from where you are and what you’re doing, remember they’re doing something different from you – trying to find a sustainable path through territory that no one else has successfully navigated. Yes, they’ve got a technical issue to solve with their license, but it’s fixable. And their sustainability model seems to be fundamentally sound.

If you’re a “sustainable producer of high quality middle and high school OER” that has figured out how to do that with a more “mainstream” approach to open licensing, I’m sure MITE would love to hear from you. But if you’re not willing to invest your energy in helping them find a “better” model, for the love of Pete don’t waste your energy complaining about the model they do have. It’s still very early days for OER, and we need all the interesting experiments in sustainability we can get. We need more experiments as different from the “mainstream” as MITE’s work is. (You see how successful the “mainstream” has been at sustaining itself.)


Massive Fiction

Today Robison Wells, Marion Jensen (of USU OCW fame), and I are launching a new project called Massive Fiction. Massive Fiction is an effort to create and define a fictional world in three novellas, providing a good understanding of the new world, its characters, and its setting, after which several additional authors – including two NYT Bestsellers – will write story stubs that anyone can use as a place to start their own stories set in the new world.

Essentially, this is an effort to create an open narrative infrastructure for legal fan fiction. All of the story content, characters, settings, etc. will be released under a Creative Commons Attribution license so that everyone who writes in the world will be able to share, publish, or sell their writing (unlike other fan fiction, which is forced to live underground). Like all fan fiction, our project is especially useful for beginning writers who struggle to juggle the dozens of tasks involved in writing fiction. Rather than starting from a blank page and the need to invent characters, setting, and conflict from whole cloth, beginning writers will be able to start with characters, setting, and conflict already in pocket.

While supporting formal instruction is not the primary aim of the project, once the novellas and stubs are complete, Lumen Learning will create CC BY licensed instructional supports to help teachers bring the Massive Fiction world into their writing courses, if they’re interested in doing so.

Check out the Massive Fiction site on Kickstarter and consider supporting the project.


The Perfect Storm

Much of the education technology world – and many of the foundations and venture firms that provide the funding for it – is obsessed with adaptive learning. The Gates Foundation’s Adaptive Learning Market Acceleration Program RFP is the most recent evidence of this trend. The fascination largely stems from the fact that, because these systems are completely automated, they can scale. Scale matters to foundations because it means broader impacts for the work they fund. And, of course, scale matters to investors because it means more customers and, consequently, better returns.

But some educational content publishers love the idea of adaptive learning services for a different reason. Open educational resources (OER) are driving the cost of educational content to zero. In fact, you can now graduate from high school (e.g., Open High School of Utah) and complete an associates degree (e.g., Tidewater Community College) without ever spending a penny on textbooks – because of the pervasive use of OER in these programs.

Adaptive learning services are a perfect response to the business model challenges presented by OER to publishers. While the broad availability of free content (e.g., CNN.com) and OER have trained internet users to expect content to be free, many people are still willing to pay for services. Adaptive learning systems exploit this willingness by deeply intermingling content and services so that you cannot access one without using the other. Naturally, because an adaptive learning service is composed of content plus adaptive services, it will be more expensive than static content used to be. And because it is a service, you cannot simply purchase it like you used to buy a textbook (particularly useful for publishers given the Court’s recent decision upholding the first sale doctrine with regard to textbooks). An adaptive learning service is something you subscribe to, like Netflix. And just like with Netflix, the day you stop paying for the service is the day you lose access to the service.

The Attack on Personal Property

Given the Court’s decision, it makes sense that some publishers would zero in on this leverage point. Whether it’s music on Spotify, movies on Netflix, or TV shows on Hulu, the content industry is engaged in an active campaign to undermine the idea of ownership of personal property. Why would a publisher sell you a CD or DVD, for which you pay only once, when they could persuade you to subscribe to a service for which you will pay every month for the rest of your life? Why would they sell you a CD or DVD which you can listen to or watch forever, loan to a friend, or sell to a used record store, when they could have you subscribe to a service by which they deprive you of any first sale rights?

In short, why is it in a content company’s interest to enable you to own anything? Put simply, it is not. When you own a copy, the publisher completely loses control over it. When you subscribe to content through a digital service (like an adaptive learning service), the publisher achieves complete and perfect control over you and your use of their content.

To the extent that publishers actually have these motivations, the attack on ownership of personal property is annoying in the context of entertainment, but becomes profoundly disturbing in the context of higher education. But in some sense, whether these are the publishers’ motives or not, the end results for learners are the same – the move to subscription models results in a number of significant problems.

How the Past Differs from the Future

In the past, students bought textbooks. Because students owned the books, they could sell them back, loan them to a friend, or keep them for future reference. But when you subscribe to an adaptive learning service you own nothing, you can keep nothing, there’s nothing to loan to a friend or sell back, and there’s nothing to reference in the future. When your subscription ends, everything disappears. Need to review the material from that math class last year for this semester’s science class? Sorry! Your subscription expired at the end of last semester. Would you like to rent another four months of access for $129.99?

In the past, students could highlight and take notes in the books they owned. This kind of intensive, structured studying resulted in the creation of personalized artifacts that were a meaningful portion of what students knew at the conclusion of class. Many adaptive learning services encourage learners to highlight, take notes, and build other learning artifacts by annotating their content. However, because students own nothing, the day their subscription ends all of their notes, highlights, annotations, and other study artifacts are unceremoniously deleted. An important part of what they learned in the class is gone forever, because they couldn’t afford to keep subscribing forever. The situation essentially becomes “You will pay, or you will forget.”

In the past, when a publisher went out of business students could continue learning from the books they had purchased from the publisher. But when one of the companies providing an adaptive learning service goes out of business, “pivots” to focus on other products, gets acquired, or for other reasons discontinues the service, what happens? Even if you could afford to continue paying for a subscription, everything vanishes and you have literally no recourse.

From Content to Data

There is no analog in the old publishing world for the models of learners that adaptive learning services create in order to do what they do. However, it is clear that these models begin as empty algorithms, and are entirely dependent on the learner creating and contributing data to the system in order to function. If the learner does not contribute data to the system, the service cannot build a model of the student upon which it can adapt its instructional, assessment, and other features.

The utility of an adaptive learning service is a function of the amount of a student’s data to which it has access. And while these data are created by the students, and therefore would typically be the property of the students, publishers claim ownership of these data through Terms of Use and other legal tactics and refuse to provide students with access to their own data. Consequently, the longer a learner uses a particular adaptive learning service, the higher the switching cost becomes to move to a different service – because publishers will not allow students to take their data with them, they will have to train the new system from scratch. What happens when Johnny transfers to the school across town that uses a different service? What happens when Sally graduates and goes to college? What happens when Pat transfers from the community college to the university? In these and all other cases, the student is back at square one.
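
To illustrate why these systems are only as useful as the learner data they hold, here is a toy sketch built on a simple Bayesian Knowledge Tracing update. It is a generic stand-in, not any vendor's actual algorithm, and the parameter values are arbitrary.

```python
# A toy illustration (not any vendor's actual algorithm) of why an adaptive
# service is only as useful as the learner data it holds. It uses a simple
# Bayesian Knowledge Tracing update: the mastery estimate starts at a generic
# prior and only becomes informative as a student's responses accumulate.

def update_mastery(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the estimated probability that a student has mastered a skill."""
    if correct:
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    return posterior + (1 - posterior) * p_learn

mastery = 0.3                       # generic prior: all a brand-new system "knows"
for answer in [True, True, False, True, True]:
    mastery = update_mastery(mastery, answer)
print(round(mastery, 2))            # informative only because of the response history

# Switch to a different service and that history stays behind: the new system
# starts back at the 0.3 prior, regardless of what the student has learned.
```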

Summary

Through a general strategy of preventing students from owning educational materials as personal property, including taking away learners’ rights to their own data, publishers could have a ready-made solution to the problem of price pressure from open educational resources. And whether this is any specific publisher’s motivation for the move to subscription-based adaptive learning services or not, the resulting impacts on students are the same.

Because some of the research on these systems suggests that they can be very effective at supporting learning, publishers can claim to be “doing the right thing for students” while increasing revenue and decreasing degrees of freedom for students and institutions. As a comparison point, migrating from one learning management system to another would be a pleasant walk in the Sunday afternoon park compared to the switching costs associated with moving from one of these services to another. This is before we consider the drastically increased “cost of ownership” of the subscription model, in which you don’t actually own anything.

I am not arguing in favor of or against the instructional effectiveness of adaptive learning services. I am simply pointing out the completely unprecedented risks involved in betting an entire school, district, university, or state system on a service with the properties described above.

If creating a system of “super lock in” and perfect control over students’ use of content are not primary design criteria for adaptive learning systems, then we should see the emergence of multiple adaptive learning systems that do not have these characteristics.

Openness is the Solution

Each of the problems with adaptive learning services evaporates when principles of openness are applied to these systems.

  • When the source code of an adaptive learning service is openly licensed (open source), even if a company or hosting service goes out of business, or gets acquired, etc., your institution can continue to utilize the service.
  • When the content in an adaptive learning service is openly licensed (OER), that content, together with students’ notes, highlights, annotations, and other work within the system can be exported, archived, and used by students forever.
  • When students own and can download the data they create and contribute to an adaptive learning service, they can maintain their own backups and make multiple uses of it – including potentially using that data with other systems (one possible shape for such an export is sketched below).
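
As a rough illustration of that last point, here is a hypothetical sketch of a portable student data export. The field names and identifiers are invented for illustration and do not describe any vendor's actual API.

```python
# A hypothetical sketch of a portable student data export. The field names
# and identifiers are invented for illustration and do not describe any
# vendor's actual API.

import json
from datetime import date

def export_student_record(student_id, notes, highlights, responses,
                          path="student_export.json"):
    """Write a self-contained, portable copy of one student's work and data."""
    record = {
        "student_id": student_id,
        "exported_on": date.today().isoformat(),
        "notes": notes,            # free-text annotations the student wrote
        "highlights": highlights,  # passages the student marked up
        "responses": responses,    # interaction data the adaptive model was built from
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
    return path

export_student_record(
    "s12345",
    notes=[{"item": "unit-3/lesson-2", "text": "review before the final"}],
    highlights=[{"item": "unit-3/lesson-2", "span": [120, 180]}],
    responses=[{"item": "quiz-3.1", "correct": True}],
)
```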

Openness is the skeleton key that unlocks every attempt at vendor control and lock in.

Inasmuch as vendors are just beginning to encourage institutions to make their first adoptions of these adaptive learning services, there is still plenty of time for institutions to stand up for their students’ and their own best interests. Institutions should require guarantees regarding openness in the RFPs they create for the acquisition of these systems. No school has to race to adopt an adaptive system that doesn’t provide the guarantees necessary to protect the legitimate needs of the school and its students.



http://opencontent.org/blog/page/20

Excellent coverage by Ronald Mann over on the SCOTUS Blog of an even more excellent decision by the court in Kirtsaeng v. John Wiley & Sons, Inc. While the whole analysis is worth a read, here is the good news in plain English:

The Court at last seems to have reached a consensus on a seemingly intractable problem of copyright law: whether a U.S. copyright holder can prevent the importation of “gray-market” products manufactured for overseas markets….

In Kirtsaeng v. John Wiley & Sons, the Court considered the “first sale” doctrine of copyright law. This is a rule that means that when a publisher sells a copyrighted work once, it loses any right to complain about anything later done with that copy. This is the rule that makes it okay to resell a used book to a used-book store, and for that store in turn to sell used books to its customers.

The issue in Kirtsaeng was whether the first-sale doctrine applies to copyrighted works manufactured overseas. Kirtsaeng bought textbooks in Thailand, where they are cheap, brought them to the United States, and resold them at a large profit. The lower courts said he couldn’t do this, and ordered him to pay damages to the publisher (John Wiley). The Supreme Court disagreed. The Justices said that the first-sale doctrine applies to all books, wherever made. So even if you buy a book made in England, you can resell it without permission from the publisher.

Now that the reselling of these kinds of books is unequivocally legal in the US, I expect we’ll see a host of interesting new tactics from students in their ongoing arms race against the publishers. Between this ruling and the ever growing impact of OER, it feels like it’s getting harder to be a traditional publisher. Don’t quite cry for Pearson yet though – “In 2011, Pearson increased sales by 4% in headline terms to £5.9bn and adjusted operating profit from continuing operations by 10% to £942m.”

We’ve still got a lot of work to do.


Lumen Learning: A Red Hat for OER

Last week I wrote about the many goals I have for the open education movement, and how a Fellowship from the Shuttleworth Foundation is enabling me to spend focused time pursuing them. While I tried to lay out a compelling vision of what I want to accomplish last week, I didn’t discuss the how. Clearly, accomplishing a set of goals of that scope and magnitude requires more energy and productive capacity than any one person could ever muster.

Today I’m happy to announce the launch of Lumen Learning, a new organization I’ve founded together with my long-time friend and collaborator Kim Thanos. Our goal with Lumen is to significantly improve student success by bridging the gap between OER developers and potential OER adopters.

Over a decade and $100M US in foundation funding later, an incredible amount of high quality open educational resources exists, but these resources are only rarely used in formal settings. The situation today feels very much like it did with open source software about a decade ago. Back around the turn of the century, almost everyone had heard of open source and was interested in potentially saving money and improving the stability and quality of their technology offerings, but very few institutions had either the bravery or the capacity to run systems for which there was no formal training and no technical support. Red Hat stepped into this vast pool of curiosity and caution with training, technical support, and other services that put adopting Linux within the reach of a normal institution. Lumen is trying to do exactly the same thing – step into the deep pool of curiosity and caution around OER with the faculty training, academic leadership consulting, technical and pedagogical support, and other services necessary to put adopting OER within reach of a normal institution. In other words, we want to make Lumen into a “Red Hat for OER.”

In the coming days and weeks I’ll write more about what we’re doing. For now, check out lumenlearning.com for an overview of our first group of partner schools, services, etc. More to come…


Where I’ve Been; Where I’m Going

Sometimes it helps to look backwards and figure out where you’ve been to get a clearer picture of where you’re going. As today is the first official day of my Shuttleworth Fellowship, I’ve been taking the opportunity to reflect on where I’ve come from and where I’m going. Upon reflection, it feels like I have some really strong momentum behind my work in open education. But where is that momentum carrying me? How can I leverage it thoughtfully to be more useful? (This thinking fortuitously coincides with a recent article titled Why Open Educational Resources Have Not Noticeably Affected Higher Education, to which I have included a paragraph response below. Spoiler alert: we see the world very differently.)

Where I’ve Been

I’ve had the privilege of being part of several interesting events in the open education timeline. (Some of them were even successes!) But as I look through the list, there is a subset of events I can pick out from the others that suggest a fairly specific trajectory. In K-12, that list includes:

  • Launch the first high school committed to using OER exclusively across its curriculum in Fall 2009.
  • Launch the Utah Open Textbook Project in 7 classrooms in Fall 2011.
  • Take the Utah Open Textbook Project district-wide in Fall 2012.
  • Take the Utah Open Textbook Project statewide as of Fall 2013.

Where’s the momentum heading? While the vector may not be immediately obvious, I see it this way: demonstrate the effectiveness of OER in a lab-like charter school setting, then take those successes to a few brick-and-mortar schools, then grow the effort to a district, then expand it statewide across Utah.

What to do with the momentum? Now that I have practical experience with regard to rolling out open textbooks in secondary school settings (including the state level), and data about the cost savings and learning impacts of doing so, I need to keep pushing here until the number of state offices of education promoting the statewide adoption of open textbooks grows from 1 to 50.

In higher education, the subset of events that point in a particular direction includes:

  • Help launch and run the first phase of the Kaleidoscope Project which replaced commercial textbooks with open textbooks in 10 courses across 8 schools from 2011-2012.
  • Grow the Kaleidoscope Project to cover 30 courses across 28 schools in 2013-2014. (We secured grant funding to do this back in 2012.)
  • Help launch the first Textbook Zero Associates degree – an entire Associates degree using only OER. This will launch in Fall 2013 – a launch announcement is coming next week. (Associates in Business Administration)

Where’s the momentum heading? This momentum feels very much like the momentum in K-12: start small in terms of numbers of schools and courses using OER, then grow that number, and eventually cover an entire degree program.

What to do with the momentum? First, I need to help more schools adopt the Textbook Zero model for their Associates of Business degrees. At the same time I need to help a school move to a Textbook Zero Associates degree of General Studies – the OER work necessary for the business degree gets us 2/3 of the way there. The Associates of General Studies has almost 100% overlap with the General Education sequence at four year schools, so the next obvious move is to help a university commit to a Textbook Zero model of Gen Ed. And by that point, we’re within striking distance of a four year degree in Business or Computer Science based exclusively on OER – I should help a university do that next.

Where I’m Going Next

If I can successfully go where the momentum is pointing, this would give us successful exemplars from the top to the bottom of the entire formal secondary and post-secondary ecosystem – making it possible to earn a high school diploma, Associates degree, and Bachelors degree without ever spending a penny on a textbook. More importantly, that entire experience would occur in the context of 4R permissions that allow customization, personalization, remixing, sharing, continuous improvement, etc.

So this is where I’m heading – connecting the OER dots all the way from 7th grade through the end of the Bachelors degree. I think we can get the initial post-secondary program launches done within three years:

  • Textbook Zero Associates degree in a community college, Fall 2013 launch
  • Textbook Zero General Education pathway in a university, Fall 2014 launch
  • Textbook Zero Bachelors degree, Fall 2015 launch

Of course, the initial post-secondary launches are groundbreaking and interesting, but we’ll never have the level of impact we want if we don’t scale this work. Post-secondary OER adoption needs to expand like an ever-broadening wake behind an OER boat moving purposefully upstream.

I think the secondary launches take longer, likely five years:

  • 1 state actively promoting open textbooks across its secondary courses, Fall 2013
  • 3 states actively promoting open textbooks across their secondary courses, Fall 2014
  • 15 states actively promoting open textbooks across their secondary courses, Fall 2015
  • 35 states actively promoting open textbooks across their secondary courses, Fall 2016
  • 50 states actively promoting open textbooks across their secondary courses, Fall 2017

Obviously, this is a monumental amount of work. How to tool up in terms of capacity, coordination, and organization to get all this work done successfully and enable it to scale is another question. More thoughts on that soon.

And in response to Gerd Kortemeyer I would say only this: OER haven’t been impacting education as much as they could because with very few exceptions the open education community has been too busy creating materials and writing hype articles about their potential impact to do the dirty, almost thankless work of helping people adopt them. There was a time when I was as guilty of this as anyone. This is slow, slogging, culture changing work that has to be done one faculty member and one school at a time (at least until it hits a tipping point). I doubled down on my belief that this is the problem by applying for a Shuttleworth Fellowship focusing on doing this very “boots on the ground” work. I don’t believe faculty and students need another piece of magic technology that will solve this problem for them. They need good old-fashioned, hand-holding help. I’m doing it, and it’s working.

Post Script: The Deep Future; or, The End Game

So what’s the end game here? Certainly not OER adoption. Getting the open content infrastructure broadly deployed is just the first step. Once faculty and teachers are comfortable using OER, and these OER are widely adopted across entire secondary and post-secondary programs, who knows what other kinds of innovations – think pedagogy, support, assessment, credentialing – we’ll realize are possible to build on top of the open content infrastructure? I come back to one of my all-time favorite quotes:

Don’t ever make the mistake [of thinking] that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle. That’s giving your intelligence much too much credit. (Linus Torvalds)

I pair that quote with what has been (for me personally) my most profound realization in all the years I’ve worked on open – “openness facilitates the unexpected.” OER empower and enable. Yes, we already know that OER adoption will lower costs and can improve outcomes. What we don’t yet know is all the other things that can be done by an innovative student, teacher, entrepreneur, policy maker, or anyone else who can assume the existence and broad acceptance of the open content infrastructure as a starting point.

If we succeed in broadly deploying this open content infrastructure, it will empower and enable people to do things we can’t even imagine today – the same way an open communications infrastructure (read: the Internet) allowed people to create things we could never have imagined a few decades ago. Think of the incredible things that have emerged in the past 10 years alone because creative people can now assume the broad deployment and adoption of the open communications infrastructure called the Internet. Imagine what they’ll do when they can make the same assumptions about the open content infrastructure. You really can’t – and that’s the beauty of it.

Thank you, Shuttleworth Foundation, for creating a space in my life that allows me to pause and reflect like this.



http://opencontent.org/blog/page/21

When David Noble first published his groundbreaking critique of online education in 1998, Digital Diploma Mills: The Automation of Higher Education, I thought to myself “he couldn’t be more wrong.” As it turns out he might not have been wrong – maybe Noble was simply so miraculously prescient that I couldn’t see what he saw. Fifteen – count them, fifteen – years later, Digital Diploma Mills reads as if it were researched and written about the current phenomenon called “MOOCs.” Entire paragraphs from the essay can be read unaltered and applied precisely to the state of things today:

What is driving this headlong rush to implement new technology with so little regard for deliberation of the pedagogical and economic costs and at the risk of student and faculty alienation and opposition? A short answer might be the fear of getting left behind, the incessant pressures of “progress”. But there is more to it. For the universities are not simply undergoing a technological transformation. Beneath that change, and camouflaged by it, lies another: the commercialization of higher education. For here as elsewhere technology is but a vehicle and a disarming disguise.

. . .

The foremost promoters of this transformation are rather the vendors of the network hardware, software, and “content”… who view education as a market for their wares, a market estimated by the Lehman Brothers investment firm potentially to be worth several hundred billion dollars. “Investment opportunity in the education industry has never been better,” one of their reports proclaimed, indicating that this will be “the focus industry” for lucrative investment in the future, replacing the health care industry… It is important to emphasize that, for all the democratic rhetoric about extending educational access to those unable to get to the campus, the campus remains the real market for these products.

. . .

The third major promoters of this transformation are the university administrators, who see it as a way of giving their institutions a fashionably forward–looking image. More importantly, they view computer–based instruction as a means of reducing their direct labor and plant maintenance costs — fewer teachers and classrooms — while at the same time undermining the autonomy and independence of faculty. At the same time, they are hoping to get a piece of the commercial action for their institutions or themselves, as vendors in their own right of software and content.

. . .

Most important, once the faculty converts its courses to courseware, their services are in the long run no longer required. They become redundant, and when they leave, their work remains behind. In Kurt Vonnegut’s classic novel Player Piano the ace machinist Rudy Hertz is flattered by the automation engineers who tell him his genius will be immortalized. They buy him a beer. They capture his skills on tape. Then they fire him. Today faculty are falling for the same tired line, that their brilliance will be broadcast online to millions. Perhaps, but without their further participation. Some skeptical faculty insist that what they do cannot possibly be automated, and they are right. But it will be automated anyway, whatever the loss in educational quality. Because education, again, is not what all this is about; it’s about making money.

If MOOCs (or xMOOCs more precisely, for those of you who know the inside baseball) do not represent “the automation of higher education,” what does? And even today we read again about universities rushing to become “one of the elite” schools offering MOOCs in partnership with Coursera or edX, and the pathways to diplomas these organizations are working hard to create.

The whole xMOOC phenomenon reads like the history of the Internet played backwards (or the history of the Reformation read backwards, if you prefer). Remember when the internet was largely a walled garden to the average user of AOL, Prodigy, or Compuserve? Remember how hard those companies resisted letting their users loose into the big wide world of the internet on their own? Remember their justifications and reasons why? Remember how amazing it was when people finally made their way onto the open internet?

Now play that record backwards, as the first generation of MOOCs (cMOOCs) – that allowed anyone from anywhere to participate however they liked in experiences built from openly licensed course materials – gives way to a new generation of walled gardens that call themselves “open” but require registration, use copyrighted materials, and take investment capital. They even prohibit students from using their services in the most useful ways: “You may not take any Online Course offered by Coursera or use any Statement of Accomplishment as part of any tuition-based or for-credit certification or program for any college, university, or other academic institution without the express written permission from Coursera” (Coursera Terms of Use). David Noble saw something like this coming. I’m not sure he was wrong.

But he doesn’t have to be right, either. Since the very beginning, open education has been about enabling and empowering. Including empowering faculty – not replacing them. Free and legal access for faculty, their students, and everyone around the world to high quality educational materials that can be legally adapted and customized specifically for your particular circumstances, and then shared broadly, openly, and freely. Ultimate flexibility. Zero cost. Increased dependence on faculty as curators, customizers, and contextualizers – real people who have relationships with students and understand what they need. Unlocking the potential of faculty and unlocking access for students. Allowing for any and all uses of educational materials a learner sees as valuable. That’s the vision of open education the wider world apparently has not yet seen. Unfortunately, much of the world seems to have seen the more limited xMOOC vision and accepted it as the state of the art regarding what is possible.

There was a positive sign from one of the xMOOCs today – edX announced its first MOOC to release its content under a Creative Commons license. If this were to become a trend, and xMOOCs were to rejoin the open education movement, David Noble would have come frighteningly close – but would still be wrong.

Where do you think things are going? Better yet, what are you doing to influence where they will end up?


Leave Update: Month 1

My unpaid leave from BYU started January 1. I can honestly say I don’t think I’ve ever worked harder than I have during this past month. It’s been exhilarating and exhausting and exciting and challenging and I’m loving it. For anyone who’s interested, here’s an update on what I’ve been doing:

Textbook Zero
I spent two days early in January visiting with our partner community college working on the first Textbook Zero Associates degree program. This involved two full days of hands-on, high touch workshops with faculty focused on redesigning courses around OER. We started with learning outcomes, moved up through assessments, and finally looked at the open educational resources that will best support teachers in facilitating the specific types of learning they want to see happen with their students. This program, which is an Associates degree in business administration, will open this fall. The Textbook Zero approach (moving the entire degree off of textbooks and onto OER) knocks 30% off the cost of completing this degree. I am super excited about this. (And I have to say that, given the abuse the term “open” has taken recently, I’m going to take steps to make sure that the phrase “Textbook Zero” retains the meaning I meant for it to have when I coined it.)

Utah Open Textbooks Project
I spent four days during January working with teachers across the state on developing open science textbooks for adoption statewide this fall. We currently anticipate having somewhere in the neighborhood of 75,000 students in Utah using open science textbooks in fall 2013. This work is really being led by Sarah Young from the Utah State Office of Education, and I’m just playing a supporting and facilitating role. However, I have to say that even my love of open textbooks has been tested as I’ve begun working through the editing and copyright review process for six textbooks for grades seven through 12. This is going to have a huge impact, both financially and educationally, on Utah. And I hope the rest of the world sees Utah as an inspirational example here.

Lots of smaller items to report. I led several webinars in January (e.g., one for the State Department on MOOCs), did some one day trainings with teachers / faculty interested in moving a single course or three off of textbooks and onto OER, spent a day at the CK12 Foundation, spoke at the WestEd Forum / Board Meeting where I made some great additional connections around K-12 open textbooks (more on this in next month’s report, hopefully), and got an IES SBIR grant proposal written and submitted with my colleague Kim Thanos.

I also harvested some things in January I’d planted earlier: my PhD student / colleague TJ Bliss defended his dissertation and accepted a job as State Director of Assessment with the Idaho State Office of Education, and our newest research article appeared in First Monday: The Cost and Quality of Open Textbooks: Perceptions of Community College Faculty and Students.

All in all, a good first month! But I can do better… so here’s to February!

Is your community college, university, or high school interested in using open educational resources or open textbooks but not quite sure how to start? Leave a comment below or send an email – david.wiley@gmail.com


New article on Open Textbooks

Our latest article on open texts has been published!

The Cost and Quality of Open Textbooks: Perceptions of Community College Faculty and Students (CC BY open access). Abstract:

Proponents of open educational resources (OER) claim that significant cost savings are possible when open textbooks displace traditional textbooks in the college classroom. We investigated student and faculty perceptions of OER used in a community college context. Over 125 students and 11 faculty from seven colleges responded to an online questionnaire about the cost and quality of the open textbooks used in their classrooms. Results showed that the majority of students and faculty had a positive experience using the open textbooks, appreciated the lower costs, and perceived the texts as being of high quality. The potential implications for OER initiatives at the college level seem large. If primary instructional materials can in fact be made available to students at no or very low cost, without harming learning outcomes, there appears to be a significant opportunity for disruption and innovation in higher education.

The follow-on study to this one will appear in Journal of Interactive Media in Education shortly.



http://opencontent.org/blog/page/22

Before I learned I would be able to take a sabbatical this year, I was scheduled to teach the 2013 edition of my Introduction to Openness in Education course at BYU during the winter term. Since I received the Shuttleworth Fellowship and am taking a sabbatical (more on this soon) I won’t be teaching the course at BYU this term – but because many people were interested in taking it informally again this year (I’ve been teaching it as a “MOOC” since 2007, before the term was coined), I will be offering a free, non-credit version in the new Canvas Network as a community service.

Check it out: Introduction to Openness in Education, the 2013 Edition

As always, I’m extremely interested in your feedback and thoughts on how the course can be improved and will happily improve the course content / design in real-time as improvements come in. Please send me any thoughts about the course at david.wiley@gmail.com.

The course is currently “full” in the Canvas Network. International interest looks strong (Go Iceland! Go Seychelles!):

Canvas Network Enrollments

However, since most of the course action really occurs on your blog, twitter, youtube, and delicious accounts (this is important: read why), you can still participate fully in the course even though you can’t register in Canvas. Just add your name and blog post to the same Google Form course participants are using, and you’ll get aggregated in the course RSS feed just like everyone else.
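
For the curious, here is a sketch of how an aggregator like this might pull participants' posts into a single stream. The feed URLs are placeholders standing in for addresses collected via the Google Form, and the sketch uses the feedparser library; it is an illustration of the idea, not the course's actual aggregation code.

```python
# A sketch of how an aggregator like this might pull participants' posts into
# one stream. The feed URLs are placeholders standing in for addresses
# collected via the Google Form; requires the feedparser library.

import feedparser

PARTICIPANT_FEEDS = [
    "https://example-student-blog.org/feed/",
    "https://another-participant.net/rss.xml",
]

def collect_posts(feed_urls):
    """Gather entries from every participant feed, newest first (by date string)."""
    posts = []
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            posts.append({
                "author": feed.feed.get("title", url),
                "title": entry.get("title", "(untitled)"),
                "link": entry.get("link", ""),
                "published": entry.get("published", ""),
            })
    # A real aggregator would parse the dates; a string sort is enough for a sketch.
    return sorted(posts, key=lambda p: p["published"], reverse=True)

for post in collect_posts(PARTICIPANT_FEEDS):
    print(f"{post['author']}: {post['title']} -> {post['link']}")
```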

Here’s to a great 2013!


Mayans, Flat World Knowledge, and Saylor.org

December 21, 2012 was supposed to be the day the world was going to end. Instead, it ended up being the day the Saylor Foundation saved a major portion of the educational Commons from disappearing. As described in a blog post this morning, Saylor.org now hosts free and open versions of Flat World Knowledge texts.

Saylor has done a Herculean job, backing up and providing free and permanent access to Word and PDF formats of every Flat World Knowledge textbook – with ePub versions coming in Q1 2013. They’re also inviting anyone who has remixed FWK books to contribute links to their remixes for Saylor’s new Bookshelf.

Saylor has also gone out of their way to make sure that each and every page of every Word and PDF document contains attribution and links back to the FWK site – over-attribution, just to be safe. And unless FWK breaks the attributions by changing book URLs when the paywall goes up January 1, readers of the Saylor versions of the books will still be able to purchase flash cards, printed copies, and other supplemental materials easily. The Bookshelf site explicitly suggests to readers, “We encourage you to visit the Flat World Knowledge website and consider the purchase options available there. We also provide links to the site within each book.”

A Merry Christmas, indeed.


Changes

There are some very exciting things happening in my life right now.

Shuttleworth

1. I’m extremely humbled and excited to have been awarded a Shuttleworth Fellowship. These Fellowships provide a year’s salary replacement, allowing each Fellow to focus completely on creating a particular kind of social change – without other distractions. In my application, I characterized the change I want to create this way:

I want to push the field over the tipping point and create a world where OER are used pervasively throughout secondary schools, community colleges, and universities. In my vision of the world, OER supplant traditional textbooks for all high school, associates degree, and undergraduate general education courses. Organizations, faculty, and students at all three levels collaborate to create and improve an openly licensed content infrastructure that dramatically reduces the cost of education, increases student success, and supports rapid experimentation and innovation.

And set what I believe to be audacious, but achievable, goals:

I will dedicate my fellowship year to hands on work with secondary and post-secondary institutions, supporting and evaluating their adoption of OER. I will do on-the-ground work with at least 20 postsecondary and 20 secondary schools and help move at least 100,000 students off expensive, traditional textbooks to OER-based replacements. By the end of the fellowship year, at least one post-secondary institution I support will launch a completely OER-based associates degree.

By itself, the Shuttleworth Fellowship is an almost unimaginable opportunity for which I am extremely grateful. But it coincides with a number of other synergistic happenings that will amplify its effect significantly.

2. BYU has granted me a sabbatical starting January 1. This will allow me to focus in the way intended by the Shuttleworth Fellowship. I’m extremely grateful to everyone at BYU, particularly my ever-supportive Dean Richard Young, for helping this happen in the necessary timeframe.

3. I have stepped back from some other commitments. I am ending my term as Senior Fellow for Open Education at Digital Promise in Washington, DC at the end of 2012. And after much thought and emotional struggle, I have also ended my relationship with Flat World Knowledge.

4. I will continue my Gates Foundation-funded work researching the effectiveness of open textbook adoptions in post-secondary settings. I will also continue my Hewlett Foundation-funded work researching the effectiveness of open textbook adoptions in secondary settings. These two grants synergize perfectly with the Shuttleworth Fellowship, because the Shuttleworth Foundation does not fund research per se.

5. I will also continue working with my good friend and partner-in-mayhem Kim Thanos on the NGLC-funded Kaleidoscope Project follow-on grant, which supports pilots of the Kaleidoscope model for open textbook adoption at post-secondary schools. Kim and I are also hard at work on what we call “Textbook Zero,” which is our model for a completely OER-based associates degree (which I described previously, albeit with a less catchy name).

6. Finally, I will also continue working with the Saylor Foundation, who are doing incredible things for the cause of open education and open textbooks, in a newly formalized role as Senior Fellow for Strategy. I anticipate using many of the OER materials they’ve curated, aggregated, and collected during my Fellowship year.

I’ll be sharing more details about all of this (and the other things I’m sure I’m leaving out) in the future. As you can see, there are several streams of work I’m trying to bring together here. And hey – crossing the streams seems like it’s always been a great idea in the past.

Many thanks to everyone who has supported me in pulling this all together, especially those of you who wrote reference letters for my Shuttleworth application on extremely short notice (you know who you are!). Ghostbusters references aside, I take this work very seriously and mean to be a responsible steward of this unbelievable opportunity. Wish me well; or if you’re the praying type, send a request Heavenward on my behalf. I look forward to working with many of you this coming year as we keep trying to make the world a better place.



http://opencontent.org/blog/page/23

The Best OER Revise / Remix Ever?

In fall of 2011, I took a new approach to the Project Management course I teach each year. I wanted my students to gain hands-on experience managing a project, I wanted them to feel the pressure of hitting deliverables, I wanted them to feel the nausea of having things fall through, I wanted them to learn to navigate managing people, and most of all I wanted them to feel the joy of completing a piece of work that blesses people’s lives. So I asked my students to engage in a very large scale revise / remix project that would benefit them and many others.

We started with Project Management from Simple to Complex, originally written by Russell Darnall and John Preston and originally published under a Creative Commons BY-NC-SA license by Flat World Knowledge. For the last two years now we’ve been revising and remixing away on Project Management for Instructional Designers (PM4ID). Here’s what we’ve done:

  • Aligned each chapter with the relevant portions of the Certified Associate in Project Management (CAPM) certification exam and the Project Manager Professional (PMP) certification exam, so that you can now use PM4ID to study for these exams,
  • Removed generic examples like ‘You have to get 13,000 tons of concrete to Singapore by December 1’ and replaced these with examples from the instructional design field,
  • Shot comprehensive video interviews with three experienced instructional design project managers that include stories and bits of wisdom on each of the book’s chapter topics and compiled text transcripts of each of the videos,
  • Completed a word-for-word re-editing that improved the readability of the book,
  • Created text-to-speech audio recordings of each chapter section,
  • Replaced (c) photos throughout the book with CC licensed photos,
  • Added a Glossary of key terms,
  • Updated and modernized the ‘Technology Tools for Project Management’ portion of the book, and
  • Created an automated process for scraping content from the live PM4ID site and converting the entire book into ePub, Kindle, PDF, and downloadable HTML formats nightly (one way such a pipeline could be wired together is sketched below).
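
As a rough illustration only, here is one way a nightly scrape-and-convert pipeline like the last item could be wired together. This is not the actual PM4ID build script; the chapter URLs are placeholders, and it assumes the pandoc command-line tool is available for the format conversions.

```python
# One way a nightly scrape-and-convert pipeline like this could be wired
# together. This is not the actual PM4ID build script; the chapter URLs are
# placeholders, and it assumes the pandoc command-line tool is installed.
# Schedule it from cron, e.g.:  0 2 * * * /usr/bin/python3 build_book.py

import subprocess
import urllib.request

CHAPTERS = [
    "https://pm4id.org/chapter/1/",   # hypothetical URLs; the real site may differ
    "https://pm4id.org/chapter/2/",
]

def fetch(url):
    """Download one chapter page as HTML."""
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

def build():
    html = "\n".join(fetch(url) for url in CHAPTERS)
    with open("pm4id.html", "w") as f:       # the downloadable HTML version
        f.write(html)
    for out in ["pm4id.epub", "pm4id.pdf"]:  # shell out to pandoc for other formats
        subprocess.run(["pandoc", "pm4id.html", "-o", out], check=True)

if __name__ == "__main__":
    build()
```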

If this new textbook isn’t the best OER revise/remix ever, I’d like to know what is! Really. Leave links in the comments below.

In the same way that faculty around the world give students assignments to contribute to Wikipedia, I think it would be awesome if more faculty assigned students to localize / revise / remix CC licensed materials from Flat World Knowledge, CK12, OpenStax, and other OER authors and publishers. Each time I give this kind of assignment, I find that my students invest in their work at a completely different level and go far above and beyond what I ever imagined they could do. Now these students are co-authors on a book that is being used in programs across the US (and world? let me know if you’re using PM4ID in your class!) and have an incredible portfolio piece to showcase to future potential employers and their moms.

Congratulations to my IPT 682 students. You guys hit it out of the park!


Tuition is a Movie Ticket, OER are Popcorn

More response to the interesting discussion happening on the (closed) oer-community list. Brian Lamb asks:

Finally, can somebody tell me if an NC license forbids reuse by non-profit public education institutions that charge tuition? Seems like a fairly simple question, but I’ve heard authoritative responses that wholly contradict each other on that point.

The extremely misguided thinking Brian is referring to (and not personally guilty of) goes, “If someone charges tuition for a course that uses an NC textbook, that violates the terms of the license.” This line of thinking is completely wrong. Full stop. Here’s why.

Every person in the world already has permission to use BY-NC-SA materials non-commercially. This group, every person in the world, includes students who enroll in a tuition-charging class. The STUDENT is the user, not the UNIVERSITY. What is the purpose of a textbook? To promote learning. Who learns, the university or the student? The student. Who buys the textbook, the university or the student? The student. The student is the user of the BY-NC-SA material, regardless of who suggests that s/he use it. And if a student wants help exercising their BY-NC-SA rights with regard to an OER, and is willing and able to pay someone to help them exercise those rights more effectively or efficiently than they can on their own, the NC clause doesn’t regulate that. Period.

The mention of “tuition” is a red herring. Tuition has nothing to do with textbooks. When you go to the movies, you have to buy a ticket to get into the theater. But no matter how much the movie theater wishes you would buy their ridiculously overpriced popcorn, they can’t force you to. Likewise, when you take a university course, you have to pay tuition to get into the class. But no matter how much the university wishes you would buy the ridiculously overpriced required textbook, they can’t force you to.

Again, the student is the user and the student is completely within their BY-NC-SA rights to hire a tutor to help them understand the OER, to pay for an assessment of what they learned using the OER, or to use the OER as the primary material they study when they enroll in a university course. The one thing anyone other than the rights holder can’t do is charge for access to the OER.

It’s a little known fact, but the NC clause does explicitly define one use as definitely and in every case noncommercial:

[Exchange of OER] by means of digital file-sharing or otherwise shall not be considered to be intended for or directed toward commercial advantage or private monetary compensation, provided there is no payment of any monetary compensation in connection with the exchange of copyrighted works.

Black and white. In the license text itself. If a university makes BY-NC-SA materials available to its faculty and students, then as long as it does not charge for access to the OER, it is acting within the rights granted by the license.

The idea that using NC licensed content in universities violates the license is nothing but FUD, and we simply need some case law to put this argument to bed. But I predict that you will never see a publisher litigate on this issue because they know they will lose, and for their trouble will have paid the legal fees necessary to establish the case law that undercuts their arguments.

{ 6 comments }

Agreeing with Stephen: Perspective Matters

Stop the presses. I’m going to agree with Stephen here.

In a recent email to the (closed) oer-community mailing list, Stephen argued that perspective plays a significant role in this debate. He couldn’t be more correct. Just as there is not One True License, there is not One True Perspective on the free, nonfree, open, libre, etc., debate. A few examples:

– Some people look at OER issues from the perspective of the content, and some see them from the perspective of the people who use the content. Content-p drives people to favor SA licenses, to ensure that derivatives of the content always remain free. People-p drives people to reject SA, so that derivers always remain free to license their derivatives as they choose. Which is the One True Perspective?

– In this thread we have already seen people who view NC from the perspective of the licensor and others who see NC from the perspective of the licensee. Licensor-p sees NC as enabling and facilitating commercialization. Licensee-p sees NC as forbidding commercialization. Which is the One True Perspective?

– As we’re also seeing on this thread, we can look at OER from the perspective of Access to content (without which permissions granted by licenses are meaningless) and from the perspective of the permissions granted by Licenses. I recently discussed these two perspectives in more detail. Which of these perspectives is most important? Which is the One True Perspective?

– As a final example, some people look at “open” from the perspective of a Bright Line test, while others take a more Accepting perspective. Bright Line-p enables people to make clear distinctions between what is and what is not open. Accepting-p enables people to recognize and value movements toward becoming more open, without passing judgments on people who “aren’t there yet.” Which of these is the One True Perspective?

In my 2008 OpenCourseWars story, I used a jihadi metaphor to describe licensing conversations. The jihadi metaphor is appropriate because LICENSING ARGUMENTS ARE ARGUMENTS OF PERSPECTIVE. When we argue that one particular way of licensing is better than others, we’re really arguing that one perspective is better or truer than others. In other words, whenever we make an argument that says “everyone should use a [free | NC | etc.] license,” we are making a _religious_ argument – an argument which dictates the perspective by which we think everyone else should be judged.

When we move licensing outside the realm of religion, we can recognize the truth of Stephen’s claim about the importance of perspective. We can also realize that, depending on the peculiarities of a specific context and the personal or organizational perspectives of a specific licensor, different licenses will be optimal under different circumstances.

It would be great if the world were simple enough that One License to Rule Them All could exist, but it isn’t. I wish to Heaven we would stop arguing about it, and just trust individuals and organizations to understand their own contexts, goals, and perspectives sufficiently well to pick the license that best meets their needs.



http://opencontent.org/blog/page/24

Cable on Free vs Open

Cable Green sent a frustrated email today to the Educause Openness Constituent Group. Here’s the key point:

The Babson Survey Research Group has released a new report: Growing the Curriculum: Open Education Resources in U.S. Higher Education.

This sentence is of particular concern to me: “One concept very important to many in the OER field was rarely mentioned at all – licensing terms such as creative commons that permit free use or re-purposing by others.”

I think I’ll run a webinar series (as many as it takes) for Chief Academic Officers to help them better understand: (1) OER and (2) the difference between “free” and “open.”

I share his frustration. Here’s one humble contribution to making it easier to understand the difference between free and open.

A word about each quadrant.

On the Fence. 99% of content on the internet probably falls into this category. Completely free for you to access and read, but fully copyrighted – no permission for you to republish NYT articles on your own website or translate that CNN article into Swahili.

Old School. A small but growing amount of online content fits this category, like the articles behind the Chronicle’s paywall.

Open. Free to access and read, with free permissions to do the 4Rs – reuse, redistribute, remix, revise. Like the videos in Khan Academy or the text in Wikipedia.

No Man’s Land. I’m not aware of anything in this space. The first person to purchase the material would start legally distributing it outside the paywall, defeating the purpose.
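
If it helps, here is a toy sketch (my own, not from the post) that treats the two underlying questions (is the content free to access, and does it carry the 4R permissions?) as a pair of booleans and maps them onto the four quadrant names above.

    # Toy illustration of the free-vs-open quadrants described above.
    def quadrant(free_to_access: bool, four_r_permissions: bool) -> str:
        if free_to_access and four_r_permissions:
            return "Open"            # e.g., Khan Academy videos, Wikipedia text
        if free_to_access:
            return "On the Fence"    # free to read, but fully copyrighted
        if four_r_permissions:
            return "No Man's Land"   # paywalled yet openly licensed; self-defeating
        return "Old School"          # paywalled and copyrighted, e.g., Chronicle articles

    print(quadrant(True, False))     # -> "On the Fence"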

As I’ve been saying, the real risk of the On the Fence MOOCs (aka xMOOCs) is that they confuse people about “open.” “Open” does not mean “free to access but copyrighted,” which is what Udacity and Coursera are. Open means free access plus free 4R permissions. The On the Fence MOOCs are drawing energy and attention away from where the real battle is happening – in open educational resources. OER is the only space where everyone has permission to make and redistribute the changes necessary to best support learning in their local context. Consequently, OER is the only space where continuous quality improvement is possible, as I’ve been saying for years now. You can have all the analytics in the world telling you where your course needs improving, but without 4R permissions you’re not allowed to make those improvements.

Being open is key to driving quality, and we need to help Chief Academic Officers who are desperately trying to improve student success get the message.

{ 12 comments }

Updated: What’s Happening at FWK?

FWK recently announced a change in their business model. I’m disappointed by this decision, but understand from an economic perspective why it is being made. The license issues involved are complicated. I am continuing to advise FWK, as is Creative Commons, on the licensing options that allow for maximum openness while ensuring that FWK has a sustainable business model. More will follow on that topic as decisions are made.

{ 2 comments }

New Degreed Beta

Educators are frequently criticized for being disconnected from reality, with insults along the lines of “Those who can, do. Those who can’t, teach.” I’m very cognizant of these criticisms and agree that there’s terrific danger in faculty remaining disconnected from the “real world” while trying to prepare students to thrive out in it. One of the things I’ve enjoyed most about EdStartup 101 is the opportunity to really dig in on issues relating to entrepreneurship in education not just academically, but in the “real world” as well.

I’m working with two startups right now that I’m extremely excited about. Degreed, the further along of the two, has just released a new version of the site that addresses much of the feedback from our previous beta round (including sign-ups without Facebook!). As part of the newer, better beta, we’ve kicked off an awareness campaign including some great new videos and a Kickstarter-like fundraising campaign. Check out the video and if you like what we’re trying to do, consider making a donation.



http://opencontent.org/blog/page/25

MOOCs, Showrooming, and Higher Ed

If you’re wondering what the impact of MOOCs will be on formal higher education, one answer came in the form of two interesting stories about the upcoming holiday season:

Thousands of shoppers headed to Target to shop last holiday season. Some made purchases at Target; others pulled out their smartphones, scanned product barcodes and made purchases online instead.

To prevent that practice, commonly known as “showrooming,” Target has pledged to match prices with select online retailers — including Amazon.com. Mashable

and

The Wall Street Journal reports that Best Buy is planning to match prices on many items offered by Amazon and other online retailers during the holiday shopping season. The move appears to be part of a larger effort from Best Buy to crack down on the number of consumers who scope out products in bricks-and-mortar stores and then buy them more cheaply online, a trend known as showrooming. Mashable

In the same way that online stores are clearly exerting downward price pressure on brick and mortar retailers – to the extent that they’re sending out press notices indicating that they’re responding to those price pressures – one impact of MOOCs on formal higher education will be this same type of downward price pressure. Watch for announcements about how institutions will respond.

{ 3 comments }

OER Quality Standards

The topic of OER quality standards came up at #OpenEd12 today. It makes me a little crazy. Why, why, why do we continue to focus on indirect proxies for quality when we’re capable of measuring quality directly?

Direct Measure of OER Quality
  • Degree to which the OER facilitates student learning

Indirect Proxies for Quality
  • Academic credentials of the author / creator
  • Degree of interactivity
  • Amount of multimedia
  • Amount of editorial effort put into materials
  • Length / number of words / rigor
  • DPI of embedded artwork
  • &c.

At the end of the day, would you rather have (1) an OER that successfully facilitates student learning, or (2) an OER written by a top author that is 700 pages long and chock full of gorgeous artwork, simulations, and video? OER can be everything in the indirect column and fail on the direct column. So why do we continue to care about and focus on indirect proxies for quality when we could go straight for the direct measures of quality? And why do we continue to think about quality as “static” when we have the capability to engage in continuous quality improvement? Why are we willing to work with materials that aren’t constantly getting better, as OER can when used in a principled way?

{ 4 comments }

Remix as Milk to Chocolate

Found this great OER metaphor and image today via Dana West.

Milk | Role | OERs
Cow | Primary producer / creator | Teacher / author
Calf | Primary consumer | Enrolled student
Farmer | Secondary producer / repurposer | Learning technologist / course leader
Milk bottlers | Primary supplier | Learning technologist
Shop | Secondary supplier | Deposit in institutional repository or open deposit
Human family | Secondary consumer | Teacher within or outside the institution
Human family and pets | Sharers and re-users | Enrolled students of that teacher
Person with milk, person with cocoa powder, person with sugar – can make chocolate | Exchangers and repurposers | Other teachers within or outside the institution
Chocolate in shop fridge | Repository | Deposits in different open repositories
Chocolate eaten | Re-users / maybe sharers | Potentially global learners
Chocolate added to cake mixture | Further re-purposing | Potentially global teachers

Image CC BY-NC-SA Steve took it



http://opencontent.org/blog/page/26

Open Textbook Cost Infographic

20 Million Minds today released an infographic summarizing costs of textbooks and cost savings associated with open textbooks. Click the thumbnail for the full-size version.

{ 0 comments }

Responding to Kira

Kira writes passionately (here; reposted here) about why Creative Commons should abandon the NC and ND clauses. S/he is wrong. The argument comes down to this:

“The crux of the concern raised by Students for Free Culture comes down to whether Creative Commons will be locked in by pressures to serve the interests of rightsholders or be committed to a strategic standard promoting free licensing towards the creation of an indivisible and shared commons.”

There are three major problems with the concern raised by SFC.

First, the sole purpose of Creative Commons is to serve rightsholders. Rightsholders are the only entities authorized to place a license on a work. Consequently, rightsholders are the only direct “customers” of the licenses Creative Commons provides.

When Kira asks whether or not CC will remain “locked in by pressures to serve the interests of rightsholders,” what other path forward does s/he imagine for Creative Commons? Whose interests might they be locked into instead? Users of creative works? “Unfortunately,” a user can’t place a license on someone else’s work. Creative Commons is an organization that offers licenses. Ergo, the only entities capable of using Creative Commons’ services are rightsholders.

I understand, of course, that there are huge benefits to society when rightsholders choose to use a Creative Commons license. But those benefits accrue only when rightsholders choose to use Creative Commons licenses. Creative Commons will forever and always serve the interests of rightsholders. There is no one else they can serve directly.

Second, it is a bit hypocritical to advocate for “the creation of an indivisible” commons, while simultaneously advocating for use of the SA clause. The SA clause is the sole source of license incompatibility in the CC universe. Without the SA requirement, the CC universe would already be undivided. NC and ND do not create legal divisions and incompatibilities among licenses, SA does. Consequently, if our true goal is an indivisible commons – that is, a commons which is incapable of being divided – CC must drop the SA clause. I’m not arguing that CC drop the SA clause, but want to be clear about the legal source of division in the CC universe.

Third, given the empirical evidence regarding the level of demand from rightsholders for the NC and ND clauses, it strains credulity to suggest that people will stop using the 3.0 versions of the NC- and ND-bearing licenses. These will inevitably fall out of sync with the current revs of the CC licenses. This gap between the 3.0 framework and the current framework will cause a significant “division” in the space of openly licensed creative works. It is also highly likely that, if CC refuses to continue to serve as steward for these clauses, we will see the emergence of a new steward. This new actor will create new NC- and ND-enabled licenses into which the historical demand for NC and ND will flow. These new, even less-compatible licenses will further splinter the universe of openly licensed creative works. Consequently, the best thing Creative Commons could do to ensure a commons “as undivided as possible” is continue to steward the NC and ND clauses going forward. An indivisible commons is legally impossible while SA is an option, so “as undivided as possible” is the best we can work for in the current context.

{ 0 comments }

Degreed Beta

For several months now I’ve been working with a great group of people on Degreed. Today we launched the public beta at the HASTAC/MacArthur grantees meeting (the Mozilla open badges functionality in Degreed is supported by a DML grant).

So what does it do?

Degreed eliminates the distinction between formal and informal learning by jailbreaking your college transcript and interweaving Mozilla open badges and other informal credentials together with your college courses. We help you categorize these formal and informal credentials in order to create a credential remix that allows you to showcase everything you know – not just what you learned in school. Unlike your college transcript, your Degreed profile continues to grow as you continue to learn throughout life.

In a nutshell: You login with Facebook. Degreed pulls your Education information from Facebook and prompts you to add more detail and confirm it. Degreed then makes a best guess about the classes you would have taken and builds out a generic transcript for you. (In a future version you’ll be able to both (1) refine the generic transcript by hand or (2) upload a transcript so that trained squirrels can refine your profile for you and mark it “verified”.) You can then add other informal courses (e.g., from Udacity), Mozilla open badges, etc. to fill out your profile. Categorizing each formal or informal experience allows us to count these experiences toward points that help you level up (we use degree equivalents for levels) in a wide range of areas.
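
Purely to illustrate the “points and levels” mechanic (the post doesn’t spell out Degreed’s actual scoring rules, so every point value, category, and threshold below is invented), the idea is roughly: each categorized experience contributes points, and accumulated points map to degree-equivalent levels.

    # Illustrative only: hypothetical point values and degree-equivalent thresholds.
    POINTS = {"university_course": 30, "udacity_course": 10, "open_badge": 5}
    LEVELS = [(0, "No equivalent"), (120, "Associate equivalent"), (360, "Bachelor equivalent")]

    def level_for(experiences):
        """Sum points for a list of categorized experiences; return the highest level reached."""
        total = sum(POINTS.get(kind, 0) for kind in experiences)
        name = LEVELS[0][1]
        for threshold, label in LEVELS:
            if total >= threshold:
                name = label
        return name

    print(level_for(["university_course"] * 4))  # 120 points -> "Associate equivalent"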

Did I mention it’s BETA? In the spirit of #EdStartup, this is very much a Minimum Viable Product – not the final, feature-complete version, but definitely ready to play with. So go mess around with the beta and see if you can break it. Use the Feedback tab on the left of every page to let us know what you broke and how. Let us know what you think.



http://opencontent.org/blog/page/27

Slip Sliding Away: The Open in MOOC

Looking through Stephen Downes’ list of MOOCs today, I saw that there’s a MOOC using almost exactly the same name as the open online course I began teaching this past winter. Compare my Introduction to Openness in Education from Winter 2012, which will be offered again in Winter 2013 and every winter term for the foreseeable future, and Rory McGreal and George Siemens’s Openness in Education being offered as I type (Fall 2012).

After pausing a moment to wonder about why they chose that particular name for their course, and how it might create confusion with the course I will offer at BYU again as soon as the AU course ends, I thought “Well, at least I can go see what they’re doing and see how it compares to my course. I bet there’s great stuff they’re doing that I can learn and benefit from.”

Except that I can’t.

I’ve clicked every link in the left-hand nav on http://open.mooc.ca/. I even registered for the course to see if that would magically unlock the content somehow. Nothing. The closest I came was finding a list of topics with no associated content. After another 15 minutes of digging through the site, I found this in the Newsletter Archives, part way down into what appears to be the first newsletter sent out to students:

“You will receive an email later today with readings and resources for Week 1.”

So… not only is this supposed to be a MOOC, but it’s supposed to be a MOOC about openness in education, and there is literally no content on the website. Apparently (though I’ve yet to confirm that this works, either) the only way to access the materials for this massive “open” online course on openness is to subscribe to a mailing list.

Huh?

I’ve already complained in the past about the “xMOOCs” not using openly licensed materials, and consequently not qualifying as massive OPEN online courses in my mind. But I never thought I’d see a cMOOC – one on openness, no less – that required a person to register before they could wait for the course’s content to be mailed to them.

I feel extraordinarily strongly that you should not have to register or give away any information about yourself to have full access to all the content of anything that wants to describe itself as an open online course (whether “massive” or not).

Maybe I’m just dense, and the course materials are all right there on the website. But I can’t find them. And I like to think of myself as a relatively sophisticated user. Is this a new “feature” of gRSShopper, the de facto cMOOC platform?

What is happening to “open?”

{ 9 comments }

To Would-be Education Reformers

I really, really want to encourage you to take this suggestion.

Reading what you have written all over the internet, and listening to you at public hearings, it appears that you have a crystal clear idea of what should happen in schools to make them places that best support student learning and growth. In all seriousness and sincerity, I want to invite you to start a charter school that fully implements your instructional approaches and other philosophies. I encourage you to do this because you can only gain so much credibility from the sidelines. If you really want to drive ed reform in this state, start your own school and demonstrate how much more effective your way of doing things is. People are much more likely to listen to results than rhetoric. It’s easy for policy makers to ignore an armchair quarterback / critic with no academic credentials in education, but when your school’s CRT results top every other public school in the state, no one can ignore you any longer. And if charter schools aren’t your cup of tea, then start a private school. That process is even simpler.

Either way, replace your argumentation with results. Replace it with your own results from your own school where you’ve implemented your own model. Show everyone how to do it rather than just telling them how they should do it. This will move your agenda along SIGNIFICANTLY further / faster. Heaven knows you’re currently spending more time on other education reform-related “activities” than it would take you to actually open a school and start making things better for Utah’s children.

{ 2 comments }

OpenEd12 Early Bird Rate Ends in 5 Days

There are only five (5) days left for the Open Education Conference early bird registration rate. Register by September 15 and save! http://openedconference.org/2012/register/



http://opencontent.org/blog/page/28

Here are the things that stood out to me most during the three-day meeting. Sorry for the brain-dump format.

Mooresville, NC moved graduation rates from 68% to 90% since the move to devices and all-digital content
Two professional development release days PER MONTH for faculty to skill up on digital and using data
Small group differentiated instruction, almost no whole-class instruction
Superintendent visits every classroom in the district multiple times each year, primarily to say thank you to the teachers.
Funding model – $1 / day / student ($200/year) pays for devices and content. Average cost for online content was $35/student across all subjects.
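
A quick back-of-the-envelope check on that funding model (assuming the $200/year figure reflects roughly 200 school days at $1/day):

    # Rough arithmetic for the per-student funding model noted above.
    per_student_per_year = 200   # dollars: roughly 200 school days at $1/day (assumption)
    content_cost = 35            # dollars: average online content per student, all subjects
    device_and_other = per_student_per_year - content_cost
    print(device_and_other)      # -> 165 dollars/year left for the device and everything else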

We have to avoid the “Kabuki” version of education reform, a kind of innovation theater in which everything changes except adult behavior.

$429M in 2011 in pure VC is triple the amount spent in 2002. About 128 education companies received about $1B of VC money in the last 5 years.

When devices have battery life lasting 4 hours, but school lasts 6 hours, power infrastructure in “first world” schools is suddenly insufficient.

Pearson can scale its content and services, but has no accountability for student learning outcomes. A charter has accountability for outcomes but can’t scale. We need organizations that can scale and share accountability for outcomes.

We have learning scientists, but where are the learning engineers? The people who leverage and apply what we know from the science of learning to help learning happen? Political skills are equally important for these folks. (Why isn’t this role filled by instructional designers?)

Scaling in education involves adapting, not adopting.

Genetically modified food as a model for revise/remix. How can we produce “hybrids” that can succeed under local conditions? Super-high-yield corn may grow well in Iowa, but die completely in Africa. A lower-yield hybrid that can at least live and produce in Africa is required.

“Research should be defined as doing something where half of the people think that’s impossible. And half of them think …eehhhh, maybe that will work. Whenever there’s a breakthrough, a true breakthrough, you can go back and find a time period when the consensus was, well that’s nonsense. So what that means is that a true creative researcher must have confidence in nonsense.” – Burt Rutan

{ 0 comments }

Open education has been an odd duck for the past decade. The overwhelming majority of the activity in the space has been in OER. People and institutions have found ways to openly license educational materials, posted these on a website, and hung the “Mission Accomplished” banner on their aircraft carriers. It reminds me of the Voyager Golden Record, content shared by being blasted out into space in the hopes that someone will eventually find and use it. Once we successfully launched, our obligations were met. “Sure hope somebody locates and learns from that. If they do, maybe they’ll send an email to let us know.”

To my mind, this is another example of education failing to cross the theory-to-practice bridge. We just don’t seem to be willing to adventure outside the ivory tower on the off chance we might have to deal with the difficulties of actually getting something accomplished in the real world. As long as we can set up the servers and upload the content without leaving campus, we’re happy to “share.” But there’s a difference between sharing and only offering to share. When I offer to share my ice cream cone, but nobody bites (as it were), I haven’t actually shared. I’ve only offered to share. Now, there is lots of goodness in people being willing to share. And sometimes just persuading people to be willing feels like a major political step forward. But it’s not time to hang the banner just yet.

In announcing several new tablets yesterday, Amazon CEO Jeff Bezos dropped some wonderful quotes. One was, “Above all else, align with customers. Win when they win. Win only when they win.”

Do OER creators have “customers” (in the broadest sense of the word)? If you believe the answer is no, ponder that for a while. If you believe the answer is yes, then how do OER creators help their customers “win”? If OER providers only felt like they “won” when learners “won,” how would the open education landscape be different?

Back in the day (maybe they still do it; I don’t know) MIT’s AITI and MISTI programs sent students out into the world to help people use and benefit from MIT OCW. As we say in crop irrigation country, they worked to make sure the water got to the end of the row. Or as Bezos would say, they helped their customers win. And according to Bezos, when they did that, MIT OCW won too.

Fast forward ten years, and Saylor has said ‘providing content isn’t enough’ on a whole other scale. Their partnership with Excelsior provides – right now, today – affordable paths to honest-to-goodness college credit for people who learn from OER. OERu is working on the same thing. Saylor’s partnership with Straighterline, and Udacity and edX’s partnerships with Pearson, provide non-credit bearing opportunities for people to demonstrate what they know and can do. These partnerships are arguably less valuable than the Saylor relationship that provides real college credits, but that may or may not continue to be true into the future.

The reason I highlight Saylor, Udacity, and now edX is that they’re moving beyond just “offering to share” and getting the water to the end of the row with secure assessments and credentials. Once Khan, CodeAcademy, and others start following open standards for badges rather than awarding their own proprietary badges, we’ll add them to the list of good actors working to make sure their customers win.

Bezos went on to say, “We want to make money when people use our devices, not when they buy our devices. If someone buys one of our devices and puts it in a desk drawer and never uses it, we don’t deserve to make any money.”

Amen. I think the lesson is exactly the same for open educational resources. If we’re really trying to help learners “win,” an OER provider hasn’t finished their job when they’ve published content. They’re succeeding when someone benefits from what they’ve done – and only then. We need to think harder about how to make this happen, and how to do it sustainably.

{ 2 comments }

Self Intro for EdStartup



Our open online class on entrepreneurship in education, EdStartup 101, is starting this coming Monday!

The most useful thing we did when we put the word out about the course was to invite interested people to fill out a simple form. We hoped this would give us an idea of what type of people were coming. A few weeks later, 850 people had indicated their intention to participate in the course. Fortunately for us, Google Forms has a handy feature that auto-visualizes form responses. So, in case you’re interested, here are a few characteristics of the EdStartup 101 participants.

Hoping to see many of you next week!

{ 3 comments }

The No Textbook Degree

I’ve been thinking about what’s next for OER… With the current set of MOOCs – which aren’t even open – grabbing attention away from the real movement, we need an exciting idea to get behind. Something that can inspire another decade of work across the nation and around the world. (When was the last time you heard about a new OpenCourseWare initiative launching in the US? When was the last time you personally thought of OCW as being really innovative?) We need something that can capture the imagination, something that can inspire both faculty and institutional leaders, something that will bring another 100 US post-secondary schools into the open education movement. Most of all, we need something that will significantly bless the lives of millions of students, providing them access to educational opportunities that can radically transform their lives for good.

A recent Forbes article said, “in the case of low-tuition institutions (particularly community colleges), the cost of textbooks can even be in excess of the tuition and fees students pay.” Pondering the magnitude of this content tax on students, digging around in some community college program descriptions, and thinking about relevant, high-quality OER collections like those at Saylor, the Open Course Library, Project Kaleidoscope, and Flat World Knowledge, something has become very clear to me. There is currently a sufficient amount of high quality OER in the general education, business, and computer science areas that a community college could assemble a fully OER-based Associates Degree in either Business or CS from these materials if it had the institutional will and leadership to do so.

Consequently, the Fall 2014 semester will be the first one for which community colleges market “no textbook degrees.”

The “no textbook degree” – a degree where the materials formally listed on course syllabi are OER instead of traditional textbooks – cuts the cost of a community college degree in half. Imagine the competitive advantage for a school that can market a degree at 50% the cost of neighboring programs. Imagine the ease with which neighboring programs can make the same move, since it’s all based on OER.

By 2019, every community college in the country will have moved to no textbook degrees in business and computer science out of sheer competitive necessity. Other degree programs will follow. If Christensen’s model of disruptive innovation is to be believed, these OER will work their way up from the community colleges into the classrooms at public and eventually private universities.

An entire marketplace will spring up out of nowhere – in the same way that RedHat, IBM, and others provide services and support around open source software, new entities will provide service and support for institutions adopting OER and the “no textbook degree” model. Of course, the technically savvy will continue to adopt OER on their own, support themselves, and run Linux on their desktops.*

And finally, commercial textbook publishers will, for the first time, actually and acutely feel the impact of OER on their finances. By 2020, US students alone will have saved well over $1B.

Alan Kay famously said “The best way to predict the future is to invent it.” I think that means it’s up to me and you to make the “no textbook degree” happen. Let’s stop allowing the big name schools who cater to the wealthiest and brightest students to dictate the terms of the open education discourse in the public mind and public media (e.g., students who already have the academic preparation necessary to succeed in a course on Artificial Intelligence at Udacity). Let’s reclaim the open education discourse and make it about saving normal people money and increasing the academic success, graduation rates, and employment potential of normal people. We can do this.

* (Parenthetically, because of the continued uncertainty around the meaning of NC, the support marketplace will refuse to support NC licensed content. Thus, adoption and use of NC-licensed content will be confined to the DIY-ers among faculty, unless the creator of the NC licensed content is also a support provider. By 2017 the overwhelming majority of “casual creators” of OER that previously used the NC term will drop it from their licenses as they realize that NC condemns their content to obscurity due to lack of support.)

{ 7 comments }

Ed Startup 101: A new open online course

I’m super excited to share the news about Ed Startup 101 – a new open online course being offered this fall by a great team including Richard Culatta, Todd Manwaring, and myself. The course focuses on entrepreneurship in the educational context, and will be useful whether you’re taking the “quit your job, raise some money, start a company” route, or the “keep your job, write some grants, start a project” route, or if you’re interested in making a long-term impact on education in some other way.

We have a ridiculous list of incredibly talented experts who will be participating in the course. Each will lead a live video-based question and answer session, in which participants can ask the question they’ve always wanted to put to a venture capitalist, serial entrepreneur, foundation program manager, consultant, etc. Just these sessions alone will be worth the cost of admission. Oh wait – the course is free and open. Either way, these sessions are going to be fantastic.

We will be awarding badges through the Mozilla Open Badge Infrastructure for successful completion of course projects. The course is available for credit to BYU students as IPT 692R section 006. If you’re at another university and want to figure out how you can earn credit for the course, send me an email at david.wiley@gmail.com.

Complete our short More Information / Pre-enrollment Survey to let us know you’re interested and to receive updates as we get closer to the start of term. The course runs August 24 – December 7.


