The Encyclopaedia of Educational Philosophy and Theory


Open Works, Open Cultures and Open Learning Systems

Michael A. Peters

Open Works

The idea of openness as a political, social, and psychological metaphor has been part of a set of enduring narratives in the West since before the flourishing of modern democracy, scientific communication and the rise of the knowledge economy. Principally these narratives have been about the nature of freedom, the primacy of rights to self-expression, the constitution of the public sphere or the commons, and the intimate link between openness and creativity. The core philosophical idea concerns openness to experience and interpretation, such that a work, a language or a system permits multiple meanings and interpretations, with an accent on the response, imagination and activity of the reader, learner or user. The classic work that philosophically develops this central idea is the Philosophical Investigations by Ludwig Wittgenstein (1953), who draws a close relationship between language as a set of open, overlapping speech activities or discourses he calls ‘language games’ and a ‘form of life’ (or culture). The Wittgenstein of the Investigations demonstrated that there is no such thing as a logical syntax or meta-logical language considered as a closed system of rules that acts as a hard and fast grammar for any natural language. The ‘language games’ conception denies the very possibility of a logical calculus for language: there are no necessary and sufficient conditions (or logical rules) for the use of a word. In Wittgenstein’s account of rule-following we see a view of openness to language and to the text that permits multiple interpretations and the active construction of meanings.
This emphasis on the openness of language, of the text and, indeed, of ‘openness to the other’ as an aspect of subjectivity, which rests on the values of multiplicity and pluralism, is in part a reaction by Wittgenstein against logical empiricist understandings of logico-linguistic rules that allegedly allow for only pure and single meanings unambiguously correlated with words that depict the world.

Wittgenstein’s Tractatus addressed the central problems of philosophy concerning the relations between the world, thought and language. He presents a solution grounded in logic and in the nature of representation, such that a thought or proposition pictures or represents the world by virtue of shared logical form. In the Investigations Wittgenstein shifts his emphasis from logic to ordinary language, a shift that works to appreciate the openness of language and the language-user while disabusing us of traditional approaches to the question of language, truth, and meaning. He begins this new philosophy by asserting that the meaning of a word is its use in the language, and he demonstrates that there are multiple uses that are part of the activity of the language games that comprise a culture or ‘form of life’. In a famous passage in the Investigations (paragraphs 65-69) Wittgenstein argues that there is no feature common to all games that constitutes the basis for calling them ‘games’: they are tied together through a set of family resemblances. In philosophical terms this constitutes at one and the same time the openness of both language and culture. Others following him have appealed to Wittgenstein’s concept of openness to protect the open nature of language, thought and art.

Morris Weitz (1956) in his famous essay ‘The role of theory in aesthetics’, for instance, appeals to Wittgenstein to claim that art is an open concept, in that it is possible to extend its meaning in unpredictable and completely novel ways in order to apply the concept to new entities or activities that were not included in the original concept—thus no necessary and sufficient conditions for something to count as art can be provided. (A closed concept, by contrast, is one for which both necessary and sufficient conditions can be stated.) Following Wittgenstein, he says we should ask not ‘what is art?’ but ‘how is the concept of “art” used?’ Weitz (1956: 31) notes also that sub-concepts of art like ‘novel’, ‘painting’, ‘tragedy’, ‘comedy’ and ‘opera’ are likewise open, suggesting that ‘A concept is open if its conditions of application are emendable and corrigible, i.e., if a situation or case can be imagined or secured which would call for some sort of decision on our part to extend the use of the concept to cover this, or to close the concept and invent a new one to deal with the new case and its new property’. He asks whether Dos Passos’s U.S.A., Virginia Woolf’s To the Lighthouse, or Joyce’s Finnegans Wake is a novel. Such works require an extension of the concept to cover the new case, and the question thus turns on our decision to enlarge the conditions for applying the concept. As he puts it:

‘Art’, itself, is an open concept. New conditions (cases) have constantly arisen and will undoubtedly constantly arise; new art forms, new movements will emerge, which will demand decisions on the part of those interested, usually professional critics, as to whether the concept will be extended or not…the very expansive, adventurous character of art, its ever-present changes and novel creations, makes it logically impossible to ensure any defining properties (p. 32).

The multiplicity and radical openness that Wittgenstein finds in language and thought, then, seems to intimate a pluralistic world. This openness seems to apply also to other forms of expression such as music, as well as to culture and human nature. The emphasis on radical openness distinguishes the later Wittgenstein as someone who overcomes the postmodern condition and provides a constructive and positive response to the disintegration of culture, language and the self (see Peters & Marshall, 1999; Peters et al., 2009). He is also a philosopher who understands the emerging nature of information systems and networks (Blair, 2006) and anticipates the Internet as a system platform for language, communication, art and self-expression (Pichler & Hrachovec, 2008).1) Even Wittgenstein’s own compositions were radically open to interpretation, encouraged by the ‘hypertext’ nature of his writings (Pichler, 2002). Others have followed in his footsteps, or arrived at the value of multiplicity of meanings and the plurality of interpretation somewhat differently, while drawing on similar sources and motivations.

Three Forms of Openness

In 1962 Umberto Eco, the Italian novelist and semiotician, published his Opera aperta (The Open Work)2), which, while belonging to his pre-semiotic writings, nevertheless utilizes the underlying notion of a linguistic system to discuss the development and values of open works, where openness stands for multiplicity of meaning, the freedom of the reader and the plurality of interpretation. As David Robey makes clear in his Introduction to the Harvard edition of this modern classic:

Opera aperta in particular is still a significant work, both on account of the enduring historical usefulness of its concept of ‘openness’, and because of the striking way in which it anticipates two of the major themes of contemporary literary theory from the mid-sixties onwards: the insistence on the element of multiplicity, plurality, polysemy in art, and the emphasis on the role of the reader, on literary interpretation and response as an interactive process between reader and text (p. viii).

In ‘The Poetics of the Open Work’ Eco begins by noting that a number of contemporary avant-garde pieces of music–Karlheinz Stockhausen's Klavierstück XI, Berio's Sequenza I for solo flute, Henri Pousseur's Scambi, and Pierre Boulez's third Piano Sonata–differ from classical works by leaving considerable autonomy to the performer in the way (s)he chooses to play the work. He traces the idea of ‘openness’ in the work of art from its beginnings in Symbolist poetry, focused on Mallarmé, and the modernist literature of the early part of the twentieth century, exemplified by James Joyce. Citing Henri Pousseur, he defines the ‘open’ work as one that ‘produces in the interpreter acts of conscious freedom, putting him at the centre of a net of inexhaustible relations among which he inserts his own form’ (p. 4). Eco's openness is in part a response to the aesthetics of Benedetto Croce, which dominated Italian criticism in the first half of the twentieth century and strongly emphasized the ideas of pure meaning and authorial intent.

Eco distinguishes between three forms of openness in the work of art, in terms of interpretation, semantic content, and the ‘work in movement’. While all works of art are capable of bearing a number of interpretations, the open work is one in which there are no established codes for its interpretation.

For Eco, the openness of Modernist literature (such as Symbolist poetry) is distinguished from medieval openness by the absence of fixed interpretative registers, which he gives, quoting Dante, as the literal, the allegorical, the moral and the anagogical. In medieval literature no interpretations could exist beyond these four registers, which constituted the code by which writings were interpreted. Modernist literature has no such pre-established codes by which it is to be interpreted; indeed, what marks the modernist artist out from the pre-modernist artist is the awareness of the artwork as inevitably offering a “field of possibilities” of interpretation. Rather than seeking to limit those possibilities (through an established code of interpretation, and so on), the artist actively seeks the openness that is implicit in all artworks. As examples of this active seeking of openness of interpretation in the absence of pre-established codes, Eco gives the poetry of Verlaine and Mallarmé, and the novels of Kafka, which have been described (especially by Lukács) as “allegorical”, but which yield, says Eco:

no confirmation in an encyclopedia, no matching paradigm in the cosmos, to provide a key to the symbolism. The various existentialist, theological, clinical and psychoanalytical interpretations of the Kafka symbols cannot exhaust all the possibilities of his works.

The second form of openness Eco describes operates at the level of semantic content. This is a somewhat problematic idea as applied to music, since it is proverbially uncertain what the semantic content (the “real-world meaning”) of music may be. Nevertheless, Eco uses serial music as an example of this semantic openness, comparing it to the verbal puns of Joyce's Finnegans Wake, in which two, three, or even ten different etymological roots are combined in such a way that a single word can set up a knot of different submeanings, each of which in turn coincides and interrelates with other local allusions, which are themselves “open” to new configurations and probabilities of interpretation.

Serial music is composed using a particular arrangement, usually of the 12 possible semitones, as the organising principle, and hence often implies several continuations or contexts at once. Henri Pousseur describes the listener to contemporary music (contemporary, that is, with the late 1950s and early 1960s), which disrupts the usual “term-to-term determination” of music, as placing himself “in the midst of an inexhaustible network of relationships” and choosing for himself his own “modes of approach, his reference points and his scale.” Leaving aside the difficult problem of whether the “logical-sounding continuation” of musical material can be compared with the semantic content of language (in other words, whether music's meaning lies in the apparent logic of its continuation), we should also recognize that the difference between the first two forms of openness is one of degree: Kafka and the Symbolists may disrupt our normal sense of narrative form, or of logical continuation, through the use of unorthodox symbolism or ambiguity, but this does not differ in kind from the disruption which occurs in Joyce's use of the pun. This recognition will help us to develop a theory of the open work applied to Kurtág's own musical symbolism.
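The twelve-tone organising principle just described can be given a computational sketch. The following is an illustration only, not drawn from Eco: a tone row is a permutation of the twelve semitones, and serial composition derives further material from it through the standard transformations of retrograde and inversion.

```python
import random

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def make_row(seed=None):
    """A tone row: each of the 12 semitones appears exactly once."""
    row = list(range(12))
    random.Random(seed).shuffle(row)
    return row

def retrograde(row):
    """Play the row backwards."""
    return row[::-1]

def inversion(row):
    """Mirror each interval around the row's first pitch (mod 12)."""
    first = row[0]
    return [(2 * first - pitch) % 12 for pitch in row]

row = make_row(seed=42)
print("prime:     ", [NOTE_NAMES[p] for p in row])
print("retrograde:", [NOTE_NAMES[p] for p in retrograde(row)])
print("inversion: ", [NOTE_NAMES[p] for p in inversion(row)])
```

Because the row constrains continuation without fully determining it, the same material supports many orderings at once, which is one way of cashing out the several simultaneous ‘continuations or contexts’ noted above.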

The third kind of openness Eco perceives is that of the “work in movement” [opere in movimento], which he identifies at the start of his book. This is exemplified by Mallarmé's Livre, in which the order of the poems in this unfinished (both serendipitously and intentionally “unfinished”) work is left undetermined; and also by two of the pieces of music he referred to at the start of The Open Work. Stockhausen's Klavierstück XI requires the pianist to choose between a number of groupings of notes, creating a sequence of his own devising from among them. In Boulez's third Piano Sonata, the first section is made up of ten different passages on unordered sheets of paper, which may be freely re-ordered, although not all permutations are permitted.

Open Cultures

As many scholars and commentators have suggested since the ‘change merchants’ of the 1970s (Marshall McLuhan, Peter Drucker and Alvin Toffler) first raised the issue, we are in the middle of a long-term cultural evolutionary shift, based on digitization and the logic of open systems, that has the capacity to profoundly change all aspects of our daily lives—work, home, school—and existing systems of culture and economy. A wide range of scholars from different disciplines and new media organizations have speculated on the nature of the shift: Richard Stallman established the Free Software Movement and the GNU project3); Yochai Benkler (2006), the Yale law professor, has commented on the wealth of networks and the way that social production transforms freedom and markets; his colleague, Larry Lessig (2004, 2007), also a law professor, has written convincingly on code, copyright and the creative commons and launched the Free Culture Movement, designed to promote the freedom to distribute and modify creative works through the new social media4); Students for Free Culture5), launched in 2004, ‘is a diverse, non-partisan group of students and young people who are working to get their peers involved in the free culture movement’; Michel Bauwens (2005) has written about the political economy of peer production and established the P2P Foundation6); Creative Commons7) was founded in 2001 by experts in cyberlaw and intellectual property; and Wikipedia, the world’s largest open-content encyclopedia, was established in 2001 by Jimmy Wales, an American Internet entrepreneur, whose blog is subtitled Free Knowledge for Free Minds.8)

One influential definition suggests:

Social and technological advances make it possible for a growing part of humanity to access, create, modify, publish and distribute various kinds of works - artworks, scientific and educational materials, software, articles - in short: anything that can be represented in digital form. Many communities have formed to exercise those new possibilities and create a wealth of collectively re-usable works.

By freedom they mean:

  • the freedom to use the work and enjoy the benefits of using it,
  • the freedom to study the work and to apply knowledge acquired from it,
  • the freedom to make and redistribute copies, in whole or in part, of the information or expression,
  • the freedom to make changes and improvements, and to distribute derivative works9)

This is how the Open Cultures Working Group (an open group of artists, researchers and cultural activists) describes the situation in their Vienna Document, subtitled Xnational Net Culture and “The Need to Know” of Information Societies:

Information technologies are setting the global stage for economic and cultural change. More than ever, involvement in shaping the future calls for a wide understanding and reflection on the ecology and politics of information cultures. So called globalization not only signifies a worldwide network of exchange but new forms of hierarchies and fragmentation, producing deep transformations in both physical spaces and immaterial information domains… global communication technologies still hold a significant potential for empowerment, cultural expression and transnational collaboration. To fully realize the potential of life in global information societies we need to acknowledge the plurality of agents in the information landscape and the heterogeneity of collaborative cultural practice. The exploration of alternative futures is linked to a living cultural commons and social practice based on networks of open exchange and communication.10)

Every aspect of culture and economy is being transformed through the process of digitization, which creates new systems of archives, representation and reproduction technologies that portend Web 3.0 and Web 4.0, where all production, material and immaterial, is digitally designed and coordinated through distributed information systems. As Felix Stalder (2004) remarks:

information can be infinitely copied, easily distributed, and endlessly transformed. Contrary to analog culture, other people’s work is not just referenced, but directly incorporated through copying and pasting, remixing, and other standard digital procedures.

Digitization transforms all aspects of cultural production and consumption, favouring the networked peer community over the individual author and blurring the distinction between artists and their audiences. These new digital logics alter the organization of knowledge, education and culture, spawning new technologies as a condition of the openness of the system. The production of texts, sounds and images is now open to new rounds of experimentation and development, providing what Stalder calls ‘a new grammar of digital culture’ and transforming the processes of creativity, which are no longer controlled by traditional knowledge institutions and organizations but rather enabled by platforms and infrastructures that encourage large-scale participation and challenge old hierarchies.

The shift to networked media cultures based on the ethics of participation, sharing and collaboration, involving a volunteer, peer-to-peer gift economy, has its early beginnings in the right to freedom of speech that depended upon the flow and exchange of ideas essential to political democracy, including the notion of a ‘free press’, the market and the academy. Perhaps even more fundamentally, free speech is a significant personal, psychological and educational good that promotes self-expression and creativity, as well as the autonomy and development of the self necessary for representation in a linguistic and political sense and for the formation of identity. Each of these traditional justifications of free speech and its public communication firmly relates questions of self-governance to questions of democratic government, the search for truth and personal autonomy. Yet the modern discussion of free speech, from Milton’s Areopagitica to John Stuart Mill’s On Liberty, has also drawn attention to limiting conditions, emphasizing that freedom is not an independent value but in liberal society exists in a tight network of rights and constraints that limit it in various ways (Mill, 2002). As Momigliano (2003) comments:

The modern notion of freedom of speech is assumed to include the right of speech in the governing bodies and the right to petition them, the right to relate and publish debates of these bodies, freedom of public meeting, freedom of correspondence, of teaching, of worship, of publishing newspapers and books. Correspondingly, abuse of freedom of speech includes libel, slander, obscenity, blasphemy, sedition.

Openness has emerged as a global logic based on free and open source software, constituting a generalized response to knowledge capitalism and the attempt of the new mega-information utilities such as Google and Microsoft to control knowledge assets through the process of large-scale digitization of information often in the public domain, the deployment of digital rights management regimes (May, 2008) and strong government lobbying to enforce intellectual property law in the international context.

Two long-term trends are worth mentioning in this context. First, the Internet and open technologies, defined as open source, open APIs, and open data formats, are developing from the Web as linked computers, to the Web as linked pages, and on to the Web as linked things (the so-called semantic web).11) In this respect ‘open cloud computing’ is a recent development that signals the next stage of the Internet.

The key characteristics of the cloud are the ability to scale and provision computing power dynamically in a cost efficient way and the ability of the consumer (end user, organization or IT staff) to make the most of that power without having to manage the underlying complexity of the technology. The cloud architecture itself can be private (hosted within an organization’s firewall) or public (hosted on the Internet). These characteristics lead to a set of core value propositions [including Scalability on Demand, Streamlining the Data Center, Improving Business Processes, and Minimizing Startup Costs].12)
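The movement from ‘linked pages’ to ‘linked things’ mentioned above can be made concrete with a toy example. This is a hypothetical sketch, not any particular semantic-web API: data is held as subject-predicate-object triples, and anything with an identifier can be linked to anything else and traversed by pattern matching.

```python
# Hypothetical identifiers, for illustration only.
ECO = "http://example.org/person/eco"
WORK = "http://example.org/work/opera-aperta"

# Knowledge as subject-predicate-object triples.
triples = {
    (WORK, "author", ECO),
    (WORK, "title", "The Open Work"),
    (ECO, "name", "Umberto Eco"),
}

def match(s=None, p=None, o=None):
    """Return the triples matching a pattern; None acts as a wildcard."""
    return [(s2, p2, o2) for (s2, p2, o2) in triples
            if s in (None, s2) and p in (None, p2) and o in (None, o2)]

# Follow a link: from the work to its author, then to the author's name.
(_, _, author) = match(s=WORK, p="author")[0]
print(match(s=author, p="name"))
# → [('http://example.org/person/eco', 'name', 'Umberto Eco')]
```

The point of the sketch is that no single document ‘contains’ the answer; meaning emerges from following links across independently published statements, which is the openness the semantic web generalizes.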

Second, the Internet is a dynamic, changing, open ecosystem that progressively develops towards greater computing power, interactivity, inclusiveness, mobility, scale, and peer governance. In this regard, as the overall system develops it begins to approximate the complexity of the architectures of natural ecosystems. The more it develops, one might be led to hypothesize, the greater the likelihood of it not merely emulating Earth as a global ecosystem but becoming an integrated organic whole. Open cultures become the necessary condition for the system as a whole, for the design of open, progressive technological improvements and for their political, epistemic and ontological foundations.

Intellectual Property and the Global Logic of Openness

The rediscovery of openness in the information society, as Chris May (2006) notes, marks the end of a period when intellectual property seemed to be the dominant paradigm for understanding how knowledge and information might fit into the contemporary information society. He usefully charts the ways in which the emerging realm of openness is challenging the global regime of intellectual property and the extension of intellectual property into areas previously unavailable for commodification, including claims over aspects of the ‘public domain’ and ‘knowledge commons’. The state, as the guarantor of intellectual property, finds itself writing, articulating and enforcing intellectual property laws that attempt to mediate the interests of capital and of the different publics that structure the new media ecologies. In this context openness increasingly stands against forms of individualized knowledge property in the global digital economy (May, 2008). Indeed, the strong argument is that openness challenges the traditional notion of property and its application to the world of ideas. May suggests that openness can act as a countervailing force to balance the expansion of property rights under informational capitalism in an ongoing dialectical relationship. He writes:

Openness is the contemporary manifestation of an historical tendency within the political economy of intellectual property for resistance to emerge when the privileges and rights claimed by owners inflict onerous and unacceptable costs (and duties) on non-owners.

The shape of culture as a digital artefact, the formation of a deep ecology of human communication, and the emergence of a new social mode of (peer-to-peer) production depend on the outcome of this ongoing struggle for openness and the assertion of its logics of global dispersal, distribution, and decentralization. This struggle is many-sided and takes many different forms: not only against multinational knowledge capitalism and its expansion of claims to intellectual property into new public and cultural domains, but also against the panoptical surveillance power of the State and the corporation, which threatens to create all-encompassing citizen and customer databases resting on information-sharing, search algorithms and the compilation of consumer characteristics and behaviours.

Viral Modernity?

A viral modernity challenges and disrupts the openness of a free distribution model as well as distributed knowledge, media and learning systems. The celebration of hacker culture in the 1980s was based on the heroization of the disruption of computer security, and the main activists and enthusiasts, such as Steve Jobs, Steve Wozniak, and Richard Stallman, focused on cracking software, a practice that fed into the development of the free software movement. As Tony Sampson (2004) indicates, the virus flourishes because of the computer's capacity for information sharing and because the computer is unable to distinguish between a virus and a program. The alterability of information allows the virus to modify and change information, providing the conditions for self-replicability. In these circumstances

viral technologies can hold info-space hostage to the uncertain undercurrents of information itself. As such, despite mercantile efforts to capture the spirit of openness, the info-space finds itself frequently in a state far-from-equilibrium. It is open to often-unmanageable viral fluctuations, which produce levels of spontaneity, uncertainty and emergent order. So while corporations look to capture the perpetual, flexible and friction-free income streams from centralised information flows, viral code acts as an anarchic, acentred Deleuzian rhizome. It thrives on the openness of info-space, producing a paradoxical counterpoint to a corporatised information society and its attempt to steer the info-machine.

This situation leads Fred Cohen to advocate the benevolent virus and friendly contagion as a foundation of the viral ecosystem, as against the corporate response of securitizing and privatizing all open systems through sophisticated encryption.

Digital Selves, Open Selves

The numerical representation of identity involved as an aspect of new digital media, in forms of reading and writing the self through these media, has a sinister downside in the application of new information technologies to security and identity issues, with the linking of government and corporate databases. Biometrics is responsible for the shift from identity politics to I.D. policies considered in relation to the question of security, verification, and authentication. The Identity Cards Bill introduced in the British Parliament in the 2004-5 session provided for the Secretary of State to establish and maintain a national register to record ‘registrable facts’ about individuals (over 16 years) in the UK in the public interest, which is defined in terms of national security, prevention or detection of crime, enforcement of immigration controls, prevention of unauthorized employment, and securing the efficient and effective provision of public services. ‘Registrable facts’ pertain to ‘identity’ (name, previous names, date of birth—and death, gender, physical identifying characteristics but not ethnicity), residence and history of residence, ‘numbers allocated to him for identification purposes and about the documents to which they relate’ (passports, driver’s license, work permits, etc.), information from the register provided to any persons, and information recorded by individual request. I.D. cards will store 49 different types of information.13) Under the Bill each individual is required to allow fingerprints, other biometric information, signature, and photograph to be taken, with penalties for not complying. This information is recorded on a renewable I.D. card for which the individual is responsible. Information on individuals may be provided for purposes of verification on consent. Public services may be conditional on identity checks, although it will be unlawful to require an individual to produce an I.D. card except for specified purposes, e.g., use by public authorities and uses connected to crime prevention and detection, including anti-terrorism. In certain cases information may be used without the individual’s consent. A National Identity Scheme Commissioner will be responsible for running the scheme and will make annual reports. Various offences are stated in relation to giving false information, unauthorized disclosure of information, tampering with the register, false use, etc.

The House of Lords Select Committee Report14), published on 17 March 2005 with a brief to consider the constitutional implications of the Identity Cards Bill, concluded that ‘it adjusts the fundamental relationship between the individual and the State’. It is worth quoting the report on the significance of what the Bill proposes:

Our own concerns are not founded on the [EU] Convention [of Human Rights], but rather on the fact that the Bill seeks to create an extensive scheme for enabling more information about the lives and characteristics of the entire adult population to be recorded in a single database than has ever been considered necessary or attempted previously in the United Kingdom, or indeed in other western countries. Such a scheme may have the benefits that are claimed for it, but the existence of this extensive new database in the hands of the State makes abuse of privacy possible.

The Report’s primary concern was to ensure an adequate legal and constitutional infrastructure for the maintenance of a National Identity Register, with appropriate separation and limitation of powers. In particular, while recognizing the Bill as enabling legislation, the report expressed concern about the concentration of power and responsibility for the national register in the hands of the Secretary of State, calling for an independent registrar with a duty to report directly to Parliament.

The Identity Cards Bill was passed by MPs by a small majority in late June 2005, after the failure of the first bill, and eventually became law as the Identity Cards Act 2006. While it is aimed at preventing illegal immigration and working, forms part of anti-terrorist measures and is intended to prevent identity and benefit fraud, there are critical issues around altering the relationship between the individual and the state, including the loss of privacy, the potential for harassment of ethnic minorities and ‘function-creep’, not to mention fears of the surveillance society. In the U.S. the Defense, Homeland Security, Interior and Veterans Affairs departments and NASA are all planning to implement smart-card programs that comply with Federal Information Processing Standard 201, which the Commerce Department made final recently; the first phase includes setting up identity-proofing, registration and issuance processes, to have been developed by October 2005. The Real I.D. Act was introduced in 2005 to protect against terrorist entry and improve security for drivers’ licenses and personal identification cards.15)

These concerns are not at all removed from the politics of space and the new science of networks, or, indeed, from education, as I.D. cards are now mandatory in many U.S. schools that have set up their own security systems. Pitted against the postmodern view, which considers identity to be dynamic and multiple (a discursive construction reflecting an ongoing and open-ended process of forming multiple identifications in the face of globalization and media cultures), is the mathematicization of identity for state, educational and business purposes: the nexus where biometrics meets smart-card technology, and the ultimate basis for applications in telecommunications (GSM mobile phones, DirecTV), financial services (electronic purses, bank cards, online payment systems), transportation, travel and healthcare (insurance cards), computer/Internet user authentication and non-repudiation, retailer loyalty programs, physical access, resort cards, mass transit, electronic toll, product tracking, and also national I.D., drivers’ licenses, and passports.

The other side of the state and corporate digital reproduction of identity is a tendency that emphasizes the relation between openness and creativity as part of a networked group.  The ‘open self’ is self-organizing and is formed at the interstices of a series of memberships of online communities that shape a spontaneous self-concept and self-image. Openness to experience is one of the five major traits that have shaped personality theory since its early development by L.L. Thurstone in the 1930s, and it is strongly correlated with both creativity and divergent thinking (McCrae, 1987). Sometimes referred to as the ‘big five’ personality traits or ‘the five-factor model’, trait theory emerged as a descriptive, data-driven model of personality based on openness, conscientiousness, extraversion, agreeableness, and neuroticism. Openness is associated with creativity and the appreciation of art, emotionality, curiosity, self-expression and originality.  A meta-analysis reviewing research that examines the relationships between each of the five-factor-model personality dimensions and each of the 10 personality disorder diagnostic categories of the Diagnostic and Statistical Manual of Mental Disorders (4th ed., DSM-IV) reveals strong positive associations with neuroticism and negative associations with the other factors (Saulsman and Page, 2004). One of the limitations of personality theory is its focus on the individual, and in the age of networks this centeredness might seem somewhat misplaced. There are close links between open content, open science and open collaboration that make collaborative creativity sustainable. Openness to experience is probably the single most significant variable in explaining creativity, and there is some evidence for a relationship between brain chemistry and creative cognition as measured by divergent thinking (Jung et al., 2009). Openness can also be defined in terms of the number, frequency, and quality of links within a network.
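The network-based definition of openness just mentioned can be illustrated with a toy metric. The scoring formula, weights and field names below are purely hypothetical, a sketch of the idea rather than any established measure:

```python
# Toy illustration: openness defined by the number, frequency, and
# quality of a node's links in a network. The formula is invented
# for illustration only.
from dataclasses import dataclass

@dataclass
class Link:
    frequency: float  # interactions per unit time
    quality: float    # subjective strength of the tie, 0.0-1.0

def openness_score(links):
    """Combine link count, frequency, and quality into one score."""
    if not links:
        return 0.0
    total = sum(l.frequency * l.quality for l in links)
    # More links weigh the score upward, with diminishing returns
    return total * len(links) ** 0.5

# A node with three moderately active, fairly high-quality links
node_links = [Link(2.0, 0.9), Link(1.0, 0.5), Link(3.0, 0.8)]
print(round(openness_score(node_links), 2))
```

On this toy measure, a node gains openness both by adding links and by sustaining frequent, high-quality ones, which is one way of capturing the intuition that the ‘open self’ is constituted relationally rather than as an isolated individual.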
Indeed, the mutual reinforcement of openness and creativity gels with Daniel Pink’s (2005) contention that right-brainers will rule the future. According to Pink, we are in the transition from an ‘Information Age’ that valued knowledge workers to a ‘Conceptual Age’ that values creativity and right-brain-directed aptitudes such as design, story, symphony, empathy, play, and meaning.

Open Learning Systems

If the e-book failed, at least up until the introduction of the new e-book readers such as Amazon’s Kindle DX (2009) and Sony’s Reader, it was because e-books in the main were simple digitized versions of printed books. The new generation of e-book readers sought to overcome these problems and to focus on the advantages of hypertext, mobility and mobile data connection, adjustable font size, highlighting and annotation, text-to-speech, and the readability of electronic ink. Amazon’s Kindle DX, released June 10, features a 9.7-inch display, improved pixel resolution, built-in stereo speakers, a 4 GB storage capacity holding approximately 3,500 non-illustrated e-books, extended battery life and support for PDF files.16) Amazon announced partnerships with three major textbook publishers representing 60% of the market, and will test the Kindle DX with five universities this year. Kindle titles now represent 35% of book sales within Amazon. The company now offers 275,000 books in Kindle format and received a huge sales demand when it launched Kindle 2 earlier this year. (See the live launch.) Amazon’s Kindle DX is one of a range of e-readers available, including iRex’s iLiad, Sony’s Librie and Sony Reader, mobile Java devices such as Wattpad, Bookeen’s Cybook Gen3, Polymer Vision’s Readius foldable eBook, the COOL-ER by Coolreader, eSlick by Foxit Software, the Ganaxa GeR2, and Jinke’s Hanlin V3 eReader.17) Plastic Logic, a spin-off company from Cambridge University’s Cavendish Laboratory, has developed a flexible, robust, A4-size plastic electronic display the thickness of a credit card that is the core element of a soon-to-be-released e-book reader.

The e-book reader has come a long way since Michael Hart launched Project Gutenberg in 1971 and the first digital books were offered in 1993. The e-book has arrived, yet it still suffers disadvantages: it requires an electronic device and electric power; it is more fragile than the paperback and more prone to damage, loss and theft; there is arguably a loss of book aesthetics; the full range of printable material is not available; and, due to digital rights management and protection, e-books are not easily shared.

One of the fundamental issues concerns digital rights and the various technical attempts to prevent users from sharing or transferring ownership. E-book purchase agreements often prevent copying, restrict usage and printing, and limit the right of distribution, thus privatizing information or knowledge.
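Restrictions of this kind are typically enforced as per-title permission flags that the reader software consults before allowing an operation. The following sketch uses invented flag names and an invented licence, not any real DRM scheme:

```python
# Hypothetical illustration of how a DRM scheme can restrict what a
# purchaser may do with an e-book: each title carries permission
# flags, and the reading software refuses any operation the licence
# does not grant. Flag names and the licence are invented.
from enum import Flag, auto

class Rights(Flag):
    READ = auto()
    COPY = auto()
    PRINT = auto()
    LEND = auto()

def attempt(action: Rights, licence: Rights) -> str:
    """Allow the action only if the licence grants that right."""
    return "allowed" if action in licence else "blocked by licence"

# A typically restrictive licence: reading only
licence = Rights.READ
print(attempt(Rights.READ, licence))   # reading is permitted
print(attempt(Rights.COPY, licence))   # copying is refused
```

The point of the sketch is that ownership in the print sense (where lending or reselling a book needs no one’s permission) is replaced by a licence whose default is refusal, which is precisely the privatization of information the paragraph above describes.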

The first expanded books began with The Voyager Company in the early 1990s. Founded in 1985, Voyager developed interactive laserdiscs, pioneering home video collections of classic films. In the early 1990s Voyager sponsored a conference on digital books that attracted multimedia and hypertext experts, who helped to shape the first expanded books by adding a search method, the capacity to change font size, other navigation features (drop-down menus), and margins for annotations and marginalia. The first three expanded books were released in 1992: The Hitchhiker’s Guide to the Galaxy; The Complete Annotated Alice; and Jurassic Park. In the same year Voyager came out with the Expanded Books Toolkit, which allowed authors to create their own expanded books.18)

Other experiments have taken place since Voyager was sold. Perhaps the most long-lived is Sophie, a project of the Institute for the Future of the Book.

In 1996 a group of Voyager employees formed Night Kitchen with the intent of creating an authoring/reading environment that would extend the Expanded Books Toolkit concept to include rich media. The result, TK3, never officially came to market, but teachers in high schools and colleges used it in their classrooms, and with their students created some remarkable projects. The Mellon Foundation approached some of the TK3 team and asked them to build a new multimedia authoring program that would be open source and would extend TK3 by enabling time-based events (e.g. a timed, narrated slide show, or links embedded at specific points in video clips). That became Sophie.19)

Bob Stein, the co-founder of Voyager, is the founder and a director of the Institute for the Future of the Book, which has carried the experiment of the expanded book through with Sophie. The Institute’s mission is stated as: ‘The printed page is giving way to the networked screen. The Institute for the Future of the Book seeks to chronicle this shift, and impact its development in a positive direction.’ It goes on to make the following claims:

The Book

For the past five hundred years, humans have used print — the book and its various page-based cousins — to move ideas across time and space. Radio, cinema and television emerged in the last century and now, with the advent of computers, we are combining media to forge new forms of expression. For now, we use the word “book” broadly, even metaphorically, to talk about what has come before — and what might come next.

The Work and the Network

One major consequence of the shift to digital is the addition of graphical, audio, and video elements to the written word. More profound, however, is the book's reinvention in a networked environment. Unlike the printed book, the networked book is not bound by time or space. It is an evolving entity within an ecology of readers, authors and texts. Unlike the printed book, the networked book is never finished: it is always a work in progress.

As such, the Institute is deeply concerned with the surrounding forces that will shape the network environment and the conditions of culture: network neutrality, copyright and privacy. We believe that a free, neutral network, a progressive intellectual property system, and robust safeguards for privacy are essential conditions for an enlightened digital age.


For discourse to thrive in the digital age, tools are needed that allow ordinary, non-technical people to assemble complex, elegant and durable electronic documents without having to master overly complicated applications or seek the help of programmers. The Institute is dedicated to building such tools. We also conduct experiments with existing tools and technologies, exploring their potential and testing their limits.

Humanism & Technology

Although we are excited about the potential of digital technologies and the internet to amplify human potential, we believe it is crucial to consider their social and political consequences, both today and in the long term.

New Practices

Academic institutes arose in the age of print, which informed the structure and rhythm of their work. The Institute for the Future of the Book was born in the digital era, and so we seek to conduct our work in ways appropriate to the emerging modes of communication and rhythms of the networked world. Freed from the traditional print publishing cycles and hierarchies of authority, the Institute values theory and practice equally, conducting its activities as much as possible in the open and in real time.


Bauwens, M. (2005) ‘The Political Economy of Peer Production’, C-Theory.

Blair, David (2006) Wittgenstein, Language and Information. Dordrecht, Springer.

Bondanella, Peter E. Umberto Eco and the Open Text: Semiotics, Fiction, Popular Culture. Cambridge: Cambridge University Press.

Eco, Umberto (1989) The Open Work. Translated by Anna Cancogni. Introduction by David Robey. Harvard University Press.

Jung, Rex E., Charles Gasparovic, Robert S. Chavez, Ranee A. Flores, Shirley M. Smith, Arvind Caprihan and Ronald A. Yeo (2009) ‘Biochemical Support for the “Threshold” Theory of Creativity: A Magnetic Resonance Spectroscopy Study’, The Journal of Neuroscience 29 (16): 5319-5325; doi:10.1523/JNEUROSCI.0588-09.2009.

Lessig, Lawrence (2007) ‘Foreword’, in Kembrew McLeod, Freedom of Expression: Resistance and Repression in the Age of Intellectual Property, Minneapolis: University of Minnesota Press.

Lessig, Lawrence (2006) Code Version 2.0

Lessig, Lawrence (2004) Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity. The Penguin Press.

McCrae, R. R. (1987) Creativity, divergent thinking, and openness to experience. Journal of Personality and Social Psychology 52: 1258–1265.

Mill, D. van (2002) ‘Freedom of Speech’, Stanford Encyclopedia of Philosophy.

Momigliano, A. (2003) ‘Freedom of Speech in Antiquity’, in Dictionary of the History of Ideas.

May, C. (2006) ‘Openness, the knowledge commons and the critique of intellectual property’, Re-public: re-imagining democracy.

May, C. (2003) ‘Digital rights management and the breakdown of social norms’, First Monday 8 (11), available at

May, C. (2004) ‘Side-stepping TRIPs: The Strategic Deployment of Free and Open Source Software in Developing Countries’, International Political Economy Working Group Working Paper No. 9 (May 2004), available at

May, C. (2008) ‘Globalizing the Logic of Openness: open source software and the global governance of intellectual property’. In Andrew Chadwick & Philip N. Howard (eds.), Handbook of Internet Politics, London: Routledge.

Sampson, T. (2004) ‘A Virus in Info-Space: the open network and its enemies’, M/C: A Journal of Media and Culture 7 (3).

Moody, Glyn (2001) Rebel Code: LINUX and the open source revolution, London: Allen Lane.

Perelman, Michael (2002) Steal this Idea: Intellectual Property Rights and the Corporate Confiscation of Creativity, New York: Palgrave.

Pichler, Alois (2002) ‘Encoding Wittgenstein: Some remarks on Wittgenstein's Nachlass, the Bergen Electronic Edition, and future electronic publishing and networking’. In: TRANS. Internet-Zeitschrift für Kulturwissenschaften, No. 10/2001ff.

Pink, Daniel (2005) A Whole New Mind: Why Right-brainers Will Rule the Future.

Pichler, A. & Hrachovec, H. (2008) Wittgenstein and the Philosophy of Information.

Saulsman, L. M. & Page, A. C. (2004) The five-factor model and personality disorder empirical literature: A meta-analytic review. Clinical Psychology Review, 23, 1055-1085.

Sell, Susan (2003) Private Power, Public Law. The Globalization of Intellectual Property Rights, Cambridge: Cambridge University Press.

Strangelove, Michael (2005) The Empire of Mind: Digital Piracy and the Anti-Capitalist Movement, Toronto: University of Toronto Press.

Sum, Ngai-Ling (2003) ‘Informational Capitalism and U.S. Economic Hegemony: Resistance and Adaptations in East Asia’ Critical Asian Studies 35, 2: 373-398.

United Nations Conference on Trade and Development [UNCTAD] (2003) E-Commerce and Development Report 2003, New York/Geneva: UNCTAD.

von Hippel, Eric (2005) Democratizing Innovation, Cambridge: The MIT Press.

Wade, Robert Hunter (2002) ‘Bridging the Digital Divide: New Route to Development or New Form of Dependency’, Global Governance 8: 443-466.

Weber, Steven (2004) The Success of Open Source, Cambridge, Mass.: Harvard University Press.

Weitz, Morris (1956) ‘The role of theory in aesthetics’, The Journal of Aesthetics and Art Criticism 15 (1): 27-35.

Winston, Brian (1998) Media Technology and Society, A History: From the Telegraph to the Internet, London: Routledge.

1) See in particular the work of Kristof Nyiri who in a variety of papers including ‘Wittgenstein as a Philosopher of Secondary Orality’ (1996), ‘The Humanities in the Age of Post-Literacy’ (1996) and ‘The Picture Theory of Reason’ (2000) examined the problem of machine consciousness, post-literacy, and the new unity of science. See his website with full text papers at
2) The book has been googlized with Introduction and parts of the first chapter available here.
3) See the GNU site, a 2006 lecture by Stallman entitled ‘The Free Software Movement and the Future of Freedom’ and Aaron Renn’s (1998) “Free”, “Open Source”, and Philosophies of Software Ownership.
4) See his bestseller Free Culture .
5) See the website here
9) See  Definition
13) For the full list see the BBC website.
14) See Report.
18) See the entry on expanded books in Wikipedia.
19) For demonstrations of Sophie see here. For a useful history of multimedia see ‘When Multimedia was Black and White’.

EEPAT is published in association with the Philosophy of Education Society of Australasia and the journal Educational Philosophy and Theory.