Author Archives: Fernanda Ribeiro Rosa

Final paper draft

One Laptop Per Child: a manufactured mismatch between digital media and Education

 

Context

The dissemination of OLPC (One Laptop per Child) devices across developing countries in Africa and Latin America has been underway since the mid-2000s, when Nicholas Negroponte, based at MIT and of the same generation as Alan Kay and Adele Goldberg, launched the One Laptop per Child (OLPC) project in 2005. Despite the considerable governmental investment that has been allocated, the outcomes of adopting this new artefact in the context of teaching and learning have not been convincing. Evidence from Brazil shows policy challenges as well as hardware design limitations that prevent improvements in the teaching and learning process.

 

Objective

The purpose of this paper is to examine the design concept of the mini-laptops known as OLPC (One Laptop per Child), which are the most popular digital media used in primary education, in light of the "Dynabook metamedium" concept, first conceived by Alan Kay, Adele Goldberg, and the Xerox PARC team in the 1970s.

Questions to be answered

While the vision of mobile devices and mobile learning principles can be identified in documents produced more than 40 years ago by Kay and his team, to what extent can the OLPC be considered a development of the "metamedium" concept? What are the consequences of the OLPC's current design for the experience of teachers and students at schools? What were the constraints that shaped the OLPC into what it is today? How do commonsense ideas such as the "digital native" and technologically deterministic approaches help to explain the current scenario of low adoption of digital technology at schools?

Source of information:

 

A quick analysis of Google’s Ngram

For this week, I selected one of Google's services that I have used frequently, out of curiosity and also as a way to get general information: the Ngram Viewer, which allows for searching words in books published over the years, from 1500 to 2008.

Ngram is one of Google's most impactful endeavors. It is the result of the digitization of millions of books and the creation of a search tool that scans the material as a whole. According to an article published in Science by researchers involved in this project (Michel et al., Quantitative Analysis of Culture Using Millions of Digitized Books, Science, 2011), the "corpus" was formed by publications from over 40 university libraries, with more than 15 million books digitized, which corresponds to 12% of all the books ever published. The researchers then selected 5 million publications (4% of all the books ever published) based on the quality of both the metadata providing date and place, which is made available by publishers and libraries, and the optical character recognition (OCR) results, which show how precisely the digitization system recognizes the printed letters and symbols.

[Figure 1 from Michel et al., 2011]

Source: Michel et al., Quantitative Analysis of Culture Using Millions of Digitized Books, Science, 2011

To properly interpret the results that the tool shows the user, it is necessary to understand how the platform works. A "gram" is a group of characters, including letters, symbols, or numbers, without a space. A gram can be a word, a typo, or a numerical representation (bag, bagg, 9.593.040). For instance, "bag" is a 1-gram, while "small bag" is a 2-gram. An n-gram is a gram composed of "n" such groups of characters. According to the Ngram information page, word search results are circumscribed to the type of gram one is searching for. If the user types a 1-gram, the search will be conducted only among 1-grams. The same occurs with a 2-gram, and so on.

In the example given by the Ngram programmers, they search for two 2-grams and one 1-gram at the same time: "nursery school", "child care", and "kindergarten", respectively. The answer provided by the platform will be: "… of all the bigrams contained in our sample of books written in English and published in the United States, what percentage of them are "nursery school" or "child care"? Of all the unigrams, what percentage of them are "kindergarten"?" (Please see the first chart at https://books.google.com/ngrams/info).

Thus, the results depend on the classification of the gram one is searching for. In the case above, the dataset in which "kindergarten" is searched is different from the dataset in which "nursery school" and "child care" are searched.
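To make this normalization concrete, here is a minimal toy sketch in Python, assuming a tiny made-up corpus rather than Google's actual pipeline: each query is counted only against grams of its own length, so a 1-gram and a 2-gram end up with different denominators.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return all contiguous n-token sequences as space-joined grams."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

corpus = "the child care center is near the nursery school and the kindergarten".split()

unigrams = Counter(ngrams(corpus, 1))
bigrams = Counter(ngrams(corpus, 2))

# Each query is scored only against grams of its own length.
print(unigrams["kindergarten"] / sum(unigrams.values()))   # share of all 1-grams
print(bigrams["nursery school"] / sum(bigrams.values()))   # share of all 2-grams
print(bigrams["child care"] / sum(bigrams.values()))       # share of all 2-grams
```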

On the other hand, beyond the fact that the platform makes it easy to search for classes of words such as adjectives, verbs, nouns, pronouns, and adverbs, allowing for linguistic comparisons, its capacity to scan books far exceeds human capacity. As Michel et al. explain, "If you tried to read only English-language entries from the year 2000 alone, at the reasonable pace of 200 words/min, without interruptions for food or sleep, it would take 80 years." (Michel et al., 2011)

What is interesting about Ngram is that it builds on centuries of social knowledge stored in university libraries, transforming millions of physical books into a single digital file. Through a combinatorial process that joins physical materials and software, such as OCR, a search engine, and databases, Ngram makes possible the creation of tools that are more than a remediation (Manovich, 2013) of old books and libraries, given that with a searchable file many different comparisons and uses become possible. The fact that it is owned by Google and was built on a project by Harvard scholars (Michel et al., 2011) shows that societal conditions and previous knowledge, while not determinant, are fundamental in shaping who will have the chance to reproduce power.

Regarding the limitations of Ngram, an article in Wired (Zhang, Sarah, 2015, The pitfalls of using Google Ngram to study language) shows that the more one unveils how it functions, the more precaution is advisable. One cannot disregard the fact that optical character recognition (OCR) technologies are not perfect and can introduce errors into the results when some of the pixels generated while scanning a book are not accurate. Zhang (2015) explains that the typefaces of some publications can generate confusion between letters (e.g., s and f), which leads to mistakes. Metadata can also contain errors, implying that some information comes from a specific year and place when, in fact, it does not.

From the point of view of web architecture, Google's servers are the sole source of the content shown on the Ngram platform. Despite the fact that the physical books are in many different physical places, a user can read a given book on the Google Books platform and then do the search on Ngram, accessing the websites from a computing device no matter where they are. Because it is a proprietary platform, users so far cannot access the raw data or even a report that explains the number of books, 1-grams, and 2-grams searched per year or decade. The more transparency the platform offers, the more uses one can make of such a rich application. At the end of the day, the reality created by Ngram is based on no more than 4% of all books ever published, according to the researchers who pioneered it. We should keep this in mind.

Finally, the centralization of knowledge in one big player has consequences for users' privacy, which is compromised when their searches are identifiable and added to their profiles to improve targeted advertising. I do not know to what extent this is currently done, but there is no reason to believe that it is not the case.

The Internet: virtuality and materiality through its physical layer

The Internet is known as a network of networks, or an inter-network, as shown in the video "There and back again: A Packet's Tale. How Does the Internet Work?" from the World Science Festival. Because users interface with the Internet mainly through the application layer, the transport, network, and physical layers (Ron White, How Computers Work, 2007), among others, commonly go unnoticed, even though there are signs of them all around us.

In "Networks of New York: An Internet Infrastructure Field Guide", Ingrid Burrington joins art, design, architecture, and politics and, through many drawings, shows what the Internet looks like in a city. My main takeaway from her book is that the architecture of the Internet is embedded in the architecture of our cities. The Internet is as virtual as it is material. The streets, avenues, and buildings all bear signs of the materiality of the Internet.

[Image: manhole cover]

Source: http://seeingnetworks.in/guide/

Before thinking of the larger Internet, it is possible to think of Local Area Networks (LANs). A LAN is a group of computers, routers, hubs, and other equipment that communicate with each other through a common language, the Internet Protocol. The Georgetown network is a LAN. The Internet as we know it starts in the Georgetown building (and in others around the world as well) and depends on the design of TCP/IP (Transmission Control Protocol/Internet Protocol).

As White (2007) explains, TCP/IP breaks data into packets that are labeled with identification and addressing information. As the aforementioned video shows very well, the packets then travel over a massive infrastructure and a distributed network until they arrive at their destination, where they are reassembled so they can be read and interpreted by the recipient machine.
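As a rough illustration of that idea, the toy Python sketch below splits a message into labeled packets, shuffles them as if they had taken different routes, and reassembles them by sequence number. It is a conceptual sketch of packetization, not the actual TCP/IP implementation.

```python
import random

def packetize(message, size=8):
    """Break a message into packets labeled with a sequence number."""
    return [{"seq": i, "payload": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(packets):
    """Put packets back in order and join their payloads."""
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("Hello from a Local Area Network at Georgetown!")
random.shuffle(packets)          # packets may arrive out of order over different routes
print(reassemble(packets))       # the recipient machine restores the original message
```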

Vint Cerf, one of the creators of the TCP/IP protocol, calls attention to its design in the foreword of Great Principles of Computing (Peter J. Denning and Craig H. Martell, 2015). For him, this design is the reason TCP/IP has become the main communication protocol of the network. The fact that application designers do not have to understand how IP packets are transported, and that, at the same time, the protocol does not depend on the type of information in transit, has contributed to the wide diffusion of TCP/IP and to the stability of the network, even as it receives daily a multiplicity of new applications not initially foreseen.

For networks to connect with one another in order to transfer more than 100 billion e-mails or to allow more than 2 billion Google searches a day (Internet Live Stats, Nov 15), the physical layer is as key as the transport layer discussed above. To be connected, networks need to be identified with autonomous system numbers (ASNs); an autonomous system is thus a network administered by an Internet operator. Google and AT&T are examples of big network operators. When a user sends information through their Internet Service Provider (ISP), an e-mail, for example, that ISP is required to interconnect with other networks in order to deliver the information to the assigned recipient.

Although such operations are not visible to Internet users, interconnection is ordinary, essential for the Internet to work, and commonly administered privately by companies. It happens through commercial agreements, such as regular data traffic purchases and "peering", which are part of the economy of the Internet. In the example of transmitting an e-mail, the ISP has two possibilities for delivering the information among the various types of interconnection agreements. It can pay to connect to transit providers, which will allow it to establish communication with many networks at once (the larger Internet). Or the ISP can opt to do "peering" with other Internet operators, sharing traffic and infrastructure on a one-to-one basis, also referred to as bilateral connections. In this case, both operators share the task of delivering data through the cables and other resources owned by the peer. Peering is, thus, a collaborative relationship that decreases the cost of the connection due to the sharing of resources, although it can also involve payments between the parties.
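The sketch below is a deliberately simplified Python illustration of that choice, with made-up network names and prices: traffic goes over a settlement-free peering link when one exists, and otherwise falls back to a paid transit provider that reaches the larger Internet.

```python
# Hypothetical peers and prices, chosen only to illustrate the two options.
PEERS = {"AS2000-ContentProvider", "AS3000-RegionalISP"}   # settlement-free peering links
TRANSIT_PRICE_PER_GB = 0.05                                # made-up transit rate
PEERING_PRICE_PER_GB = 0.0                                 # resources shared with the peer

def delivery_route(destination_as, gigabytes):
    """Use a peering link when one exists; otherwise fall back to paid transit."""
    if destination_as in PEERS:
        return "peering", gigabytes * PEERING_PRICE_PER_GB
    return "transit", gigabytes * TRANSIT_PRICE_PER_GB

print(delivery_route("AS2000-ContentProvider", 100))  # ('peering', 0.0)
print(delivery_route("AS9999-RemoteNetwork", 100))    # ('transit', 5.0)
```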

These connections are only possible through physical cables and structures distributed among countries. The physical distance between operators can affect both the cost of a user's Internet connection, due to the greater or lesser resources necessary to make the linkage, and the quality of the connection, given that the more distant operators are from one another, the more subject to latency they tend to be. Because of this, an important interconnection facility gains relevance within the network: the Internet Exchange Point (IXP), previously known as a NAP (Network Access Point), as mentioned in the World Science Festival video. IXPs, as the video briefly shows, are physical spaces in which numerous autonomous systems can connect to each other, including Internet service providers, transit providers, content providers, content delivery networks, and others. IXPs optimize the network and amplify the possibility of peering among Internet operators. Once operators become part of an IXP, they are connected to all other operators already in the facility, from telcos to ISPs, from banks to universities. IXPs bring Internet operators closer together without the cost of having to connect to each one individually.

The Internet Exchange Map, a tool developed by TeleGeography, shows the distribution of the known IXPs around the globe here: http://www.internetexchangemap.com/

From a socio-technical point of view, there are many aspects of IXPs that have yet to be understood. There is an unequal distribution among countries that generates infrastructural dependencies among nations: Canada and Mexico are highly dependent on United States Internet infrastructure, as are other Latin American countries. There are also questions about the complexity of purposes that these facilities have nowadays and the different meanings that such a technological artefact has for industries, governments, and users, even if only a few of those actors are currently well engaged with this topic.

The more hybrid new media are, the more disruptive they tend to be

In 2013, UNESCO launched its "Policy Guidelines for Mobile Learning", in which it defined mobile learning as "the use of mobile technology, either alone or in combination with other information and communication technology (ICT), to enable learning anytime and anywhere." It continues: "Learning can unfold in a variety of ways: people can use mobile devices to access educational resources, connect with others, or create content, both inside and outside classrooms." (p. 6) (emphasis added).

Interestingly, this concept has crucial correspondences with Alan Kay's personal computer prototype, the Dynabook. More than 40 years ago, when smartphones and tablets did not even exist, the ideas emphasized above had already been conceptualized. First, the main affordances of the mobile technologies we have nowadays rest on the possibility of using them "anytime, anywhere as [users] may wish" (Kay, 1972, p. 3). Second, the computer would be "more than a tool" (idem); it would be something like an "active book" (p. 1). Third, imagining some of the possibilities for the new device, Kay predicted: "Although it can be used to communicate with others through the 'knowledge utilities' of the future such as a school "library" (…) we think that a large fraction of its use will involve reflexive [communication] of the owner with himself (…)" (p. 3). Finally, Kay and his team imagined that personal computers would involve "symmetric authoring and consuming" – the creation of content by students to which UNESCO and its followers aspire.

I already discussed the intellectual property constraints on expanding the metamedium potential of current devices and software in my previous post. I would like to spend some time now reflecting on the continuities and disruptions between what we now have available in terms of hardware and software and what was imagined decades ago.

Kay's device drawings from 1972 resemble calculators with bigger screens and a keyboard. Nowadays, some smartphone models keep these characteristics. The BlackBerry is one of them, sold for around 200 dollars.

[Image: Alan Kay's 1972 sketch of the Dynabook]

Source: http://techland.time.com/2013/04/02/an-interview-with-computing-pioneer-alan-kay/

 

[Image: BlackBerry smartphone with physical keyboard]

You can buy it here:

https://www.bhphotovideo.com/bnh/controller/home?O=&sku=1289656&gclid=CPKbqun-m9ACFU5MDQodT9EECA&is=REG&ap=y&m=Y&c3api=1876%2C92051677682%2C&A=details&Q=

Nowadays, this kind of smartphone is not the most common, though. The keyboard, which used to be hardware, has become software, accessed through a larger touch screen. Obviously, in the BlackBerry model there is software to interpret the physical keyboard, but it is "grey software" (Manovich, 2013, p. 21), not visible to users in the way it is in the most recent models with digital keyboards (e.g., iPhones and others).

The migration from hardware to software brings the possibility of unlimited keyboard keys, which, in my opinion, has not yet been well explored by designers. Some affordances of the digital keyboard are simply copied from the physical one, as when you press a key for a longer time and see more options. In addition, digital keyboards could have other layers of keys, and more configurable keys that let users decide which punctuation marks, letters, or symbols they should contain. For elderly people, they should also have a configurable response level depending on how strongly a key is pressed. Many of them, including my parents, feel more comfortable pressing harder, which generates a lot of undesirable responses from the software (e.g., repeated numbers on the screen). The possibility of calibrating the software response according to how intense the user's touch is would help them a lot.
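As a hypothetical sketch of that calibration idea (the event format, names, and thresholds below are my own illustrative assumptions, not any real keyboard API), the software could treat a long, heavy press as a single keystroke unless it clearly exceeds a per-user hold threshold:

```python
from dataclasses import dataclass

@dataclass
class KeyEvent:
    key: str
    duration_ms: int      # how long the key stayed pressed

def filter_repeats(events, hold_threshold_ms=600):
    """Emit extra repeats only when a press clearly exceeds the user's hold threshold."""
    output = []
    for event in events:
        repeats = 1 + max(0, (event.duration_ms - hold_threshold_ms) // hold_threshold_ms)
        output.append(event.key * repeats)
    return "".join(output)

# A heavier touch: "9" held for 500 ms, "5" tapped, "3" held for 1300 ms.
presses = [KeyEvent("9", 500), KeyEvent("5", 120), KeyEvent("3", 1300)]
print(filter_repeats(presses, hold_threshold_ms=200))   # '995333333' -- too sensitive
print(filter_repeats(presses, hold_threshold_ms=600))   # '9533' -- calibrated for a heavier touch
```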

I understand that the more hybrid (Manovich, 2013) media are, the more disruptive they tend to be. For the author, "in hybrid media the languages of previously distinct media come together. They exchange properties, create new structures, and interact on the deepest levels." (p. 169). If a specific metamedium intends to "remediate" (p. 59) older ones, such as an old map book turned into a digital map, it is easier to see the continuity between them. When, instead, the digital map acquires other features and affordances, allowing the user to avoid traffic or police checkpoints where officers are stopping young people after midnight to check their alcohol level, it becomes more disruptive. In my hometown, São Paulo, a very crowded city, many taxi drivers have become heavy users of apps such as Waze during rush hours. They do not need to turn on the radio to find out what other drivers are sharing about the traffic. They trust the app. Similarly, young people have become more dependent on apps that let them know in advance where checkpoints are on their way home, complementing the online communities they have created to alert and notify members about the same issue.

Thinking about continuity versus disruption in tasks involving photographs: while taking thousands of pictures with smartphones represents more of a continuity with previous non-digital tasks, the possibility of searching for photographs on Google by uploading a similar picture is very disruptive. The search for photos through #hashtags, as allowed by Instagram, is also very powerful, an affordance that was hardly imaginable a few years ago. However, I find the domestic storage of digital photos very similar to the old physical boxes brought out to the living room when family was visiting us, and I do not know the reason for that.

In this way, the more hybrid and disruptive photo apps, digital books and other kinds of software become, the more changes we see in how we experience the world.

Metamedium: a great idea yet to be fully implemented

Manovich's (2013) historical approach to explaining the development of digital media as a metamedium is inspiring. He shows masterfully that, even if it is possible to see a combinatorial process at work when comparing old media and computing devices, there is nothing inevitable or deterministic about this development. In fact, such devices were built and constrained not only by people but also by market dynamics. On the one hand, the new properties that emerged with the metamedium had to be "imagined, implemented, tested, and refined" (p. 97). On the other hand, industry interests and decisions also influence the kinds of devices that the broad population will be able to experience. As Manovich (2013) affirms, "the invention of new mediums for its own sake is not something which anybody is likely to pursue, or get funded" (p. 84). It does not go unnoticed that although researchers such as Alan Kay and Adele Goldberg, themselves programmers, imagined a metamedium that would allow computer users not only to consume existing media but also to produce new ones, the industry has not invested in these attributes as mainstream characteristics of its devices – neither in 1984, when the first Macintosh was launched, nor in 2010, when Apple's iPad impressed the market.

The concept of a metamedium announces that it not only simulates old media but also has unprecedented functions. One can write using a computer, as one used to do on paper, but the "view control" (Manovich, 2013) is totally different, since one can change the fonts, cut and paste, or even change the structure of the text presented, to name a few possibilities. It is true that, as the author perceptively shows, some affordances conceived decades ago are not fully developed yet, such as Doug Engelbart's spatial features for structuring the visualization of text. Even so, the capacity to organize text using computing devices is unprecedented.

Computing devices are also interactive. The possibilities they open up to support problem-solving situations go far beyond those of earlier calculators (Manovich, 2013). As a metamedium, computers bring the possibility of engaging the learner in a "two-way conversation", opening new possibilities for teaching and learning methods (Kay and Goldberg, 1977). History has shown fewer changes in education than these scholars imagined, though. Why?

Nicholas Negroponte, of the same generation as Kay and Goldberg and based at MIT, launched the One Laptop per Child (OLPC) project in 2005. Policymakers from developing countries received it with enthusiasm. Negroponte promised a device, with standard software included, at 100 dollars each, to change the teaching and learning process. The project was seen by many as the solution to schools' lateness in adopting digital media. Latin American countries, including Brazil, invested heavily in this project. I later conducted a study with a colleague at Columbia University to understand mobile learning in Brazil, and the results show that the OLPC failed in many distinct ways. Our focus was the public policy aspects of the project, but from the readings I can see that the device itself was completely different from what Kay and Goldberg once imagined and also from what Negroponte made people think it would be. The device was locked down (Zittrain, The Future of the Internet and How to Stop It, 2008), with limited affordances for students to create new media. The screen was small (although bigger than those of other classroom devices), and the processor and memory were also limited.

[Image: the OLPC XO, the Classmate, and the Eee PC side by side]

The OLPC is on the left.

Source: http://www.zdnet.com/pictures/photos-olpc-xo-classmate-and-the-eee-pc/

Despite the fact that the uses imagined for such devices could create a better teaching and learning environment – and they did indeed in some classrooms where they were organically adopted – their affordances would not generate a new kind of student, a metamedium student, I would say, able to create new media and new tools according to their needs and personal trajectories. And this is a huge gap in a project focused on developing countries.

Going further, as Manovich points out, from the point of view of media history, the most important characteristic of a metamedium is that it is "simultaneously a set of different media and a system for generating new media tools and new types of media" (p. 102). This refers to the capacity of a user not only to transform a text but also to create mash-ups, remixes, and machinima. The problem is that, as extensively studied by scholars such as Lawrence Lessig, Jack Balkin, and Aram Sinnreich, while the affordances are available, the limits imposed by intellectual property regulations, not only through laws but also through technological and digital rights management (DRM) tools, have massively restricted the metamedium's capacities. And because the industry also works to shape the narrative about these digital practices, derogatorily labeling people who engage in such activities "pirates", I believe that industries deliberately contribute to preventing the development of metamedium devices, metamedium students, and metamedium users as a whole.

Programming as a change factor in the teaching and learning environment

I graduated from a technical high school in informatics, where teachers explained that once we understood logic, we would be able to learn any specific programming language. Algorithm rulers, pencil, eraser, and sometimes a computer were the main tools of the Programming Language class. After this experience, I ended up teaching programming for two years before going to college. The language used in the school where I worked was Visual Basic, a Microsoft language that combined written code and a graphical interface. Now, I am happy to be learning Python through a very innovative teaching and learning design.

One of Codecademy's founders was a Columbia University undergraduate who dropped out to invest in a programming platform that would be easy and intuitive for everyone. This is worth mentioning because his endeavor is a result of his frustration with the university's methods of teaching and learning, and with the content itself. He has advocated for teaching programming to young people, and it is not difficult to find him at education conferences and seminars.

This was my first time using the Codecademy platform, although I had planned to do so before. I found it very pragmatic and effective: learn a language to communicate with a machine. Referring to Jeannette Wing's article, the platform does not intend to form computer scientists; instead, it focuses on creating programmers, who learn the language to give instructions to the machine. Learning is intended to come from experience: the more one uses it, the more one assimilates.

I completed 25 activities (2 lessons), through which I noticed that I more easily memorized commands that are also used in other languages: quotation marks to denote strings, the mechanism for defining a variable and storing data in it, math operators written much as they are in math, and so on.

On the other hand, my mistakes yielded interesting findings. As David Evans (2011) explains, programming means describing every step in a language that humans can understand and the machine can execute. The error messages I received made this very clear. One of them exposed my confusion between a variable and a string enclosed in quotation marks. The error message tried to explain that I was not telling the machine what I intended to say. It made clear that there is no margin for machine interpretation. The platform understood what I did, explained it to me, and told me what I should have done instead. The platform's algorithm is very didactic and learner-focused, while at the same time implicitly making clear that I need to learn exactly how to represent what I want the machine to do.
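A minimal Python illustration of the kind of confusion I ran into (not the exact Codecademy exercise): quotation marks create a string, while their absence names a variable that must already exist.

```python
greeting = "hello"    # the variable greeting stores the string "hello"

print(greeting)       # prints hello: Python looks up the variable's value
print("greeting")     # prints greeting: the quotation marks make a literal string
# print(saludo)       # NameError: there is no variable called saludo to look up
```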

Interestingly, I imagine that the platform, which seems to have more than 25 million users so far, will become able to predict users' mistakes more precisely and give better instructions in the exercises.

The level of abstraction in Python, as explained by Evans (2011), is not so hard to get used to. For instance, upper() and lower() are abstract commands, but they are intuitive as well. So far, Python seems to be a "simple, unambiguous, regular, and economical" language. This is perhaps why it has become one of the most popular languages in social initiatives that try to teach young people and women to program.
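For example, these two methods hide all the character-by-character work behind intuitive names, which is exactly the kind of abstraction that makes the language approachable:

```python
city = "São Paulo"
print(city.upper())   # SÃO PAULO -- the method handles each character for us
print(city.lower())   # são paulo
```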

Although I advocate teaching programming at schools, I do not agree with Jeannette Wing that we need computational thinking at schools as a way to teach abstraction and problem solving. This claim seems to come from someone who does not know the school curriculum, which already has different disciplines focused on such skills. Math, for example! The problem is how schools and teachers teach it. The problem is in the design of the classes. This is why the Codecademy platform is so ingenious: it changes the design of the teaching-learning process using the tools available today. I share Jeannette Wing's and David Evans's aspiration of having liberal arts students and young people learn computational skills at school, but my reasons are grounded in the need to change the teaching-learning design and to increase the number of skilled people from different backgrounds able to command machines for the good of their communities and day-to-day lives.

Communication theory and its social consequences

I found two different perspectives on what "information meaning" means in this week's readings. In Floridi (2010), "'[m]eaningful' [information] means that the data must comply with the meanings (semantics) of the chosen system, code, or language in question" (p. 23). In this sense, inside a system, machines interpret meanings previously programmed and set up. Such meanings only make sense in the context of communication among nonhuman actors, data and machine, and this is what enacts the system's operability.

Another use of the word "meaning" appears in the explanation of Shannon's theory of information given by Denning and Bell (2012). Here, an emitter sends a message, which is encoded according to a codebook and converted into signals. These signals travel until they reach the decoder on the receiver's side, where they are decoded based on the same codebook, recovering their original format as a readable message. In this model, the medium used to transport the message does not understand the "meaning" of the information being carried. Meaning is given by connecting the signals to their referents; because these referents are collectively shared, it is people who assign meaning to them.
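A toy Python sketch of this encoder/decoder model, with a made-up codebook, helps show where meaning lives: both sides share the codebook, while the channel in between carries only meaningless signals.

```python
CODEBOOK = {"A": "00", "B": "01", "C": "10", "D": "11"}          # shared by sender and receiver
REVERSE = {signal: symbol for symbol, signal in CODEBOOK.items()}

def encode(message):
    """Sender side: convert symbols into channel signals."""
    return "".join(CODEBOOK[symbol] for symbol in message)

def decode(signals):
    """Receiver side: recover the symbols using the same codebook."""
    return "".join(REVERSE[signals[i:i + 2]] for i in range(0, len(signals), 2))

signals = encode("BAD")      # '010011' means nothing to the channel itself
print(decode(signals))       # 'BAD': meaning is restored only where the codebook is shared
```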

From the point of view of the Internet-based communication system we have today, this is a powerful principle: the machines do not need to know the information that is transported. Vint Cerf, one of the TCP/IP protocol designers, explains in the foreword to Great Principles of Computing (Denning and Martell, 2015) the wide diffusion of TCP/IP, the main Internet communication protocol, based on that principle. For him, the fact that application designers on the web do not have to understand how IP packets are transported, and that, at the same time, the protocol does not depend on the type of information in transit, has contributed to what the Internet is today and to its impressive stability, despite the fact that the network receives every day a plurality of new applications not initially foreseen.

Despite all these characteristics of the theory of communication and its consequences for the way we currently communicate online, the fact that data have become valuable to the stakeholders who run the Internet requires us to reflect on a new set of questions. Nowadays, Vint Cerf works for Google, which has developed software to read our Gmail messages and offer products and services through its ads. Are these machines interpreting meaning or not? Internet service providers, using the discourse of network optimization, demand the right to prioritize content when transporting data on the Internet, which has generated the need for new regulations, such as net neutrality provisions. Are they demanding the right to distinguish packets' meaning or not?

The political aspects and the social consequences of this discussion are truly interesting.

The importance of diversity for new affordances to become visible (and possible)

Some months ago, at a talk about capoeira, a Brazilian martial art that combines attributes of dance and fight, my husband was approached by a student interested in applying capoeira's movements to the study of robotics. She wanted to explore how such movements could contribute to developing new robots.

As one can see in the video, capoeira explores many affordances of the human body, including flexibility, the adaptation of movements in response to the other, and rhythm, all synthesized in the word "ginga". If one has ginga, one has all of these characteristics at once. This is why Brazilian soccer, strongly influenced by African people, is known for its ginga.

 

Whether a robot will be able to reproduce ginga, one cannot guarantee, but the fact that this African American student was engaged in that goal shows me that the characteristics of the designer also matter for the design itself. Depending on who is in charge of the task, the questions raised will differ substantially, as will the possibility of breaking design conventions and the path dependence on what has been created so far.

Unfortunately, from my perspective, breaking design conventions has nowadays become a strategy for companies to impose planned obsolescence on products and, consequently, to sell more. Maybe the motivations are grounded in the desire to improve things, but because most of the time the process lacks substance and diversity, with the same people thinking about the same products, transformations are more difficult to achieve.

The conversion of hard-copy books into digital ones is another example of the difficulty of understanding what digital media allow a book to be. Murray reminds us that the 500-year-old book can be considered an expansion of our memory, given that, with books, it is not necessary to remember everything that is written; instead, one only needs to know where to find what one is looking for. This is why the old book index, admittedly rare in hard copies, has become, in a different format, the main affordance of digital books (Ctrl+F). The Find tool makes the index unnecessary. This affordance also makes page numbers less important than before.

Designers could rethink the requirement of book margins, because printing is not mandatory for digital books. The conversion of the reader into an "interactor", in Murray's terms, is also a necessity. If the affordances of digital media allow such a change, why do companies keep merely translating the old book model into the new one, without giving the reader more centrality? I follow this discussion in the education and mobile learning fields, and one of the biggest development challenges in the area is taking advantage of the ubiquity, interactivity, and portability of mobile devices for educational purposes. This is why including teachers and students in the design teams of educational tools seems key to me. They can bring more grounded thinking and sensitivity to understanding the affordances of the digital world applied to their realities.

Diversity matters for better design.

If Latour's theory is difficult for you, your mindset is probably influenced by Western culture

Latour's theory is very sophisticated and should fall like an anvil on the heads of scholars who consider that new technologies are the result of internal dynamics and combinatorial evolution, "constructed mentally before they are constructed physically" (Arthur, 2009).

While Arthur's passive voice above testifies to the centrality of technology itself in his theoretical approach, Latour is clear in saying that behind an object there are innumerable mediators, from engineers to lawyers and, ultimately, corporations, which make technical objects, and human beings as well, into what he calls object-institutions. This powerful idea prevents one from understanding an object as simply made of matter. As the author explains, when in contact with a technical object, one stands at the end of an extensive process of proliferating mediators. De-blackboxing it means understanding these relations.

Technology and society are thus mutually embedded. Through delegation, objects are given actions that execute human tasks. To imagine a world where human beings would be independent of objects is to imagine a nonhuman world, in Latour's terms. Objects are agents, actants, and through the articulation of their characteristics with human beings they become part of the collective of humans and nonhumans.

For many people, Latour's perspectivism can sound difficult to understand. The symmetry between agents and actants, where responsibility for actions is shared, imposes a new way of looking at mediation. A human being is as different with a technical object in hand as the object is with the human. In that contact, they exchange competences and transform each other.

In this sense, the speed bump is partially a sleeping policeman, just as my cellphone is partially my father helping me wake up in the morning. In a study I conducted among the Kambeba, an indigenous community in the Amazon region, when I asked a mother how she would translate the word "computer" into the Kambeba language, she answered: "the man who knows". Unsurprisingly, she deeply understands what Latour is saying, thanks to the perspectivism characteristic of her culture. For us in Western culture, however, this articulation remains a blurred zone that needs to be better revealed.