
The War That Changed The World


(Fig. 1) Dr. J.W. Mauchly makes an adjustment to ENIAC, the massive computer he designed to assist the U.S. military during World War II. (Source: www.scienceclarified.com)

Abstract

This paper provides an overview of the technological advances made in the historical context of World War II, the institutions and individuals that played a key role in the creation of computers, and the impact of those advances on our current technology within the context of armed conflicts. It aims to look beyond the ‘what if’ scenarios and take a closer look at the moments in history that marked a before and after for computing technology, the key actors behind them, and how they shaped and defined the computers we use today and how we use them.

Topics, concepts and keywords:

  • History of technology development during and after World War II.
  • University research and government funding for the development of technology during wartime.
  • Computing, graphical interfaces for human interaction and combinatorial technologies.
  • Keywords: World War II, Department of Defense, DARPA, ARPA, ENIAC, Cold War, MIT, MIT Lincoln Labs.

Research questions

  • In what ways is the current technology around computers influenced by the technological achievements of World War II?
  • Within this context, under what circumstances does the combination of international conflict and government involvement with university research teams motivate the advancement of technology?

Introduction

In popular culture, it’s common to refer to our current times as the age of technology. We live in a world that is not only intrinsically tied to technology but also incredibly dependent on it. This trend is not entirely new; we have been influenced by technological advancements for a very long time, even before the invention of electricity. However, there is no denying that the pace at which technology advances has sped up drastically in the last half century. It wasn’t that long ago that we lived in a world without the Internet, cell phones, GPS, or digital cameras, just to name a few. More surprisingly, technology now advances so fast that each new generation quickly renders its predecessors obsolete.


(Fig. 2) Source: Google images.

Society marvels at new technological advances in different fields and wonders, “how is this possible?” The rapid pace and the mysterious, black-boxed character of modern technological advancement make it seem like something magical, almost inevitable and unstoppable. In order to demystify technology as an autonomous entity that magically evolves independently of us, it is important to ask: what happened 50-60 years ago that unleashed this phenomenon? Who played a part in it? And how did it affect the current state of our technology?

A snapshot in time

To begin to answer our question, it is necessary to look at what was happening in the world at the time. Upon analyzing this, we find that it was not one specific event but rather an interdependent chain of events that happened with perfect timing. On top of that, it wasn’t one specific individual, but instead a group of different actors and institutions whose actions determined the path technology would take in the future.

Even though technology remains very much present and a determining factor in later conflicts –and earlier inventions from World War I served as the ancestors on which new technology was built– no war had a greater impact on the technology of our current lives than World War II (1939-45).

It was a peculiar moment in history in which a unique combination occurred: the need for technological advances to defeat the enemy coincided with an intellectual flourishing of revolutionary ideas in the field. Government and private-sector funding joined forces with academic research institutions in the United States, such as MIT and Stanford, which not only contributed to the victory of the Allies but still resonates in the way we interact with technology in our everyday activities.

(Fig. 3) The transportation technology advances in World War Two included amphibious landing vehicles, aircraft carriers, vastly improved tank technology, the first appearance of helicopters in combat support roles, long range bomber aircraft and ballistic missiles. (Source: www.21stcentech.com)

Many types of technologies and scientific discoveries were adapted for military use. Major developments and advances happened in such a short period of time that it is difficult to study and analyze all of them in this limited space. To name just a few, we can point to the design advancements of weapons, ships, and other war vehicles, or to the improvements in communications and intelligence brought by devices such as radar, which allowed not only navigation but also the remote location of the enemy. Other fields drastically influenced by these advancements were medicine and the creation of biological, chemical, and nuclear weapons, the most notorious case being the atomic bomb.

On this subject, Dr. David Mindell of MIT brings attention to a few specific cases and their impact, both on the war and its outcome and on the current state of our technology:

We can point to numerous new inventions and scientific principles that emerged during the war. These include advances in rocketry, pioneered by Nazi Germany. The V-1 or “buzz bomb” was an automatic aircraft (today known as a “cruise missile”) and the V-2 was a “ballistic missile” that flew into space before falling down on its target (both were rained on London during 1944-45, killing thousands of civilians). The “rocket team” that developed these weapons for Germany were brought to the United States after World War II, settled in Huntsville, Alabama, under their leader Wernher von Braun, and then helped to build the rockets that sent American astronauts into space and to the moon. Electronic computers were developed by the British for breaking the Nazi “Enigma” codes, and by the Americans for calculating ballistics and other battlefield equations. Numerous small “computers”—from hand-held calculating tables made out of cardboard, to mechanical trajectory calculators, to some of the earliest electronic digital computers, could be found in everything from soldiers’ pockets to large command and control centers. Early control centers aboard ships and aircraft pioneered the networked, interactive computing that is so central to our lives today”. (Mindell, 2009).


(Fig. 4) The V-1 or “buzz bomb” was one of the early flying bombs used during World War II. (Source: www.learnnc.org)

(Fig. 5) Radar system in operation in Palau during World War II. (Source: www.learnnc.org)

The history of how all of these advancements came to be is fascinating, and it would be easy to get sidetracked into analyzing each of them. However, this paper does not aim to be a mere recounting of facts that are already very well documented by historians. Let’s take a look at the specific case of advances in computing, which is probably one of the biggest, if not the main, takeaways from World War II.

Even though ‘computing’ as a way of thinking and seeing the world –including machinery– had existed for a very long time before these events, there is no denying that the leap of the last 50-60 years has been enormous, and we owe it, in large part, to the research and funding achieved during and after World War II.

As a field, computing started formally in the 1930s, when renowned scholars such as Kurt Gödel, Alonzo Church, Emil Post, and Alan Turing published revolutionary papers, such as “On Computable Numbers, with an Application to the Entscheidungsproblem” (Turing, 1936), that established the importance of automatic computation and gave it mathematical structure and foundations.


(Fig. 6) Alan Turing, considered to be the father of computer science. (Source: www.biography.com)

The Perfect Trifecta: University Research Teams + Government Funding + Private Sector

Before World War II, the most relevant analog computing instrument was the Differential Analyzer, developed by Vannevar Bush at the Massachusetts Institute of Technology in 1929: “At that time, the U.S. was investing heavily in rural electrification, and Bush was investigating electrical transmission. Such problems could be encoded in ordinary differential equations, but these were very time-consuming to solve… The machine was the size of a laboratory and it was laborious to program it… but once done, the apparatus could solve in minutes equations that would take several days by hand” (Mindell, 2009).
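
To make the kind of problem concrete: an ordinary differential equation like the ones Bush’s machine integrated mechanically can be solved numerically in a few lines today. The sketch below uses Python as a modern stand-in (the Differential Analyzer itself was an analog, mechanical device), and the RC-circuit equation and values are illustrative assumptions, not taken from Bush’s work.

```python
# Toy example: the kind of ordinary differential equation the Differential
# Analyzer mechanized, integrated here numerically with Euler's method.
# Model (illustrative only): a capacitor charging, dv/dt = (V_in - v) / (R*C).

def euler_solve(v0, v_in, r, c, dt, steps):
    """Integrate dv/dt = (v_in - v) / (r * c) with a fixed time step."""
    v = v0
    history = [v]
    for _ in range(steps):
        dv_dt = (v_in - v) / (r * c)
        v = v + dv_dt * dt
        history.append(v)
    return history

voltages = euler_solve(v0=0.0, v_in=10.0, r=1000.0, c=0.001, dt=0.01, steps=500)
print(f"Voltage after 5 seconds: {voltages[-1]:.2f} V")  # approaches 10 V
```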

(Fig. 7) Vannevar Bush (1890–1974) with his differential analyzer. Bush joined MIT at age 29 as an electrical engineering professor and led the design of the differential analyzer. During World War II, he chaired the National Defense Research Committee and advised President Franklin D. Roosevelt on scientific matters. (Source: Computer History Museum)

During World War II, the U.S. Army commissioned teams of women at the Aberdeen Proving Ground to calculate ballistic tables for artillery. These were used to determine the angle, direction, and range at which to fire in order to hit the target more effectively. However, the process was error-prone and time-consuming, and the teams could not keep up with the demand for ballistic tables. In light of this, the Army commissioned the first computing machine project, the ENIAC, at the University of Pennsylvania in 1943: “The ENIAC could compute ballistic tables a thousand times faster than the human teams. Although the machine was not ready until 1946, after the war ended, the military made heavy use of computers after that” (Denning, Martell, 2015).
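
To give a sense of what such a table contains, here is a toy sketch in Python. Real firing tables accounted for air drag, wind, temperature, and shell characteristics; this sketch only uses the simple vacuum-trajectory formula, and the muzzle velocity is an invented illustrative number, not data from Aberdeen.

```python
# A toy "ballistic table" in the spirit of the hand calculations ENIAC replaced.
# Simplification: no air resistance, so range = v^2 * sin(2*theta) / g.
import math

G = 9.81  # gravitational acceleration, m/s^2

def vacuum_range(muzzle_velocity, elevation_deg):
    """Horizontal range of a projectile with no air resistance."""
    theta = math.radians(elevation_deg)
    return muzzle_velocity ** 2 * math.sin(2 * theta) / G

muzzle_velocity = 450.0  # m/s, illustrative value only
print("elev (deg)   range (m)")
for elevation in range(15, 80, 15):
    print(f"{elevation:9d}   {vacuum_range(muzzle_velocity, elevation):9.0f}")
```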

(Fig. 8) 1946, ENIAC programmers Frances Bilas (later Frances Spence) and Betty Jean Jennings (later Jean Bartik) stand at its main control panels. Both held degrees in mathematics. Bilas operated the Moore School’s Differential Analyzer before joining the ENIAC project. (Source: Computer History Museum).

This is one of the first examples of government and university research teams combining forces to fund and advance technology. However, it is worth noting that this was not the only such project in the world at the time. In fact, the only one completed before the war was over was the top-secret project at Bletchley Park, UK, which cracked the German Enigma cipher using methods designed by Alan Turing (Denning, Martell, 2015).

Nevertheless, projects such as ENIAC (1943, US), UNIVAC (1951, US), EDVAC (1949, US, a binary serial computer), and EDSAC (1949, UK) provided ground-breaking achievements that later allowed for the design of more efficient, reliable, and effective computers: “Even relatively straightforward functions can require programs whose execution takes billions of instructions. We are able to afford the price because computers are so fast. Tasks that would have taken weeks in 1950 can now be done in the blink of an eye” (Denning, Martell, 2015).

These projects sparked the flourishing of ideas that transformed computing into what it is today. Computers changed from being mere calculators to being information processors, and pioneers John Backus and Grace Hopper played a key role in that shift. In 1957, Backus led a team that developed FORTRAN, a language for numerical computations. In 1959, Hopper led a team that developed COBOL, a language for business records and calculations. Both programming languages are still used today: “With these inventions, the ENIAC picture of programmers plugging wires died, and computing became accessible to many people via easy-to-use languages” (Denning, Martell, 2015).

(Fig. 9) 1952, Mathematician Grace Hopper completes A-0, a program that allows a computer user to use English-like words instead of numbers to give the computer instructions. It possessed several features of a modern-day compiler and was written for the UNIVAC I computer, the first commercial business computer system in the United States. (Source: Computer History Museum).

The role of government funding during this period was essential, but it went beyond simply granting money to university research teams. In February 1958, President Dwight D. Eisenhower ordered the creation of the Advanced Research Projects Agency (ARPA), today the Defense Advanced Research Projects Agency (DARPA), an agency of the United States Department of Defense whose mission is the development of emerging technologies for use by the military. International armed conflict not only played a part in the creation of this agency; it was the reason behind it. On the climate in which it was created:

“ARPA [originally] was created with a national sense of urgency amidst one of the most dramatic moments in the history of the Cold War and the already-accelerating pace of technology. In the months preceding [the creation] … the Soviet Union had launched an Intercontinental Ballistic Missile (ICBM), the world’s first satellite, Sputnik 1… Out of this traumatic experience of technological surprise in the first moments of the Space Age, U.S. leadership created DARPA” (Official website).

The agency states its purpose clearly: “the critical mission of keeping the United States out front when it comes to cultivating breakthrough technologies for national security rather than in a position of catching up to strategically important innovations and achievements of others” (Official website). From this description, it is not difficult to conclude that tension between countries due to armed conflict definitely affects their willingness to invest in the creation of new technology.

The projects this agency has funded since its creation have provided significant technological advances that have had an impact not only on military uses but on many other fields. The most ground-breaking include the early stages of computer networking and the Internet, in addition to developments in graphical user interfaces, among others.

(Fig. 10) 1962, J. C. R. Licklider, first director of DARPA’s Information Processing Techniques Office (IPTO) discusses concepts with students at MIT. (Source: DARPA)

Along the same lines as DARPA, the Department of Defense, in collaboration with the Massachusetts Institute of Technology, created the MIT Lincoln Laboratory as a research and development center focused on the application of advanced technology to problems of national security: “Research and development activities focus on long-term technology development as well as rapid system prototyping and demonstration… The laboratory works with industry to transition new concepts and technology for system development and deployment” (Freeman, 1995).

Other organizations, like the Stanford Research Institute, also started from a combination of university and government funding after World War II and continue to develop technology to improve the lives of the public. Among its accomplishments are the first prototype of a computer mouse and inkjet printing, and it was involved in the early stages of ARPANET.

When the future becomes now

Many people involved in the projects created during World War II went on to start computer companies in the early 1950s. Universities began offering programs of study in the new field by the late 1950s. More specifically, computer science programs were founded in 1962 at Purdue University and Stanford University, facing early criticism from scholars who believed there was nothing new outside of mathematics and engineering. “The field and the industry have grown steadily ever since, into a modern behemoth whose Internet connections and data centers are said to consume over 3% of the world’s electricity” (Denning, Martell, 2015).

Over the years, computing provided new insights and developments at such a pace that, in a matter of a few decades, it advanced further than many other fields had since their creation: “By 1980 computing had matured in its understanding of algorithms, data structures, numerical methods, programming languages, operating systems, networks, databases, graphics, artificial intelligence, and software engineering” (Mindell, 2009).

The first forty years or so of the new field were focused on developing and perfecting computing technology and networks, providing ground-breaking results that made it better suited for combination with other technologies and for further advancement. In the 1980s another shift started in the field: the interaction with other disciplines and the computational sciences: “Recognizing that the computer itself is just a tool for studying information processes, the field shifted its focus from the machine itself to information transformations” (Denning, Martell, 2015).

The biggest advances of this field have been integrated into our world seamlessly, shaping not only our lives but the way we see and interact with that world. Design achievements such as the microchip, the personal computer, and the Internet not only introduced computing into the public’s lives but also sparked the creation of new subfields. This effect replicates itself almost like a cycle, explain Denning and Martell: “Network science, web science, mobile computing, enterprise computing, cooperative work, cyberspace protection, user-interface design, and information visualization. The resulting commercial applications have spawned new research challenges in social networks, endlessly evolving computation, music, video, digital photography, vision, massive multiplayer online games, user-generated content, and much more” (Denning, Martell, 2015).

(Fig. 11) Evolution of the computer. (Source: Google Images)

David Mindell clearly expresses this marvelous achievement: “Perhaps the single most remarkable development was that the computer—originally designed for mathematical calculations—turned out to be infinitely adaptable to different uses, from business data processing to personal computing to the construction of a global information network”. (Mindell, 2009)

Conclusion

What if World War II hadn’t happened? Would our current technology be at the stage that it is today? In what ways would it be different? How long would it have taken us to achieve these technological advancements if military conflict wasn’t present in the context?

Such hypothetical questions plagued my mind when I started this research, and there is no clear answer to them. The impact World War II had on society is undeniable and impossible to measure. The world was never the same in any aspect, and no field was left untouched by it. International relations and diplomacy, with the creation of the UN and universal human rights, and world politics, especially in Europe, were forever changed, leading to dictatorships and more armed conflict within the region. Other fields, such as physics, biological weaponry, engineering, medicine, and genetics, to name just a few, went through drastic changes as well, sparked by the events of this period, which in turn led to future conflicts such as the Cold War and the development of nuclear weapons by various nations.

At the core of all these changes is technology. World War II and its impact on the development and advancement of technology shaped the world as we know it now, in ways that we’re still trying to comprehend and address.

Would technology be less mature, robust, or advanced if World War II hadn’t happened? Probably, but the difference would be more a change of pace than a different path. There were astounding technological advances before the war, and there are still technological achievements today that are not sparked by military conflict. However, wartime stimulates inventiveness and advancement because governments become more willing to spend money, with urgency, on revolutionary and sometimes risky projects.

In the specific case of World War II, the creation of computers was the result of different actors and institutions (universities, government agencies, computer scientists, and researchers), with varied interests, pushed by armed conflict to work together at exactly the right time, in one of the most drastically world-changing cases of serendipity in history. It is the ‘before-and-after’ not only of our generation but of our civilization.

 

 

References

Texts:

  • Campbell-Kelly, Martin. “Origin of Computing.” Scientific American 301, no. 3 (September 2009): 62–69.
  • DARPA official website: https://www.darpa.mil/about-us/timeline/where-the-future-becomes-now
  • Denning, Peter J., and Craig H. Martell. Great Principles of Computing. Cambridge, MA: MIT Press, 2015.
  • Freeman, Eva C. MIT Lincoln Laboratory: Technology in the National Interest. Lexington, Mass.: MIT Lincoln Laboratory, 1995.
  • Geiger, Roger L. Research and relevant knowledge: American research universities since World War II. Transaction Publishers, 2008.
  • Hall, Daniel and Lewis Pike. If the World Wars hadn’t happened, would today’s technology be less advanced? Guru Magazine, web source: http://gurumagazine.org/askaguru/if-the-world-wars-hadnt-happened-would-todays-technology-be-less-advanced/
  • Mindell, David. The War That Changed Your World: The Science and Technology of World War II. Introductory essay for the exhibition “Science and Technology of World War II” at the National WWII Museum, 2009. Web source: http://www.ww2sci-tech.org/essays/essay2.html

Images:

  • Fig. 1: http://www.scienceclarified.com/scitech/Artificial-Intelligence/The-First-Thinking-Machines.html
  • Fig.2: Google Images.
  • Fig. 3: http://www.21stcentech.com/technology-war-part-3-war-impact-transportation-technology/
  • Fig. 4 and 5: http://www.learnnc.org/lp/editions/nchist-worldwar/6002
  • Fig. 6: https://www.biography.com/people/alan-turing-9512017
  • Fig. 7: http://www.computerhistory.org/revolution/analog-computers/3/143
  • Fig. 8: http://www.computerhistory.org/revolution/birth-of-the-computer/4/78
  • Fig. 9: http://www.computerhistory.org/timeline/1952/#169ebbe2ad45559efbc6eb35720dca99
  • Fig. 10: https://www.darpa.mil/about-us/timeline/ipto
  • Fig. 11: https://robertocamana.files.wordpress.com/2014/08/articulo-no-140.jpg

Technology and War (WIP)

Topics and concepts:

  • History of technology development during and after World War II and Cold War.
  • Graphical interfaces for human interaction and combinatorial technologies.
  • University research and government funding for the development of technology during wartime.

Research questions

  • Would the technology around computers be where it is today, or less advanced, if World War II hadn’t happened?
  • Under what circumstances does the combination of international conflict and government involvement motivate, and in other cases prevent, the advancement of technology?

I am framing both questions in the historical context of World War II and the Cold War to analyze beyond the ‘what if’ scenarios and to specify the key moments in history that marked a before and after for the technology around computers, who the key actors behind them were, and how they shaped and defined the computers we use today and how we use them.

Bibliography

My starting point is this short article on Guru Magazine that attempts to answer this hypothetical question. I’ll use it as a guide for the research questions but not as a source of facts since I believe it is a little bit outdated.

For main and general concepts:

  • Denning, Peter J. “Great principles of computing.” Communications of the ACM 46.11 (2003): 15-20.

I’ll also add some of the readings during week 9 of our course: “Computers as Information Processors & Metamedia Interfaces”:

  • Video Documentary: Alan Kay on the history of graphical interfaces: Youtube | Internet Archive
  • Vannevar Bush, “As We May Think,” Atlantic, July, 1945. (Also etext version in PDF.): a visionary and influential essay anticipating many aspects of information society. Because of his concern for the direction of scientific efforts toward destruction, rather than understanding, Bush introduces his concept of the memex, a collective memory machine that would make knowledge more accessible.
  • J. C. R. Licklider, “Man-Computer Symbiosis” (1960) | “The Computer as Communication Device” (1968): “In many ways, his initiatives and the teams of engineers he got funded are the bridge between war-time and Cold War computing and computers as we know them now. Note the attempt to work with the “symbiosis” metaphor as a way to humanize computing, and proposing interaction concepts that could not be technically implemented at the time. He was working toward a model of computing as an interactive cognitive artefact” (Irvine).
  • Ivan Sutherland, “Sketchpad: A Man-Machine Graphical Communication System” (1963): “Expanding on techniques for screen interfaces for military radar, Sutherland was way ahead of his time, and it took many years for the whole combinatorial array of technologies to catch up for implementing the concepts. The Sketchpad concepts inspired Engelbart, Alan Kay (Dynabook), and the development of all screen interactive technologies that we know today” (Irvine).
  • Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework.” First published, 1962. As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003: “Engelbart is best known for inventing the graphical interface, the “desktop computer metaphor,” the mouse, and the hyperlink. His research and development teams at Stanford in the 1960s-70s were influenced by Vannevar Bush’s vision and motivated by a new conception of computers not simply as business, government, or military machines for instrumental ends but as aids for core human cognitive tasks that could be open to everyone. His approach to computer interfaces using interaction with a CRT tube display (early TV screen) launched an extensive HCI engineering/design and computing community around user interaction and “augmenting” (not replacing or simulating) human intelligence and cognitive needs” (Irvine).

In addition to these I will add a source on how university research had an impact on the advancement of technology during World War II and Cold War:

  • Geiger, Roger L. Research and relevant knowledge: American research universities since World War II. Transaction Publishers, 2008.

I’m still not entirely sure how to apply some of the other concepts we’ve learned during the class, or how to make the project more specific with a case study. I guess I could talk specifically about the development and advancement of computers in this context, but I would have to leave so much out.

 

 

References:

The annotations for the sources cited under “week 9” were written by Professor Irvine.

The financial revolution waiting to happen

I’m old enough to remember a world without the Internet and smartphones. However, I’m now entirely dependent on them. When I think about how my parents did anything in the past, it blows my mind: getting in contact with each other when you weren’t at home (I remember using pay phones and collecting cool historical phone cards), or getting around without GPS (I remember the many maps in our car during road trips). The list of examples goes on and on: photography, video, communication, banking, etc.

Everything has been revolutionized by the Internet and access to the World Wide Web. As Ron White mentioned in How Computers Work: “The web is changing how we do everything and creating new standards for commerce, education, and communication” (p. 367).

There are many examples we can unpack to analyze the combinatorial and modular design principles we’ve studied.

Case study:

Coinbase and Cryptocurrencies


Source: steemit.com

For the case study I decided to tackle something that I do not fully understand but that intrigues me: cryptocurrency, and Coinbase as an interface for it.

If you’re like me and extremely unfamiliar with the term, don’t worry: it is relatively new, or at least only recently widely talked about. What is it and how does it work? Here is the most accessible explanation I found: “Cryptocurrencies are digital currencies using encryption techniques that regulate the generation of currency and verify the transfer of funds, operating independently of a central bank. Units of currency are created through a process referred to as mining. In the case of Bitcoin, miners run computer programs in order to verify the data that creates a complete transaction history of all Bitcoin. This process of verification is made possible by a technology known as the blockchain, which is used to create irreversible and traceable transactions. Once a miner has verified the data (which comes in a block, hence, blockchain), they are rewarded with some amount of digital currency, the same currency for which they were verifying the transaction history. So mining Bitcoin, for example, would earn you Bitcoin” (web: Investopedia).
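
The “irreversible and traceable” property mentioned in that explanation comes from chaining hashes: each block stores the hash of the block before it. Below is a minimal sketch of that idea in Python; it deliberately leaves out everything that makes real networks work (proof-of-work mining, digital signatures, peer-to-peer consensus), and the transactions are invented.

```python
# A minimal sketch of the "chain" in blockchain: each block stores the hash of
# the previous block, so altering any past transaction breaks every later hash.
import hashlib
import json

def block_hash(contents):
    """Hash a block's contents (which include the previous block's hash)."""
    payload = json.dumps(contents, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    contents = {"transactions": transactions, "prev_hash": previous_hash}
    chain.append({**contents, "hash": block_hash(contents)})

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 1.5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 0.7}])

# Tampering with an old block is detectable: its stored hash no longer matches.
chain[0]["transactions"][0]["amount"] = 999
recomputed = block_hash({"transactions": chain[0]["transactions"],
                         "prev_hash": chain[0]["prev_hash"]})
print("Chain still valid?", recomputed == chain[0]["hash"])  # False
```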

If you haven’t been following the latest tech trends, you’ve probably only heard the term Bitcoin vaguely mentioned, without much explanation behind it. Cryptocurrencies are, to me, one of the most intriguing cases of how the Internet and the World Wide Web have revolutionized our lives.

Banking going online has made our lives easier in many ways: we rarely have to physically go to the bank for anything anymore. We can check our accounts, deposit checks and even open accounts without ever setting foot inside a branch or talking to an associate, among other things. Let’s not forget about the ecological benefit of going paperless. It’s brilliant.

But the revolution doesn’t stop there. The creation of cryptocurrencies, which trade very much like stocks, is not only fascinating but also comes with both benefits and vulnerabilities.

What is Coinbase?

Just like with banks and stocks, if you want to trade cryptocurrencies you need an intermediary. In this specific case, you need the interface or the platform to communicate with the blockchain network that is tracing and ‘mining’ these transactions. That’s where Coinbase comes in: “Coinbase is global digital asset exchange company (GDAX), providing a venue to buy and sell digital currencies, as well as send information about those transactions out to the blockchain network in order to verify those transactions. Coinbase serves as a Bitcoin, Ethereum and Litecoin wallet, too, where the digital currencies can be stored. The application operates exchanges of Bitcoin, Ethereum and Litecoin, as well as other digital assets with fiat currencies in 32 countries, and Bitcoin transactions in many more countries. According to their website, Coinbase has served over 8.2 million customers, and facilitated the exchange of more than $6 billion worth of digital currency”. (web: Investopedia).

How does it work?

In order to use Coinbase you need to create a free account and link a card or bank account to it. In this way, you basically lose the privilege of anonymous transactions, since your name is attached to them: you’re putting your trust in Coinbase the same way you do with your bank. It doesn’t necessarily have to be this way, and Coinbase isn’t the only platform out there that deals with cryptocurrency, but I believe this can be both an advantage and a vulnerability.

After you set up your account and go through the verification process, you’re free to check the current prices of three cryptocurrencies: Bitcoin, Ethereum, and Litecoin. If you decide to buy any of them, you have to follow simple rules set by Coinbase to control the transaction: there is a value limit per week, a waiting time for your transaction to be verified that varies depending on your payment method, and a small fee paid to Coinbase for the purchase. The platform also allows you to set price alerts and schedule future purchases ahead of time, to buy things with your wallet (wherever it’s accepted), and to ‘cash out’ your wallet to your PayPal account.
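
As a rough illustration of those rules (a weekly limit plus a small fee), here is a hypothetical sketch in Python. The limit and fee rate below are invented for the example and are not Coinbase’s actual numbers or API.

```python
# Hypothetical sketch of the purchase rules described above: a weekly limit and
# a flat percentage fee. These numbers are made up for illustration only and
# are NOT Coinbase's actual fee schedule or limits.
WEEKLY_LIMIT_USD = 500.00   # hypothetical weekly purchase limit
FEE_RATE = 0.015            # hypothetical 1.5% platform fee

def quote_purchase(amount_usd, spent_this_week_usd):
    """Return (total_charged, fee), or raise if the weekly limit is exceeded."""
    if spent_this_week_usd + amount_usd > WEEKLY_LIMIT_USD:
        raise ValueError("Purchase would exceed the weekly limit")
    fee = amount_usd * FEE_RATE
    return amount_usd + fee, fee

total, fee = quote_purchase(amount_usd=100.00, spent_this_week_usd=250.00)
print(f"You pay ${total:.2f} (includes a ${fee:.2f} fee)")
```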

Coinbase works under combinatorial principles because it didn’t invent or come up with technology that didn’t already exist. It uses the same principles your bank’s app uses: the Internet, data transmission, your camera for verifying your identity (you take a picture of your license), etc.

I have doubts about how stable the cryptocurrency market can be compared to the stock market, and about the role government control will play in it. I also believe we’re experiencing just the very beginnings of this e-commerce world, and that blockchain technology can go way beyond financial transactions: “Despite the intricate technology associated with and necessary for cryptocurrency investing, speculation and possession, Coinbase has created an apparatus that makes this process remarkably easy and familiar, almost like buying and selling stocks” (web: Investopedia).

As an end note, I would add that if you want to trade cryptocurrency, do it wisely. It is still very new and largely unknown. I personally decided to do it, and to use Coinbase specifically, because it was recommended to me by a high government official who assured me Coinbase was the only platform working and in communication with the US government… just a tip!

Last but not least, I encourage you all to learn about net neutrality and get involved. Here I leave you with a very fun and easy to understand explanation by John Oliver:

 

References:

  • Ron White, How Computers Work. 9th ed. Que Publishing, 2007. “How the World Wide Web Works.” (excerpts).
  • Coinbase: What Is It and How Do You Use It? | Investopedia https://www.investopedia.com/tech/coinbase-what-it-and-how-do-you-use-it/#ixzz4zqV7ABsT
  • Last Week Tonight With John Oliver. John Oliver, HBO, 2017.

What is the Internet and how do ‘those tubes’ work?

Deborah Oliveros

When I was watching the introductory video for this week’s readings and Professor Irvine started explaining how the Internet is not a ‘thing’, I was immediately reminded of one of my favorite episodes of the British sitcom “The IT Crowd” (2008), in which the ‘Internet’ is introduced to one of the characters.

If you’re not familiar with the show let me summarize it quickly. Two IT guys, Moss and Roy, are the only two employees in the IT department of a big company. Their office is hidden in the basement of the building and they spend their days dealing with mundane requests from employees that do not understand basic technology. In fact, they usually answer the phone with “Hello, IT department, did you try turning it on and off again?”. Jen is an HR employee that has been ‘promoted’ to the IT department, but she doesn’t understand the first thing about computers or technology in general.

In the episode aptly named “The Speech”, Jen has to give a speech when she is named the employee of the month. Roy and Moss are jealous of Jen for winning an award she most definitely doesn’t deserve, so they decide to play a prank on her and give her ‘The Internet’ in a box to use as a visual aid.

Moss introduces Jen to the Internet.

In the clip, the Internet is a literal black box with a blinking red light. Both Moss and Roy use technical words and vocabulary to describe it so that Jen, knowing nothing about it, believes it completely:

Jen: “wait, it has no wires”

Moss: “everything is wireless now”

Jen: “it’s so light!”

Moss: “the Internet doesn’t weigh anything”

Roy: “wait, Moss, has it been de-magnetized?”

Moss: “by Stephen Hawking himself”

And so on. Jen buys it and decides to give ‘The Internet Speech’ to a group of executives in the company. To Moss and Roy’s surprise, and the audience’s as well, the executives don’t catch the prank because they also don’t know what the Internet is. Jen calls them “ordinary folk” and goes on to explain how our civilization would fall apart if something happened to the black box. The end of the speech is an amazing moment of human-behavior comedy because something does happen to the black box:

Jen gives “The Speech” about the Internet as a black box.

When I watched the episode for the first time, I cried laughing: how could she believe the Internet was a black box with a blinking red light? But, before this week’s readings, there was also no way I could have explained what the Internet is and how it works. It also made me think about how, just like the executives at Jen’s company, many people in positions of power don’t fully understand the Internet or technology in general, yet they are in charge, in many cases, of setting the regulations, laws, and policies regarding its use and the way it impacts society.

An example you might have heard of came in 2006, when the late senator Ted Stevens (R-Alaska) described the Internet as a “series of tubes” in the context of opposing network neutrality:

“Ten movies streaming across that, that Internet, and what happens to your own personal Internet? I just the other day got… an Internet was sent by my staff at 10 o’clock in the morning on Friday. I got it yesterday [Tuesday]. Why? Because it got tangled up with all these things going on the Internet commercially.

[…] They want to deliver vast amounts of information over the Internet. And again, the Internet is not something that you just dump something on. It’s not a big truck. It’s a series of tubes. And if you don’t understand, those tubes can be filled and if they are filled, when you put your message in, it gets in line and it’s going to be delayed by anyone that puts into that tube enormous amounts of material, enormous amounts of material.”(Singel, 2006).

 

https://www.instagram.com/p/BFz69qBudtz/?hl=en

Titus explains to Kimmy how the Internet “is a series of tubes”.

Later on, a case could be made for the senator’s wording: the Internet is, in fact, a series of tubes, or more aptly, a series of cables, as Sarah Kliff explains in her Washington Post article. If you have time, there is also a fascinating Popular Science article titled “Who Protects the Internet?” that explains how private companies drag up and repair underwater Internet cables, with a closer look at the Terremark facility in Miami, where those underwater cables come ashore and keep us connected.

Even though one can argue that the Internet is a series of tubes when talking about hardware and interfaces, the reality is that the issue is much more complex. What exactly “goes through” those tubes?

Professor Irvine mentions that “the Internet isn’t a ‘thing’ but an enacted system of agencies and technical mediations,” and later, more specifically, when talking about the Internet and Web as distributed systems, that “the Internet… is enacted and performed as an ‘orchestrated combinatorial complexity’ by many actors, agencies, forces, and design implementations in complex physical and material technologies” (p. 1).

A lot of things come into play when it comes to the Internet. As Ron White expresses in How Computers Work, “it would be a lot easier to explain how the Internet works if you could hold it in your hand… The Net is not just a single thing; it is an abstract system” (p. 308). To illustrate this, he explains how the Internet is similar to a living organism, comparing it to the human body, whose molecules are not the same all the time: “No matter which molecules make up your hair and eyes and fingers, at any moment, the structure of your body remains the same. Your heart doesn’t refuse to pump because new molecules of blood are created. If you remove some parts of your body, the system continues to function” (p. 308).

When we read the history of the Internet (ARPANET, etc.) and how it was a combination of different advances that occurred with perfect timing, and later how it was made accessible to civilians and to advertising, it is certainly overwhelming to think not only of how fast it has developed in such a short time, but also of how dependent we are on ‘these tubes’, and to wonder not only about the ‘what ifs’ of the past but also about the ‘what ifs’ of the future.

Next week, when we explore the World Wide Web, I will go further into analyzing and understanding this abstract system. I’m specifically interested in the concepts of packets and protocols. For this week, however, I want to focus on the aspect of vulnerability and the ‘doomsday’ scenarios.
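
As a small preview of the packet concept I just mentioned, here is a toy sketch in Python of the basic idea: a message is cut into numbered chunks that can travel independently and be reassembled in order. It is only an illustration of the concept, not an implementation of IP or TCP.

```python
# Toy sketch of packets: split a message into numbered chunks, let them arrive
# out of order, then reassemble them by sequence number.
import random

def to_packets(message, size=8):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"seq": i, "data": chunk} for i, chunk in enumerate(chunks)]

def reassemble(packets):
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("The Internet is not a big truck. It's a series of tubes.")
random.shuffle(packets)      # packets may arrive in any order
print(reassemble(packets))   # sequence numbers put the message back together
```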

Physically, in its hardware and interfaces, the Internet is very vulnerable, unexpectedly so, and not only in relation to the protection of hubs around the world or the buildings that house the servers and connect to the underwater cables. Although “Who protects the Internet?” is a very fair question to ask, it is also fair to question our own limitations in designing ways to protect it, either physically or with social systems such as regulations and laws.

Going back to the example of the human body to understand how abstract the Internet is, we could use the same analogy to explain why it is so important, relevant, and imperative for us to understand what the Internet is and how it works. I presented this issue to my roommate, an international economic lawyer who spent this summer in Switzerland studying the possibilities and limitations of implementing regulations (and sanctions) on e-commerce at a global level. Coincidentally, she also used the human body as an example: you need to understand how your body works in order to have an active role in controlling your health; it doesn’t mean you’re going to become a doctor overnight and perform surgery on yourself, but you need to know what it is and how it works.

The same principle can be applied to technology and the Internet. Being oblivious to how this black box works is so normalized that it’s even embedded in our pop culture as a joke, which makes me think that at least a small part of the problem might be cultural.

 

References:

Computational Thinking is Everywhere (as it should be)

I have to admit that, up until now, the words ‘computational thinking’ and ‘coding’ seemed not only foreign but unreachable to me. This personal preconception is starting to change, slowly but surely. After this week’s readings and activities, I’m still positive that I can’t code yet, but now I know that it is possible to understand and use its concepts and principles to think about the way I interact with technology and the way I actively design my everyday activities. It is also worth noting that this false preconception of computing and coding as something difficult and inaccessible is a main flaw in our educational system and mainstream media, and I agree with Jeannette Wing’s call for a different educational approach in her article “Computational Thinking”.

In this short but poignant article, Wing demystifies the way people think about computational thinking. She lists many ways in which computational thinking is embedded in our everyday activities. We use it without noticing, mainly because computational thinking is, in my opinion, ‘human thinking’: “… it is using abstraction and decomposition when attacking a large complex task or designing a large complex system… Computational thinking is planning, learning, and scheduling in the presence of uncertainty” (p. 1).

This last statement caught my attention. When she lists common examples of everyday activities in which computational thinking is very present, such as gathering the things you need before you leave the house, retracing your steps if you lose something, or choosing a check-out line at the supermarket, it is clear that uncertainty is not only a common factor but the motivator for this pattern of behavior. It occurred to me that I’ve been actively using computational thinking my whole life, but I’ve been calling it ‘logical thinking’, or in extreme cases ‘common sense’.
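
To see how small the gap is between that ‘common sense’ and a program, here is Wing’s check-out line example written out as a few lines of Python. The numbers are invented; the point is the decomposition, not the data.

```python
# Wing's everyday example as a tiny algorithm: pick the check-out line with the
# shortest estimated wait, using total items to scan as a rough proxy for time.
lines = {
    "line_1": {"carts": 3, "avg_items_per_cart": 25},
    "line_2": {"carts": 5, "avg_items_per_cart": 8},
    "line_3": {"carts": 2, "avg_items_per_cart": 40},
}

def estimated_wait(line):
    return line["carts"] * line["avg_items_per_cart"]

best = min(lines, key=lambda name: estimated_wait(lines[name]))
print("Join", best)  # line_2: 40 items to scan vs 75 and 80
```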

To me it seems like an obvious way of thinking. We might often find ourselves thinking ‘why do people do X when they could do Y, and it would be so much easier, faster, cheaper, better, etc.’ Therefore, what might seem an evident and inherent way of human thinking doesn’t always turn out to be that common. As the saying goes: common sense is the least common of the senses.

Here are some funny examples of design fails:

(Fig. 1)

(Fig. 2)

Wing says, “Computational Thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science” (p. 1), and this clearly relates to two facts: computer science concepts are human thinking concepts, and we can reformulate problems so that a known pattern can be applied to solve them.

We’ve been reading about and understanding the idea that technology is designed (by us) to do everything it does. Therefore, the ‘magic’ behind technology is the ‘magic’ behind human thinking and the human condition. In that regard, Wing says that one of the characteristics of computational thinking is that it is “a way that humans, not computers, think… it is a way humans solve problems… We make computers exciting. With computing devices, we use our cleverness to tackle problems we didn’t before the age of computing” (p. 3).

This idea resonated deeply with me. When coming to CCT, I told myself that the reason I wanted the “technology” part in my education was that technology seems to be advancing so fast that we, as a society, cannot keep up with it and always seem to be one step behind, solving problems caused by technology instead of anticipating them… but I’m starting to think I was looking at it from the wrong perspective. Yes, in many ways we are behind technology (our laws are a clear example), but technology is made by us and designed by us to do what it does. Therefore, in order to solve the problems caused by technology, we have to use computational thinking, the same thinking we used to design that technology, and more importantly, we need to think about these outcomes when we are designing technology. We should be using computational thinking more actively when it comes to solving problems related to technology, as actively as we use it, almost unconsciously, in everyday activities.

Learning and understanding how to talk to my computer: Python

This was my first encounter with a coding language, and I have to say it was less scary than I originally thought and I enjoyed it (at the beginning) more than I expected. Before starting the lessons on Python, I took a few minutes to navigate the Code Academy website and found myself excited and interested in what it offers. The fact that this knowledge is so accessible, both free and understandable, seems almost shocking to me.

I know the word ‘language’ in ‘coding language’ is pretty self-explanatory, but I was still a little surprised by the similarities I found between Python, natural languages, and music. During the first lesson I found myself thinking ‘oh, this is like learning a new language’ and immediately rolled my eyes at myself, because that is exactly what I was doing.

(Fig. 3) Python. First Lesson. Deborah Oliveros. Code Academy. Quote: William Shakespeare.

Because of my experience with languages, I could see the similarities regarding the use of symbols to represent meaning and to represent and interpret other symbols (such as variables). It seemed like I was learning a new language with an alphabet different from mine, as I would be if I were learning Chinese, Korean, or Arabic.

However there is an extra element: the console. Even though the whole thing can be described as a translator of sorts, I related it more to playing an instrument. I play, although not very well, the guitar and the ukulele. To play music there is also an alphabet assigned to notes or chords. Now, if you’re familiar with The Sound of Music then you already know this:

(Fig. 4) Music alphabet starting with Do(C).

Depending on the instrument you’re playing, the note Do (C) will require a different positioning of your fingers, but the sound will be the same. You can play C on every instrument and it will be the same note, but the way you play it changes. That way, if you know what the position of your fingers should be to play C on a piano, a guitar, or a ukulele, then you can play any song as long as you have the lyrics with chords. You can basically teach yourself how to play any instrument, because the universal musical alphabet, or language, lets you convert and interpret these symbols from one instrument to another. In my case, I learned the basic chords on the guitar and learned the alphabet; with that information I taught myself how to play the ukulele and briefly applied the same pattern to a melodica and a piano.

(Fig. 5) Chords chart for “Love Is a Losing Game” by Amy Winehouse.

If you look at this image, you see the lyrics of the song and, above them in blue, the chords of the musical alphabet telling you what note to play at what time. Although this is from a guitar chords website, I can use it to play this song on a ukulele, a piano, or any other instrument as long as I know the value of C, Dm7, Fdim, and Cmaj7 on those instruments. However, in this case I am acting as the console, or as the “print… X” function in Python, which brings us to the last characteristic of symbols expressed by Prof. Irvine in the introduction video:

“We use symbols (software) not only to represent meanings but to perform actions on other symbols” (Irvine). I have to act as the ‘print’ function for my musical instrument; I cannot tell my ukulele “play C now, then D, then B.” However, I can tell Python to print C, D, and B with 3 seconds between each to perform an action. This is the main difference I noticed when comparing these languages: I can ‘tell’ my computer to perform actions for me, actions that I can perform myself but not as fast and accurately as the computer.
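
A minimal sketch of that exact instruction, assuming we only want the note names printed to the console rather than actual sound:

```python
# Telling Python to print C, then D, then B, with 3 seconds between each,
# instead of "playing" them myself.
import time

notes = ["C", "D", "B"]
for i, note in enumerate(notes):
    if i > 0:
        time.sleep(3)  # wait 3 seconds between notes
    print(note)
```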

Another huge difference between Python and natural or musical language is that there is no room for mistakes. It is not as flexible as our native languages or musical notation, in which you don’t have to be 100% accurate to communicate what you want. In this case, the code has to be accurate and reliable every time in order to work:

(Fig. 6) Learning Python. Second Lesson. Deborah Oliveros.
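
For example, a single misspelled name is enough to stop a program that any human reader would have understood perfectly. This small example (my own, not taken from the Code Academy lesson) shows Python refusing to guess:

```python
# A single misspelled name stops the program, where a human listener would
# simply have understood what I meant.
try:
    prnt("To be, or not to be")   # misspelled "print"
except NameError as error:
    print("Python refuses to guess:", error)

print("To be, or not to be")      # the accurate version works as intended
```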

This is the part where I started to become less excited and more frustrated with the new language, and it was clear and evident that my lack of computing background and my severe aversion to math showed. I still think it is exciting, and I will probably use this website in the future because I would like to learn and ‘teach myself’ the same way I did with English and with musical language. However, the most important takeaway from this experience is realizing that I’ve been using this way of thinking unconsciously throughout my whole life: now is the time to start doing it consciously.

 

 

References:

  • Jeannette Wing, “Computational Thinking.” Communications of the ACM 49, no. 3 (March 2006): 33–35.
  • Figure 1 and 2: “Poor Design Decisions Fails”. Bored Panda Blog: https://www.boredpanda.com/poor-design-decisions-fails/
  • Figure 3 and 6. Code Academy. Learning Python. First and Second Lesson. Deborah Oliveros. Quote: William Shakespeare.
  • Figure 4. Musical Alphabet SOL(G). Music Notes 101 Blog: https://musicnotes101.wordpress.com/2010/04/20/the-musical-alphabet-clefs-the-musical-staff-and-the-keyboard/
  • Figure 5: “Love Is a Losing Game”, Amy Winehouse and Mark Ronson, Chord Chart from Ultimate Guitar Tabs: https://tabs.ultimate-guitar.com/a/amy_winehouse/love_is_a_losing_game_crd.htm
  • Martin Irvine. Key Concepts in Technology: Week 7: Computational Thinking & Software. Accessed October 25, 2017. https://www.youtube.com/watch?v=CawtLHSC0Zw&feature=youtu.be.

Information design: text message

(fig. 1) The app/keyboard Slated translates your messages to other languages in real time. In this specific case it’s translating English to Tagalog.

When we think about music, photos, videos, and text, and how we are able to produce and transmit them using technology, the process can be described as incredible, even magical, but it is very little understood. Even though, as users, it is not clear to us what is happening ‘behind the scenes’, we continue to use this medium to communicate and transmit this information. It might look like magic, but it was designed by us to perform that way and to be interpreted by us. Let’s further analyze the specific case of the text message.

We know how to interpret text messages, emails, pictures, or sounds when we receive them because the meaning comes from the symbolic system that surrounds them and the social use of those structures (Irvine, Intro to Information Theory in Meaning Systems). For example, when we read a text message and understand the words in it, it is because of language and symbols. We are not interpreting the digital bits that transmitted it to our device; we understand the meaning of the transmission.

(fig. 2) Martin Irvine, Introduction to the Technical Theory of Information (p. 7)

If we look at the image above, we can see that a very clear sequence of encoding and decoding is happening in this process, or is designed to happen, in order to transmit what our cognitive system produces, however we want to describe it: information, message, and/or meaning.

First, we take our social-cultural meaning; in this example it could be the alphabet of one specific language. Second, in “pattern matching,” each symbol is correlated to tokens that are specific to the medium being used, in this case the text message and whatever device we’re using to receive it. Third, those signals are transmitted from one device to another; for that to happen, they need to be converted into a form that can be registered and displayed on the receiver’s device. Then, it seems to me, there are two ‘decoding’ stages: first, our device decodes the digital signal into symbols on our screen, and then we decode those symbols into words or ‘meaning’.

However, we cannot separate the digital transmission from the meaning entirely, because the existence of the ‘meaning’ and the need to transmit it are the reason for having a digital signal transmission system designed to encode this ‘message’ or ‘meaning’ and transmit it to another device, which in turn decodes it into a symbolic structure that we can interpret. Therefore, the characteristics of our cognitive symbolic systems are a key part of the design of digital transmission systems, because we are the ones designing and interpreting them.
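
A minimal sketch of that encode/transmit/decode loop, assuming the message is text and the ‘signal’ is UTF-8 bytes (real messaging systems add many more layers, such as compression, encryption, and network protocols):

```python
# Symbols -> bytes -> bits -> bytes -> symbols: the "semiotic envelope" in
# miniature. The meaning survives because both ends share the same encoding.
message = "¡Nos vemos a las 8!"

encoded = message.encode("utf-8")                   # symbols -> bytes
bits = "".join(f"{byte:08b}" for byte in encoded)   # what actually travels
received = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
decoded = received.decode("utf-8")                  # bytes -> symbols again

print(decoded)
print("Meaning preserved?", decoded == message)     # True
```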

In the case of the text message, we can see a few characteristics that are necessary for our interpretation of the signal received:

  • Symbols: it can be the alphabet or images (such as emojis) to help us ‘decipher’ the message.
  • A visual representation of those symbols that is appropriate for the receiver. For example, it has to resemble the symbolic structure as closely as possible: the size of the font has to be adequate for reading, and the shape and order in which the symbols appear have to make sense in whatever language the message is communicated in.
  • It has to follow the rules of the symbolic structure that it is transmitting in order to be interpreted. In the case of the text message, it has to follow the rules of reading and writing.

What is interesting is that this process of encoding the cognitive symbolic structure into a digital signal, which is then sent and decoded back into the same cognitive symbolic structure, is invisible to us; we can’t see this “semiotic envelope” (Irvine, Introduction to the Technical Theory of Information, p. 7). We cannot see it, but it’s happening right there in our hands. It is black-boxed. Why? It seems to me that, as we’ve mentioned before in class, this is a case of “I don’t care how it works, only that it works”.

 

References:

  • (fig. 1) Darrell Etherington, "Slated iOS 8 Keyboard Translates Your Text Messages to Other Languages in Real Time," TechCrunch, extracted October 18, 2017, https://techcrunch.com/2014/11/06/slated-ios-8-keyboard-translates-your-text-messages-to-other-languages-in-real-time/
  • Martin Irvine, Intro to Information Theory in Meaning Systems (extracted on October 16, 2017).
  • (fig. 2) Martin Irvine, Introduction to the Technical Theory of Information (extracted on October 16, 2017).

Affordances, Constraints and Conventions of Books

This week we revisit the concepts of affordances and interfaces, but this time we go deeper into analyzing them.

Previously, in Universal Principles of Design, we learned the concept of affordances as "a property in which the physical characteristics of an object or environment influence its function". Along the same lines, in Affordances, Kaptelinin cites Gibson's definition: "action possibilities provided to the actor by the environment". However, there's more to affordances and interactions.

After the concept of affordances in design was popularized by Norman in The Psychology of Everyday Things, designers in the field embraced it, often labeling almost any design feature an added "affordance". Later on, Norman made some distinctions to clarify what he meant by affordances and what he considered "perceived affordances", constraints and conventions.

Just as an affordance is an intrinsic, almost intuitive way of using or interacting with something, constraints are the exact opposite: clear and obvious ways in which an artifact is not meant to be used or interacted with. The design of an artifact is considered "good" when it enables or affords certain uses and actions while at the same time disabling or closing off others (Irvine, Introduction to Affordances and Interfaces).

However, in Affordance, Conventions, and Design, Norman expresses that "an essential part of making designs intuitive has to do with perception," but he also goes on to say that the majority of what designers have been calling "affordances" are really "cultural conventions" or visual feedback. We learn how to use things not only by responding to features during our interaction with the object, but also through the normative, practiced, social use of said object: by convention.

To further clarify, Norman says that "a convention is a constraint in that it prohibits some activities and encourages others. Physical constraints make some actions impossible: there is no way to ignore them. Logical and cultural constraints are weaker in the sense that they can be violated or ignored… They are not arbitrary: they evolve, they require a community of practice. They are slow to be adopted and, once adopted, slow to go away. They are real constraints on our behavior."

Now, let’s try to apply the principles of affordances, constraints and convention to the book.

Fig. 1. Source: University of Michigan Library.

Physical or real affordances:

  • It is scaled to be handled by humans; its pages can be easily turned using our hands.
  • It has pages, instead of one long scroll, making it more efficient to continue reading without extra, impractical handling.
  • It is portable.
  • It is easy to locally store.

Constraints:

  • Since it is scaled for individual human handling, it does not afford practical real-time use by a large group of people at the same time; it works best for the individual holding it.
  • There is only one practical way of opening it: by separating its sides.
  • Depending on the materials used, it can be either light but easy to damage, or durable but heavy.

Convention and cultural environment:

  • The book can be opened in only one way, but it doesn't necessarily have to be read in one way. The direction in which we turn the pages is a convention. By practiced, normative use, in Western cultures we turn pages from right to left and read from left to right, but in other cultures it's the opposite. This is an example of a convention that became a constraint on our behavior, and therefore we expect it in future versions of this artifact.
  • The way we store books could also be considered a convention. The physical affordances of the book allow it to be stored in any position, but through the use of symbols, letters and systems like alphabetical filing, we store it in a certain position and order.

 

 

References:

  • Donald A. Norman, The Psychology of Everyday Things.
  • Donald A. Norman, "Affordance, Conventions, and Design."
  • Victor Kaptelinin, "Affordances."
  • William Lidwell, Kritina Holden & Jill Butler, Universal Principles of Design.
  • Martin Irvine, Introduction to Affordances and Interfaces.

Napster and the Revolution of the Music Industry

Right at the end of the century, when so many other fields were beginning to experience a 'technological revolution', Napster made its appearance, disrupting the music business and the way artists release and produce music.

Napster (source: Google)

Napster was P2P (peer-to-peer) file-sharing software, much like the torrent clients that came later, but it specialized in mp3 files. It wasn't the only one either; you could later find others such as LimeWire and Ares. But it was definitely the pioneer, not only in the sharing of digital music files but also as the starting point for a massive change in the field, including legal battles, copyright infringement claims, record label operations, music marketing, and even the way artists produced and released their music, ultimately leading to the continuous decline in physical album sales and in the production of albums (as a compendium of songs, or as a concept).
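
To picture what 'P2P' meant in Napster's case: Napster kept a central index of which users were sharing which songs, and the mp3 files themselves were then transferred directly between peers. Here is a very simplified sketch of that idea (my own toy model, not Napster's actual protocol; the peer names and songs are made up):

```python
# Napster-style sharing in miniature: a central index answers searches,
# while the files move directly from peer to peer.

central_index = {}   # song title -> list of peers sharing it

def share(peer, songs):
    """A peer announces the mp3 files it is willing to share."""
    for song in songs:
        central_index.setdefault(song, []).append(peer)

def search(song):
    """Ask the central server which peers have the song."""
    return central_index.get(song, [])

def download(song, requester):
    """The transfer itself happens peer to peer, not through the server."""
    peers = search(song)
    if not peers:
        return f"{requester}: '{song}' not found"
    return f"{requester} downloads '{song}' directly from {peers[0]}"

share("alice", ["Song A.mp3", "Song B.mp3"])
share("bob", ["Song B.mp3"])
print(download("Song B.mp3", "carol"))   # carol downloads 'Song B.mp3' directly from alice
```

That central index is also part of why Napster was such an easy legal target: shutting down the central servers was enough to take the whole shared catalog offline.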

Digital music files existed before Napster; the technology was there. The cultural power of music for society is undeniable, so the need to make it more accessible was there too. The combination of these two factors resulted in an irrevocable change in the music industry.

If we take into consideration the concept of mediation, we can analyze the phenomenon of Napster both in terms of what caused it and what it caused: how the interaction between music and P2P software drastically changed how music is listened to, stored, produced, released, shared and bought.

Screenshot of Napster (source: Google)

In the Intro to Media and Technical Mediation video, Dr. Irvine uses the example of the book as an artifact to demonstrate how "technical affordances are important but don't 'cause' the 'effects' we attribute to a medium." This idea can be applied to both digital music files and Napster as software. The social functions of music are not embedded in the medium that 'transports' it, which is why music can be digitized and, in fact, becomes even more accessible because of this digitization.

But this is not an autonomous and isolated process. Napster didn't happen by accident, or because the technology that made it possible was inevitable. It happened because the social and cultural function of music not only pushed but enabled this evolution.

Napster's presence disrupted many fields, which now had to approach this new scenario from a completely different point of view, and we, as a society, rethought our interaction with music.

Debray mentions, in the article What Is Mediology?, that "The mediologists are interested in the effects of the cultural structuring of a technical innovation (writing, printing, digital technology, but also the telegraph, the bicycle, or photography), or, in the opposite direction, in the technical bases of a social or cultural development (science, religion, or movement of ideas)."

Taking this into consideration, what were the effects of cultural structuring behind Napster? What was the cultural development caused by Napster?

To answer the latter: both record labels and artists engaged in grueling legal battles against Napster and similar services over copyright infringement, which in turn disrupted the fields of music law, patents and trademarks. Labels stopped relying solely on album sales and started investing more in visual marketing, such as music videos, in a fruitless fight against the drastic decline in sales. Artists stopped relying on album sales and started focusing more on tour revenue, which led to a trend of long, epic world tours in big stadiums, with increasing use of technology to create a more impactful concert experience, along with high ticket prices that continue to this day (e.g., U2 and Madonna, among others). And it is impossible to deny that all of these events led to music streaming.

One of the changes that most intrigues and saddens me at the same time is the slow disappearance of the concept of the 'album' from many artists' catalogues. Producing and releasing 'albums' or 'concept albums' is not as profitable as before. Audiences are more interested in singles or collaborations, which leads to the majority of artists constantly releasing songs in search of a hit, but rarely with any concept of artistry behind them. This is not an absolute phenomenon; there are still many artists producing and releasing albums. But the majority don't, which is a drastic change from 20 or 25 years ago, when almost every artist released major bodies of work instead of isolated, unrelated songs.

I recently bought a ticket to see one of my favorite bands in concert, one that still releases albums and not just singles. This band fills stadiums and headlines festivals, and the tickets I bought were ridiculously overpriced. Along with my ticket, I received a physical CD by mail, which I thought was a pretty cool gesture considering the price of the ticket. When I received it, I realized that not only do I no longer have a device to play it (except my laptop), but I also don't need to play it: I stream all my music now. As a result, I didn't even tear the plastic protecting the CD. It now sits on my desk as a quirky decoration or souvenir, and I often find myself wondering, "will this be worth a lot of money 20 years from now as a relic?", when we no longer use CDs and collectors might pay a lot of money for an album released in the dying age of the CD, still in its wrap, much like a collector's toy.

References:

  • Régis Debray, "What Is Mediology?"
  • Martin Irvine, Intro to Media and Technical Mediation (video).
  • Images and screenshots: Google.

Interaction with cognitive artifacts

During the introductory video of “Symbolic Cognition & Cognitive Technologies”, Dr. Irvine expressed “cognitive technologies now generate accelerated technical advances through continued combinations… Now we are facing the limitations of understanding and making sense of the massive accumulation of data that we are generating”.

He also poses the question “How did we get here? What is it about human symbolic faculty and the structure of symbolic systems that help us develop cognitive technologies?”

There are specific characteristics of symbols and symbolic cognition that not only make them a good fit for the development of cognitive technologies but, more importantly, enable said technologies and their future advancement.

Let's look at GPS and how we interact with it, trying to apply the characteristics mentioned by Dr. Irvine and the basic principles of cognitive artifacts expressed by Michael Cole in On Cognitive Artifacts.

Screenshot of Google Maps, Georgetown University.

What makes GPS a cognitive artifact, and what kind of cognitive artifact is it? First we need to explain why it was created and why we need to use it.

First, Cole talks about "Mediation Through Artifacts" (p. 108) to explain that human psychological processes emerged simultaneously with the behavior of modifying material objects to regulate our interactions with the world and one another. Basically, the process of making tools is intrinsically linked to a behavior, a way of thinking, that guides us to manipulate and shape our "tools," not only to create more tools but also to shape the way we communicate as a society and the way we interact with said tools.

Creating tools is the tool of tools. Cole cites C.H. Judd (p. 109) to explain this new "mindset" clearly: "[man]… does not develop more skill in the use of claws or teeth to cope with the environment. He has adopted an indirect mode of action. He uses instruments."

Scene from “2001: A Space Odyssey” by Stanley Kubrick (1968). It illustrates the jump in human symbolic cognition: from making tools to technology.

If we think about GPS, it is very clear that it is a cognitive technology that combines different previous technologies, previous 'artifacts'. In a way, the combinatoriality of technologies is due to our symbolic cognition: we produce tools in such a way that the possibility of improving them, creating more tools, or combining them into a bigger system is ingrained in them.

Second, Cole talks about "Historical Development" (p. 109) and says that "in addition to using and making tools, human beings arrange for the rediscovery of the already-created tools in each succeeding generation". GPS couldn't exist if the technologies combined in its system hadn't been designed in a way that let them create new symbols, interpret them and create new systems, which ultimately allowed them to be combined into GPS. Those previous technologies were also designed in a way that allowed future generations to improve them, to find new ways to put them together and link new things to them. As Cole puts it, this is "becoming a cultural being and arranging for others to become cultural beings".

Third, he mentions "Practical Activity" (p. 110): "the analysis of human psychological functions must be grounded in human's everyday activities". This one is very self-explanatory when it comes to GPS. Not only do we use it and interact with it every day, but the advances that can be made on it are driven entirely by humans' everyday interactions with it. Everyday use is not only the driving force, it's the purpose.

Cole also goes on to divide artifacts into three levels (p. 121):

  • Primary artifacts: those directly used in production (e.g., words, written documents, telecommunications).
  • Secondary artifacts: representations of primary artifacts; they preserve and transmit modes of action and belief (e.g., recipes, norms, constitutions).
  • Tertiary artifacts: artifacts that can shape the way we see the actual world.

I believe GPS falls in between the secondary and tertiary categories. In a way, GPS is a representation of a primary artifact, the map, and transmits the same modes of action that a map does. But GPS is more than just a map. The characteristics of the different combinatorial technologies that are part of GPS allow for a different kind of interaction, one that shapes the way we see and interact with the world more profoundly than a map does.

For example, we can see real-time changes on the map, such as traffic, events and classifications of nearby places. It can also re-route you taking these conditions into consideration, and it can predict an ETA depending on the time of day and previously stored information about the conditions of your route. You can also customize your interaction with it in many different ways. And let's not forget that it has built-in functions that are more capable of self-improvement and analysis than any of the previous technologies combined in it.
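
As a small illustration of that last point, here is a hypothetical sketch of how a navigation system might estimate an ETA from stored historical speeds for different times of day (the segments, speed values and function names are invented for illustration; this is not how Google Maps actually computes routes):

```python
from datetime import datetime

# Hypothetical historical average speeds (km/h) per route segment,
# keyed by hour of day. Real systems combine live traffic, incidents,
# road types and much more.
HISTORICAL_SPEEDS = {
    "M St NW":    {8: 20, 13: 35, 18: 15},
    "Key Bridge": {8: 25, 13: 45, 18: 20},
    "GW Parkway": {8: 50, 13: 80, 18: 40},
}

def estimate_eta(route, departure):
    """Estimated travel time in minutes, using the stored average speed
    closest to the departure hour for each segment."""
    total_hours = 0.0
    for segment, length_km in route:
        speeds = HISTORICAL_SPEEDS[segment]
        hour = min(speeds, key=lambda h: abs(h - departure.hour))  # nearest stored hour
        total_hours += length_km / speeds[hour]
    return total_hours * 60

route = [("M St NW", 1.5), ("Key Bridge", 0.5), ("GW Parkway", 10.0)]
print(round(estimate_eta(route, datetime(2017, 10, 16, 18, 0)), 1), "minutes")  # rush hour
```

Even this toy version shows the combinatorial point: a map (the segments), stored data about collective behavior (the historical speeds) and a clock all have to be combined before the system can tell us anything useful.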

 

References:

  • Martin Irvine, Cognitive Technologies and Symbolic Cognition (from the "Key Concepts in Technology" course).
  • Michael Cole, On Cognitive Artifacts, from Cultural Psychology: A Once and Future Discipline. Cambridge, MA: Harvard University Press, 1996. Connected excerpts.
  • 2001: A Space Odyssey. Dir. Stanley Kubrick, MGM, 1968. (Video extracted from YouTube on 09/27/2017: https://www.youtube.com/watch?v=vQdO9TUPF9g)
  • Screenshot of Google Maps, Georgetown University, taken by me.

My mom and the iPhone: a love story

When we think about what we've read so far on the concepts of modularity and modular design, we can get overwhelmed by how much thinking happens before actually doing or designing something. Of course, from last week's readings we learned that most technological development happens in the mind before it "comes to life" as a combination of different technologies or systems. But when I say a lot of "thinking" before "doing," I don't mean it only in the explicit sense; I'm alluding to the amount of self-awareness, and awareness of the others who might interact with what you're doing.

But how does a modular system work?

In Modularity in Technology and Organization (2002), Langlois briefly describes three key components of modularity (a short code sketch after this list illustrates them):

  • Design architecture: what is part of the module and what its function is;
  • Interfaces for connecting modules: how they interact, fit together and communicate; and
  • Standards: the design rules used to measure a module and compare its performance to other modules.
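
To make these three components a bit more concrete, here is a minimal Python sketch (my own illustration, not an example from Langlois): the abstract class plays the role of the interface, each class that implements it is a module within the design architecture, and a shared test acts as a standard for comparing modules.

```python
from abc import ABC, abstractmethod

# Interface: the agreed way any storage module must connect and communicate.
class Storage(ABC):
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

# Two interchangeable modules in the design architecture.
class InMemoryStorage(Storage):
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

class FileStorage(Storage):
    def save(self, key, value):
        with open(f"{key}.txt", "w") as f:
            f.write(value)
    def load(self, key):
        with open(f"{key}.txt") as f:
            return f.read()

# Standard: the same check applies to any module that respects the interface,
# so modules can be measured and swapped without touching the rest of the system.
def meets_standard(storage: Storage) -> bool:
    storage.save("greeting", "hello")
    return storage.load("greeting") == "hello"

print(meets_standard(InMemoryStorage()), meets_standard(FileStorage()))
```

The practical payoff, which is Langlois's larger point, is that as long as the interface and the standard hold, the module behind them can change or be replaced without the user having to care, which connects directly to the "it just works" experience described below.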

For this particular analysis I will focus on interfaces in relation to the concepts of affordances and mental models expressed in Universal Principles of Design (2003) by Lidwell, Holden and Butler to illustrate how my mom’s relationship with technology changed drastically after the iPhone.

Not an actual picture of my mom.

These concepts stuck with me and shed light on user interaction. First, I have to set the context. My mom is a 64-year-old lawyer who has never had a good relationship with technology. She has somewhat adapted through the years, but it has been a slow process: from typewriters to computers, from actual letters to emails, from photocopies to scanning, and then the sorcery that is the internet. Overall she gets there, just a few years later than everybody else, and interacting at least 50% less than everybody else. For many years she went through a very long and very varied selection of mobile phones, and she struggled through them all. Frustrated, she would realize that, just as she was getting used to one, something better, faster and smarter was already making its way into her hands, thus starting the process all over again. This was the pattern until, reluctantly, she succumbed to my pressure and got an iPhone.

As the "technological" person in the house (a role that basically consisted of turning things off and on again), I thought I would have to give my mom the usual crash course of "how do you use this thing again? and where are my contacts?" like I had many times before. To my surprise, I barely had to guide her because, as she looked at it, she instinctively knew where to go and how to do most things. Not only that, but the need to use most of the smartphone's functions to communicate with family abroad pushed her to learn, by herself, how to do it. In a very short time she was using WhatsApp efficiently, FaceTiming, and sending voice notes, videos and pictures, and I kept hearing "do you think there's an app for…?". I believe this is related to the concepts of affordances and mental models.

In Universal Principles of Design, affordance is defined as "a property in which the physical characteristics of an object or environment influence its function" (p. 20), and the authors give a specific example that goes beyond physical design. They mention that depicting common physical objects on a screen, such as buttons, folders and trashcans, helps us associate their functions with those of the real world. Which made me wonder: what are the affordances in the design of the icons, buttons and other graphic elements in iOS that make it so easy for my mom to understand their intended function and not use them improperly?

I’m old enough to know these are floppy disks and to have used them.

The "save" button.
Side question: what happens when the design of an icon no longer evokes the real-world object it used to? For example, the "save" button is a floppy disk. Most people younger than me have never used one, or don't even know what it is, so the icon is recognized simply as the "save" button.

But going back to my mom's enlightenment with iOS. Besides the concept of affordances, I also see a connection with the concept of mental models. In Universal Principles of Design the authors express that "people understand and interact with systems and environments based on mental representations developed from experience" (p. 131), and they make a clear distinction between how a system works and how people interact with it. They even affirm that, most of the time, designers know a lot about how a system works but little about how people interact with it, while users know very little, and sometimes inaccurate, things about the system but, through use and experience, are able to attain better interaction models than the designers.

Based on this description and my mom's experience, it would be safe to say that both kinds of mental models were very well thought out and applied in the design of the iPhone and iOS. At the end of the day, as was mentioned last week, my mom doesn't care about how it works, only that it works. In this case it doesn't just work; it works easily and efficiently for her. I thought about this while reading Modularity in Technology and Organization, in which Langlois illustrates the benefits and costs of both decomposable and non-decomposable systems in relation to interdependence. Langlois cites Alexander, who was referring to architecture and urban design when he described "the most attractive and durable systems" as those that develop "through an unselfconscious process" (p. 23). I might be wrong, but I think this could be applied to my mom's interaction with iOS.

Who taught her how to use that?

I'm not saying this particular characteristic pertains solely to iOS or Apple. We see it with almost every device and operating system. We describe this experience as "user friendly," and we see this interaction in everyone: from our technology-allergic parents and grandparents to toddlers ordering things from Amazon. However, it's interesting to think about this bridge between not knowing how something works and knowing perfectly well how to interact with it.

 

References:

  • Lidwell, W., Holden, K., & Butler, J. (2003). Universal Principles of Design. Rockport.
  • Langlois, R. (2002). Modularity in technology and organization. Journal of Economic Behavior & Organization. Vol. 49.
  • Images: stock photos. Google.

 

Things I still don’t understand very well: layers, symbolic abstraction and hierarchy.