Author Archives: Linda Bardha

The Internet as we don’t know it

CCTP 820: Leading by Design: Principles of Technical and Social Systems – Fall 2017

By Linda Bardha

“Having the world at your fingertips”


Today we are globally connected with each other. We have the opportunity to learn, chat, and share information, all from the convenience of our homes. Just by having an electronic device (computer, phone, tablet) connected to the Internet, we can have, as they say, the world at our fingertips. But how are we connected to the Internet? What is the history of the Internet? What design principles and methods were used to design the Internet’s architecture? Who were the people, and what were the factors, that played an important role in designing the Internet we use today? How did we go from the Internet to the Web? Having a Bachelor’s degree in Computer Science, it was my curiosity that led me to explore this topic and try to answer some questions I have had for a while. As a programmer, I have created websites using different languages and technologies such as HTML, CSS, JavaScript, Python, and SQL. This experience, from a back-end developer’s point of view, helped me understand that “the invisible” part, what we cannot physically see, is so much more powerful than “the visible” part.

The Internet is something we use every day, but it is not just a singular artefact. As a consumer society, we rarely worry about how something works, and the Internet in particular is such a delicate case. Since we cannot physically see all the connections that happen when information is transmitted from one point to another, we sometimes use the word “magic” to make up for the lack of knowledge. The Internet is a complex socio-technical system that has changed the world we live in, and trying to understand the history and the design principles behind it will help us better understand what it means to be connected to the Internet.


“The Internet has always been, and always will be, a magic box.” (Marc Andreessen)

There is something about this quote that just doesn’t seem right. I will admit it: I used to think like Mr. Andreessen. The Internet, “this thing” that we use every day, is such a strange concept, and because we cannot physically see how it actually works, we don’t think twice before using the word “magic” to justify the work that is done in the background. But the Internet is not an isolated box, and it certainly isn’t a magical box. The Internet isn’t just a thing. It is a system of distributed agencies and technical mediations that has changed the world we live in. The Internet is a product of its social environments, and it has shaped the characteristics of communication media and information technology.

Major historical events led to the creation of new technologies and developments. The Internet was born as a historical accident, and different agents played significant roles in creating what it is today. The question we need to ask about the Internet is not about inventions or inventors, since, as we will see, a whole group of people, organizations and technologies shaped the Internet we use today. The better question concerns the design principles and the combinations of methods and technologies that led to the architecture we use now. First, let’s take a look at the historical events that shaped the Internet.

The history of the internet timeline

As Janet Abbate explains in her book “Inventing the Internet”, the Internet and its predecessor, the ARPANET, were created by the US Department of Defense’s Advanced Research Projects Agency (ARPA), a small agency that has been deeply involved in the development of computer science in the United States. In 1957, the USSR launched Sputnik into space, and with it the race for global communications. In 1958, the United States government created ARPA in response to the Sputnik launch. The ARPANET was a single network that connected a few dozen sites, which computer scientists used to trade files and information. Joseph C. R. Licklider was the first director of ARPA’s Information Processing Techniques Office (IPTO). Licklider’s influential 1960 paper “Man-Computer Symbiosis” became an important document arguing that computer science and technology should serve the needs and aspirations of the human user, rather than forcing the user to adapt to the machine. Licklider (1960, pp. 4–5) wrote:

“The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today. . . . Those years should be intellectually the most creative and exciting in the history of mankind.”

Another important figure who needs to be mentioned is Robert Taylor, a systems engineer from the aerospace industry. He believed that if ARPA’s computers could be linked together, hardware, software and data could be efficiently pooled among contractors rather than wastefully duplicated (Abbate, 2000). In 1966 Taylor recruited Lawrence Roberts, a program manager at MIT’s Lincoln Laboratory, to oversee the development of the ARPANET. A group of computer scientists worked constantly on system-building strategies that used effective design principles such as layering and modularization. These two principles, characteristic of the ARPANET, later became successful models for the architecture of the Internet. In 1967, Lawrence Roberts led ARPANET’s design discussions and published the first ARPANET design paper, “Multiple Computer Networks and Intercomputer Communication”. Wesley Clark suggested that the network be managed by interconnected “Interface Message Processors” placed in front of the major computers. Called IMPs, they evolved into today’s routers.

One of the major problems that the engineers and computer scientists were trying to solve while working on the ARPANET was designing a network that could allow any kind of computer to exchange data over a common network with no single point of failure. The concept of switching small blocks of data was first explored independently by Paul Baran. He described a general architecture for a large-scale distributed network. The main focus of his idea was a decentralized network, in which a message could be successfully delivered between any two points over multiple paths. The message would be divided into blocks and then reassembled when it reached its destination. In 1961, Leonard Kleinrock introduced the packet-switching concept in his MIT doctoral thesis on queuing theory, “Information Flow in Large Communication Nets”. A host computer at Kleinrock’s UCLA lab became the first node of the ARPANET in September 1969, and the first message passed over the network, from UCLA to the Stanford Research Institute, that October.

An animation demonstrating data packet switching across a network.

First, the TCP protocol breaks data into packets or blocks. Then, the packets travel from router to router over the Internet using different paths, according to the IP protocol. Lastly, the TCP protocol reassembles the packets into the original whole, and that’s how the message is delivered.
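The split-route-reassemble idea described above can be sketched in a few lines of Python. This is a toy illustration, not real TCP: the function names and packet sizes are invented, and the only point it demonstrates is that numbered packets can arrive in any order and still be rebuilt into the original message.

```python
import random

def to_packets(message: str, size: int = 8):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the original message by sorting on sequence number."""
    return "".join(chunk for _, chunk in sorted(packets))

message = "Hello from one end of the network to the other!"
packets = to_packets(message)
random.shuffle(packets)  # packets may take different paths and arrive out of order
print(reassemble(packets) == message)  # True
```

Real TCP does much more (acknowledgements, retransmission, flow control), but the sequence-number-and-reassemble core is the same.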

Working on this idea were two other scientists, Vint Cerf and Bob Kahn. According to Kahn, the word “Internet” comes from “inter-networking”. In the late 1960s, he faced the problem of three communication networks that did not connect to each other. He worked with Vint Cerf to solve the problem. They invented an internetworking protocol for sharing information using the packet-switching method. The Transmission Control Protocol (TCP) is the main protocol of the Internet Protocol (IP) suite, and many applications that share information over the Internet rely on it. Packet switching over common data protocols thus provided the solution to the problem I mentioned earlier: any computer could now exchange data over a common network with no single point of failure. This was a major invention of the ARPANET research, and it remains one of the core concepts of the networks connected on the Internet today.

In order to get a better idea of how messages are sent from one point to another, let us take a look at this video, in which Spotify engineer Lynn Root and Internet pioneer Vint Cerf explain what keeps the Internet running and how information is broken down into packets.

While the ARPANET was a single network that connected a few dozen sites, the Internet is a system of many interconnected networks, capable of indefinite expansion. At the start of the 1980s, the Internet was still under military control (Abbate, 2000). But then it shifted from military control to academic research. In 1983, the US Department of Defense split the ARPANET into MILNET, a military network, and ARPANET, which became a civilian research network. This division made it possible for scientists and organizations from around the world to do research and explore the possibilities of designing the Internet architecture we have today.

While this research was still going on, Paul Mockapetris invented a way to divide the Internet’s name space into smaller domains: the Domain Name System (DNS). Six large top-level domains were created to represent different types of network sites: edu (education), gov (government), mil (military), com (commercial), org (organization) and net (network). This division helped categorize different types of networks and made expansion possible. Beneath the top-level domains sit further categories, so under edu each university has its own domain, and so on.
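The hierarchy described above can be pictured as a tree that is walked from the top-level domain downward. Here is a minimal sketch of that idea; the tree, the names, and the addresses (drawn from the reserved documentation range 192.0.2.0/24) are invented for illustration, and real DNS resolution involves queries to many distributed servers rather than one in-memory dictionary.

```python
# A toy name hierarchy: top-level domain -> organization -> host -> address.
DNS_TREE = {
    "edu": {"example": {"cs": "192.0.2.10", "math": "192.0.2.11"}},
    "com": {"example": {"www": "192.0.2.20"}},
}

def resolve(name: str) -> str:
    """Walk the tree from the top-level domain down to a host address."""
    node = DNS_TREE
    for label in reversed(name.split(".")):  # "cs.example.edu" -> edu, example, cs
        node = node[label]
    return node

print(resolve("cs.example.edu"))  # 192.0.2.10
```

Reading the name right to left, from the most general label to the most specific, is exactly what makes the system expandable: a new university only needs a new branch under edu.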

ARPA helped fund the research that created a new generation of technologies for inter-networking: the concept of packet switching, the development of the Transmission Control Protocol and Internet Protocol (TCP/IP), and the concept of the “network of networks”.

So yes, the Department of Defense and the military programs that funded the research shaped the history of the Internet that we use today. But that is only one part of the big story. To make global connections possible, distributed agencies known collectively as the “Internet Ecosystem” help develop the Internet and keep those connections working. It is important to mention organizations such as the International Organization for Standardization (ISO), the Internet Engineering Task Force (IETF), the World Wide Web Consortium (W3C), the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Assigned Numbers Authority (IANA), and the Regional Internet Registries (RIRs). There are policy and decision makers that regulate cross-border communications, there are international agreements, there are vendors that provide network infrastructure, and there are Internet users and educators who use the Internet to communicate, teach and build new technologies. My point is that the Internet is not a thing, a company or a product. The Internet is a global system of distributed agencies and technical mediations that makes it possible to link networking devices worldwide.

The Internet Architecture and the Design Principles

After learning the history and the events that happened, it is important to also try to “de-blackbox” the design principles and the architecture of the Internet. As Irvine explains in his article “The Internet: Design Principles and Extensible Futures”, there are three main design principles that make it possible for us to use the Internet in so many ways today: Extensibility, Scalability and Interoperability.

Extensibility is the idea that an implementation can grow: extending a system means adding new functionality or modifying parts of the system without changing its overall behavior.

Scalability is the ability of a program or a system to run effectively, even when it is changed in size or volume.

Interoperability is the ability of a system to exchange information and communicate with other systems.

Irvine highlights the Internet as “the mother of all case studies”. The Internet is a modular system; it is a complex socio-technical system with a cumulative combinatorial design and an open architecture.

Modularity is a design principle in which the components of a system are highly independent (Schewick, 2010). This means that there are minimal dependencies among the components of a system, so you can change certain modules without affecting the whole. This design principle reduces the complexity of a system.

As Schewick explains, layering is a special form of modularity. In a layered system, modules are organized in layers that constrain the dependencies between them. The architecture of the Internet is based on a layering principle called relaxed layering with a portability layer. By basing design on increasing levels of abstraction, layering greatly reduces complexity.

Variants of the layering principle (Schewick)
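One way to see layering concretely is as encapsulation: each layer only talks to the layer directly below it, wrapping the data in its own header. The sketch below is a deliberately simplified stand-in, with made-up header fields rather than real protocol formats, but it shows why modules in one layer never need to know about the headers of the layers beneath them.

```python
# Each "layer" wraps the payload from the layer above with its own header.
def application_layer(data):
    return {"http": "GET /", "payload": data}

def transport_layer(segment):
    return {"tcp": {"src_port": 5000, "dst_port": 80}, "payload": segment}

def internet_layer(packet):
    return {"ip": {"src": "10.0.0.1", "dst": "10.0.0.2"}, "payload": packet}

# Sending: wrap top-down, application -> transport -> internet.
frame = internet_layer(transport_layer(application_layer("hello")))

# Receiving: unwrap bottom-up; the application never sees the lower headers.
print(frame["payload"]["payload"]["payload"])  # hello
```

Because each layer touches only its own header, a module in one layer can be replaced without changing the others, which is exactly the dependency constraint Schewick describes.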

As Irvine suggests, it is somewhat easier to manage the technical layers of how networks are connected, since that is a complex but manageable engineering problem; it is much harder to untangle the international political-economic issues between countries, conflicts over ownership of network infrastructure, agreements on standards, and control of content. Exactly these issues make the Internet a complex socio-technical system.

The Internet was built and designed in an open environment, where different communities and researchers from all over the world worked on the prototype. This was possible because of what has come to be called the Internet’s “open architecture”. Ronda Hauben, in her entry in the “Encyclopedia of Computers and Computer History”, explains what open architecture means:

“Open architecture…describes the structure of the Internet, which is built on standard interfaces, protocols, a basic data format, and a uniform identifier or addressing mechanism. All the information needed regarding the interconnection aspects is publicly available. In the case of networks, the challenge in designing an open architecture system is to provide local autonomy, the possibility of interconnecting heterogeneous systems, and communication across uniform interfaces. Providing a basic format for data and a common addressing mechanism makes possible data transmission across the boundaries of dissimilar networks.”

Having an idea of the design principles and the architecture of the Internet, we need to try to understand how different agents are connected and work together in transmitting information. The Internet is a system that includes everything from the cables that carry information to routers, modems, servers, cell phone towers and satellites, all interconnected, transmitting information using the Internet protocols.

When you send a message from your computer to a friend using the Internet as the means of communication, that message is divided into packets, as we saw earlier. The packets find different paths from the modem to the router, to the Domain Name System and then to the appropriate web server, following the Internet protocols, and at the destination the packets are reassembled into the original whole; that is how your friend receives the message. There is a trade-off between complexity and performance in these design principles, but the end goal of this architecture is an effective flow of information, the transmission of data packets from one end to the other.

The Internet and the Web

When we use the terms “the Internet” and “the Web”, we usually refer to the same thing. There is a distinction between the two, however, and it is important to know the difference. As explained earlier, the Internet is a system of interconnected computer networks that use TCP/IP to link networking devices worldwide. The Web, on the other hand, is a system of web pages and sites that use the Internet to share their information. It was Tim Berners-Lee who invented the Web in 1989. In his book “Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web”, Berners-Lee explains that when he thought of the Web, he envisioned “a space in which anything could be linked to anything”. He wanted this to be a single, global information space. Every piece of information would be labeled and have an address, and by referencing this information the computer could represent associations between things; all of this would be an open space for everyone to use and share. The Web is a protocol layer that works over the architecture of the Internet. It is based on different standards and protocols, including data standards, network services and the HTTP protocol. There are different layers that make up the web architecture.

Understanding web architecture (Petri Kainulainen)


The three layers of every web page (Alex Walker)
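The phrase “the Web is a protocol layer over the Internet” becomes very concrete when you look at what a browser actually sends: an HTTP request is just structured text handed to a TCP connection. The sketch below builds such a request by hand; the host name is the reserved example domain, and the commented-out socket lines show how the text would travel over TCP, as an illustration rather than a live connection.

```python
# An HTTP/1.1 request is plain text with a strict line format (CRLF endings).
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"  # blank line marks the end of the headers
)

# With a live connection, this text would be sent over TCP, e.g.:
#   import socket
#   s = socket.create_connection(("www.example.com", 80))
#   s.sendall(request.encode())
# The server would answer with a status line, headers, and the HTML payload.

print(request.splitlines()[0])  # GET /index.html HTTP/1.1
```

Everything below this layer, packets, routing, reassembly, is handled by TCP/IP; the Web only needs to define the text format and the meaning of the exchange.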


The Internet that we use today was born as a historical accident, and different agents (government research, private research, university research) played significant roles in designing its architecture. ARPA helped fund the research that created a new generation of technologies for inter-networking: the concept of packet switching, the development of TCP/IP, and the concept of the “network of networks”. The packet-switching method, with common data protocols for any computer, provided the solution to one of the main challenges researchers were trying to solve: designing a network that could allow any kind of computer to exchange data over a common network with no single point of failure. Internet development converged with the PC industry. In the beginning, computers were seen as single artefacts that could perform calculations quickly. Now their purpose has changed and evolved, and network chips are standard equipment in every PC so that it can connect to the network. Two main dimensions made the Internet successful: first, its design principles, a modular architecture based on open standards; and second, the fact that computer networks are mediators of a larger network of social, political and economic factors. The Internet is a complex socio-technical system with a cumulative combinatorial design and an open architecture built on TCP/IP. To make global connections possible, there is a whole “Internet ecosystem” that helps regulate international agreements and standards, so the architecture of the Internet is open, and because of that it is not owned or controlled by a specific group with dominant power over the others.
There are a lot of recent debates concerning “the future of the Internet”, and we are hearing a lot about net neutrality, the issue between corporations that own network infrastructure and those that own access to content and media services. By understanding the design principles and the architecture of the Internet, we can be part of these discussions and find ways to combine different technologies so that the principles of Extensibility, Scalability and Interoperability help us stay globally connected in an open environment. That is why it is important not to think about the Internet as just “a thing” or “magic”. Rather, it is a complex system that we are all part of, and the more we know and understand its history, design principles and architecture, the more we can help develop and design the future of the Internet.


Abbate, Janet. Inventing the Internet. Cambridge, MA: The MIT Press, 2000.

Andrews, Evan. Who Invented the Internet? History Stories, December 2013.

ARPANET archival documentary Computer Networks: The Heralds Of Resource Sharing. Arpanet. 1972.

Baldwin, Carliss Y, and Kim B. Clark. Design Rules, Vol. 1: The Power of Modularity. Cambridge, MA: The MIT Press, 2000.

Baran, Paul. On Distributed Communications Networks. IEEE Trans. Comm. Systems, March 1964.

Bardha, Linda. The internet, this complex social-technical system. Leading By Design: Principles of Technical and Social Systems. Georgetown University. November 2017

Bardha, Linda. How does Google search bar work?. Leading By Design: Principles of Technical and Social Systems. Georgetown University. November 2017

Berners-Lee, Tim. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. New York, NY: Harper Business, 2000.

Cerf, Vint. A Brief History of Packets. IEEE Computing Conversations, 1997.

Cerf, Vint, and David Clark. A Brief History of the Internet. Internet Society, 1997.

Cerf, Vint, and R. E. Kahn. A Protocol for Packet Network Intercommunication. IEEE Trans. Comm. Tech., vol. COM-22, no. 5, pp. 627–641, May 1974.

Clark, David. The Design Philosophy of the DARPA Internet Protocols. Originally published in Proceedings SIGCOMM ‘88, Computer Communication Review Vol. 18, No. 4, August 1988

Denning, Peter J. Design Thinking. Communications of the ACM, 56, no. 12. December 2013.

Evolution of the Web. Visualization.

Hauben, Ronda. Open Architecture. Raul Rojas (ed),”Encyclopedia of Computers and Computer History”, Fitzroy Dearborn, Chicago, 2001. vol. 2 pg 592.

Hobbes, Robert Zakon. Hobbes’ Internet Timeline. An Internet timeline highlighting some of the key events and technologies that helped shape the Internet as we know it today.

Internet Society. Who Makes the Internet Work: The Internet Ecosystem. February 2014.

Irvine, Martin. Introducing Internet Design Principles and Architecture: Why Learn This?

Irvine, Martin. Intro to the Design and Architecture of the Internet. November 2014

Kleinrock, Leonard. Information Flow in Large Communication Nets. RLE Quarterly Progress Report, July 1961.

Licklider, Joseph C. R., and W. Clark. On-Line Man-Computer Communication. August 1962.

Lidwell, William, Kritina Holden, and Jill Butler. Universal Principles of Design. Revised ed. Beverly, MA: Rockport Publishers, 2010.

Roberts, Lawrence. Multiple Computer Networks and Intercomputer Communication. ACM Gatlinburg Conf., October 1967.

Schewick, Barbara van. Internet Architecture and Innovation. Cambridge, MA: The MIT Press, 2010.

Walker, Alex. The Three Layers of Every Web Page. SitePoint. Infographic. May 2014.

White, Ron. How the Internet Works. 9th ed. Que Publishing, 2007.

Zittrain, Jonathan. The Future of the Internet–And How to Stop It. New Haven, CT: Yale University Press, 2009.

How does Google search bar work?

As we learned from this week’s readings, Tim Berners-Lee explains that when he thought of the Web, he envisioned “a space in which anything could be linked to anything”. He wanted this to be a single, global information space. Every piece of information would be labeled and have an address, and by referencing this information the computer could represent associations between things; all of this could be an open space for everyone to use and share.

As Irvine explains, the Web is a protocol layer that works over the underlying technical architecture of the Internet. The Web is based on different standards and protocols, including data standards, network services and the HTTP protocol. So imagine all the different layers that make up the web architecture.

While reading about the Web, I started thinking about the Google search bar. In an article, Larry Page and Sergey Brin, the founders of Google, explain that they built a search engine that used links to determine the importance of individual pages on the World Wide Web. The engine was first called, wait for it, “BackRub”. Soon after, it was renamed Google. But how does the search bar work? Since we cannot physically see the process that happens behind this web page, a lot of it remains unclear.

We usually say that Google developed this program or created that algorithm, and sometimes we forget that behind the name is a big team of software developers, researchers, engineers, scientists and analysts who do all the work.

Back to my previous question: luckily, just by doing a Google search on how Google Search works (funny, right?), we are able to get some answers.

Google uses a special algorithm to generate search results. It uses automated programs called spiders or crawlers, which scan web pages and record the keywords on each page and the links from that page to other sites, building a large index of keywords and where those words can be found. The most important part of the process is the ranking of the results when we search for something, which determines the order in which Google displays them. Google uses a trademarked algorithm called PageRank, which assigns each page a score based on factors like the frequency and location of keywords within the page, how long the page has existed, and the number of other web pages that link to the page in question. So, if you want your web page to rank higher in the search results, you need to provide good content so that other people link back to your page; the more links your page gets, the higher its PageRank score will be.
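The core of the PageRank idea described above can be sketched in a few lines: a page’s score is spread among the pages it links to, and the process repeats until the scores settle. This is a toy version, the three-page link graph and the damping factor are illustrative, and Google’s real ranking combines PageRank with many other signals.

```python
# A tiny made-up link graph: page "a" links to "b" and "c", etc.
LINKS = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
DAMPING = 0.85  # probability of following a link vs. jumping to a random page

def pagerank(links, iterations=50):
    n = len(links)
    ranks = {page: 1.0 / n for page in links}  # start with equal scores
    for _ in range(iterations):
        new_ranks = {page: (1 - DAMPING) / n for page in links}
        for page, outgoing in links.items():
            share = DAMPING * ranks[page] / len(outgoing)
            for target in outgoing:  # each linked page receives a share
                new_ranks[target] += share
        ranks = new_ranks
    return ranks

ranks = pagerank(LINKS)
# "c" is linked by both "a" and "b", so it ends up with the highest score.
print(max(ranks, key=ranks.get))  # c
```

Intuitively, a link is a vote, and votes from highly ranked pages count for more, which is exactly why getting other sites to link back to yours raises your score.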

To make searching even better, we can do specialized searches for images, videos, maps, news articles, products, content in books, scholarly papers, etc. For these searches, Google has created specialized indexes that contain only the relevant sources.

Work cited:

Berners-Lee, Tim. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. New York, NY: Harper Business, 2000. Excerpts.

“How We Started and Where We Are Today.” Google, n.d. Web. 29 Nov. 2017.

Irvine, Martin. “Introduction to the Web.”

Strickland, Jonathan. “How Google Works.” HowStuffWorks, 20 Dec. 2006. Web.


The internet, this complex social-technical system

The Internet started as a happy accident, and now we can’t imagine our lives without it. But the Internet is not just a singular artefact, and we simply can’t refer to it as one. Rather, it is a complex system that involves a technological evolution, a social aspect, a global infrastructure and a commercialization aspect, according to “A Brief History of the Internet”. It is a communication system that somehow connects the whole world, yet it is not owned or controlled by a specific group with dominant power over the others. So the Internet is open to all of us, and there is so much we can do with it, but first let’s try to understand what the Internet is.

What’s the origin of the word, “the internet”?

According to Kahn, the origin of the word is “inter-networking”. In the late 1960s, he faced the problem of three communication networks that did not connect to each other. He worked with Vint Cerf to solve this problem, and together they created TCP/IP, the Transmission Control Protocol/Internet Protocol: a collection of methods used to connect servers on the Internet and to exchange data. TCP/IP is a universal standard for connecting to the Net.

For something that we use so much, how come it is so difficult to explain what it is and how it works?

Well, this is not a surprise. As we’ve discussed in our class, anything we cannot physically see is hard for us to understand and explain. After this week’s readings, it is much easier for me to use the correct terms and understand this complex system. The Internet is a system that includes everything from the cables that carry information to routers, modems, servers, cell phone towers and satellites, all interconnected, transmitting information using the protocols I referred to earlier.

What I find most fascinating about the Internet is its design principles and architecture. Modularity and the layering approach are two concepts we’ve discussed before, but understanding how they apply to the design of the Internet is critical. There is a trade-off between complexity and performance in these design principles, but the end goal of this architecture is an effective flow of information, the transmission of data packets from one end to the other.

This means that the development of all media, services, apps all depend on the internet architecture.

Most of the time, as a consumer society, we never worry about how something works, and the Internet in particular is such a delicate case because of the “magic” that seems to happen, since we don’t see all the network connections or how the packets carry information from one server to another. But as we all know now, there is no magic when it comes to technology, and understanding that the Internet was designed to do what it does makes us part of a system to which we can all contribute.

Building websites is something I really enjoy doing. Learning HTML, CSS, jQuery and JavaScript helped me understand how we see what we see on a website. There are a lot of website-building platforms, but creating a website from scratch was a unique experience.

I gave this example to make the point that the most powerful things are invisible: we don’t see what happens behind the “blackbox”, but trying to understand how it works and asking the right questions really helps to give meaning and makes us think about ways we can contribute. The Internet, this complex system, is full of opportunities to do just that.


Campbell-Kelly, Martin, and William Aspray. Computer: A History of the Information Machine. 3rd ed. Boulder, CO: Westview Press, 2014.

Irvine, Martin. Introducing Internet Design Principles and Architecture: Why Learn This?

White, Ron. How Computers Work. 9th ed. Excerpts from chapter “How the Internet Works.” Que Publishing, 2007.

Experiencing film based photography

Digital Media minor

During my junior year in undergrad, a new minor was offered: a minor in Digital Media, with an emphasis in Computing, Digital Communications or Digital Arts. My major, Computer Science, didn’t allow me to take many classes outside of the engineering school, but I was always interested in other things, like photography, design and the arts in general, so this new minor was my perfect chance to study those interests. I chose the Digital Media minor with the emphasis in Digital Arts.

I remember that in the first few classes we talked about the dichotomy of analog vs. digital, and how these terms have changed and influenced professions that are becoming more and more popular today.

Now I understand that such a dichotomy is not that accurate, because media is so much more complex than that, and it is part of our socio-technical system. As Dr. Irvine mentions, we sometimes treat digital media as objects rather than artefacts, and we forget that media is a continuum that can be designed to be used in different forms and formats, which nowadays we call digital.

One of my interests is photography, and I always wanted to know more about it. Nowadays it’s so easy to take a picture. In most cases, you only need your phone to snap a photo, and there you have it, ready in seconds. You can download editing programs on your phone, change the filters, make some changes, and now you have a second photo, a reproduction of the first with a few changes in it.

Black and white darkroom photography 

So, I decided to take my interest a bit further with a class in photography, and after talking to the teacher, I found out it was a class in darkroom photography. As I said earlier, today the process of taking a photo is so easy, and going through a film-based class was something very different for me, so I decided to give it a try.

As I found out, for many years the art of photography was a chemical process: images are captured on light-sensitive film, and it takes time (from 40 minutes to more than an hour) to develop just one image. You use different chemical solutions, a developer, a stop bath, and a fixer, and then the film has to dry. It's a whole process of following a recipe, and it's easy to mess up.

Process of developing film using different chemicals

A darkroom to develop the film

Film photography is usually called analog to distinguish it from digital photography. As I learned, this has to do with the light meter, which is an analog instrument. A light meter is also present in a digital camera, but there you have digital control over brightness and the amount of light.

Darkroom photography was one of the best experiences I have had for understanding how technology transforms from one medium to another.

Sure, I agree that digital cameras have made the process of taking a picture easy, and you are in control of all the elements needed to capture the perfect shot, but the two media share similar characteristics too.

In a digital camera the image is captured and stored on a memory card; in black-and-white film photography, the image is "stored" on the film. For both, you have to understand the basic concepts of angle, light, and distance from the object you are trying to capture.

A big advantage of a digital camera is reduced cost, since you don't have to keep buying chemicals and film. You can simply re-use the memory card, saving the files from your camera to your computer and using the card over and over again.

So, in a way, we design technology and digital media to make our lives easier, in the sense that we have more control over the products we make. But the beauty of a medium is that it can be transformed and evolve into different forms; in a way it is never lost, and media in general are part of our sociotechnical system. We can have new design elements and mediations, but we have to keep in mind that all the signs, symbols, and materials are part of our cultural history and will be used in one way or another.



Irvine, Martin. Key Design Concepts for Interactive Interfaces and Digital Media.

Manovich, Lev. The Language of New Media: "What Is New Media?" (excerpt). Cambridge: MIT Press, 2001.

White, Ron, and Timothy Downs. How Digital Photography Works. 2nd ed. Indianapolis, IN: Que Publishing, 2007.

Has consumer culture changed the way we think about the products we use?

As we have learned by studying different concepts throughout this course, there is no "magic" when it comes to the world of technology and computers. But somehow it is so hard for people to understand how something works and why it works in that specific way. While there are many theories and much research in fields like cognitive science and psychology that offer explanations of the human brain and how we perceive information, I firmly believe that by living in a consumer culture we have lost the sense of participating in the process of building things. We can now just buy what we need, make sure that the things we buy work, and never worry about how they work.

Today you hear about the iPhone X and the "new amazing features" it offers its consumers, or maybe you looked at the new Apple MacBook Pro, or Microsoft's Surface laptop with new improvements and more ways to make it interactive. So many new things! And in order to participate in the discussions happening on social media (because who doesn't want to share their personal opinions with the world) and let everyone know how "in" they are with new technologies, people feel they have to buy the newest products, because everyone else seems to use them, and you don't want to fall behind, right?

I have made that mistake too, and part of the reason is that you never actually see what's happening behind the visible layer, behind that blackbox. To cite Bruno Latour, blackboxing is "the way scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become."

Everyone knows about the new features, but I doubt that people actually know the history of how these features were invented, or where they came from.

Lev Manovich, in his book Software Takes Command, makes the point that industry is more supportive of innovative new tech and applications than academia is. Modern business thrives on creating new markets, new products, and new product categories.

To build on his point, new discoveries almost never involve new content but rather new tools to create, edit, distribute, and share that content. Adding new properties to physical media requires modifying its physical substance. But since computational media exists as software, we can add new properties, new plug-ins, and new extensions by combining services and data.

Software lies underneath everything that comes later.

So, the next time you hear about the new cool features of a new product, think of the branding and  marketing side of it.

Ted Nelson and his idea of software, as mentioned in his article Way Out of the Box

"In the old days, you could run any program on any data, and if you didn't like the results, throw them away. But the Macintosh ended that. You didn't own your data any more. THEY owned your data. THEY chose the options, since you couldn't program. And you could only do what THEY allowed you — those anointed official developers." This is a quote from Ted Nelson's article Way Out of the Box.

In his article, Nelson brings to our attention all the possible ways we could do things. Just because some companies (Apple, and later Microsoft) took the paper-simulation approach to the behavior of software doesn't mean that is the only way to do it. They got caught up in the rectangle metaphor of a desktop and used a closed approach. Hypertext remained long rectangular sheets called "pages," which used one-way links.

Nelson recognized computers as a networking tool.

Ted Nelson's network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. … Two-way linking would preserve context. It's a small, simple change in how online information is stored that could have vast implications for culture and the economy.
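To make the idea concrete, here is a minimal sketch in Python (my own illustration, not Nelson's actual design): each node records both its outgoing and its incoming links, so context is preserved in both directions.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.links_to = set()     # nodes this node points to
        self.linked_from = set()  # nodes that point to this node

def link(source, target):
    """Create a two-way-aware link: both endpoints know about it."""
    source.links_to.add(target)
    target.linked_from.add(source)

home = Node("home")
article = Node("article")
link(home, article)

# Unlike a one-way web link, the article knows who links to it:
print([n.name for n in article.linked_from])  # ['home']
```

With one-way links (the web we got), only `home` would have any record of the connection; preserving the reverse direction is exactly what keeps the context Nelson cared about.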

This example demonstrates that we should not get caught up in what the computer industry hands us: software offers plenty of possibilities and new ways to implement things, rather than just one way.

Alan Kay's idea of the computer as a "metamedium," a medium representing other media, was groundbreaking. It is the nature of computational media to be open-ended: new techniques will be invented to generate new tools and new types of media.

Vannevar Bush's 1945 article "As We May Think" discussed the idea of the Memex, a machine that would act as an extension of the mind by allowing its user to store, compress, and add additional information. It would use microfilm, photography, and analog computing to keep track of the data.

You can clearly see the metamedium idea in the Memex. The second stage in the evolution of the computer metamedium is media hybridization, which, as Manovich explains, is when different media exchange properties, create new structures, and interact on the deepest level.

It was Douglas Engelbart who recognized the computer not just as a tool but as part of the way we live our lives. "The Mother of All Demos" demonstrated new technologies that have since become common in computers today: it featured the first computer mouse and introduced interactive text, video conferencing, teleconferencing, email, hypertext, and real-time editing.


All these examples make you think about the different ways software could behave and interact, and how these pioneers kept pushing their tools to new limits to produce creative outcomes, even without access to the technology we have today.

It really is inspiring to look at their work and understand that sometimes it is we who place limitations on our technology, sometimes pushed by the computer industry and other factors. It is crucial to understand that there are no inherent limits to the development of software and graphical interfaces for creating new forms of human-computer interaction (HCI).


Bush, Vannevar. "As We May Think." The Atlantic, July 1945.

Engelbart, Douglas. "Augmenting Human Intellect: A Conceptual Framework." First published 1962. Reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93-108. Cambridge, MA: The MIT Press, 2003.

Latour, Bruno. "On Technical Mediation," as re-edited with the title "A Collective of Humans and Nonhumans — Following Daedalus's Labyrinth," in Pandora's Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard University Press, 1999, pp. 174-217. (Original version: "On Technical Mediation." Common Knowledge 3, no. 2 (1994): 29-64.)

Manovich, Lev. Software Takes Command. New York: Bloomsbury, 2016. Print.

Nelson, Theodor Holm. "Way Out of the Box." EPrints, 3 Oct. 2009. Web.

Computer Science, the science that teaches you how to find solutions to everyday problems

Every time I answered the question of what my major was, I usually got the same response: "Oh, that must be so hard, I could never do it."

And the irony is that EVERYONE is capable of learning how to code. As Denning says, "Computing may be the fourth great domain of science along with the physical, life, and social sciences," which to me means it is a science like any other; we just need to give it a try. In my opinion, everything looks hard if it is unknown. As humans, we would rather do something we feel comfortable with than "take a leap into the unknown" and get comfortable with the uncomfortable.

Computer Science was not offered everywhere as a major, but now you can learn how to code and program online

Back home in Albania, Computer Science was not a popular program for girls, and the stereotypes of guys putting their headphones on, not socializing, and doing their own thing were true. By the time I was ready to graduate from high school and continue my education, Computer Science had become such a hot topic, and everyone was encouraging students to give this major a try and see if we liked it. Computer Science received attention in the media, and certain programs talked about all the different opportunities this major could offer.

So, I decided to study Computer Science as my undergraduate major. I was always interested in how computers work and how a computer understands a certain command; specifically, I wanted to find "the magic" behind it. I didn't have previous experience with programming. My high school didn't offer Computer Science classes, and frankly I didn't know what to expect, but knowing that I could change my major at any time if I didn't like it put me at ease.

My first programming class was Java 101. At the beginning I don't think I really knew what I was doing, but as the class progressed, I came to understand how powerful coding and programming can be.

Give a computer the most complicated exercise in finding a certain value, and it will never disappoint you

For a computer, calculating values is the easiest thing in the world. In last week's readings on information theory, we saw that for a computer the smallest unit of storage is a bit, which stores either a 0 or a 1. This seems very inefficient to us, because we are used to working in base 10, but for a computer, base 2 does all the work. You combine bits to form bytes, and one byte gives you 256 combinations; combine two bytes and you get far more, and all of a sudden you have all these switches working at the same time. That is powerful.
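You can check this arithmetic directly in Python:

```python
# One byte = 8 bits, so it holds 2**8 = 256 distinct combinations;
# two bytes give 2**16 = 65,536, and so on.
bits_per_byte = 8
print(2 ** bits_per_byte)        # 256 combinations in one byte
print(2 ** (2 * bits_per_byte))  # 65536 combinations in two bytes

# The character 'A' stored as one byte is just the bit pattern 01000001:
print(format(ord("A"), "08b"))   # 01000001
```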

What is fascinating to me is the use of symbols and the grammar we use when we program. It is like learning a new language, in the sense that you have to follow that language's grammatical rules. There are many different languages you can choose from, based on what your end goal is.

Symbols are powerful because not only do they represent different data structures, but they also give meaning to the commands we write.

Python is one of my favorite languages. Commands take fewer lines to write (compared to Java), which makes the code easier to read and understand. As part of the assignment, I opened an account with Codecademy, and, having finished my undergraduate degree, it reminded me of my first experience with programming and how everything started from learning the syntax, defining variables, assigning values, and running the code.
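A first Python lesson really is just a few lines, in the spirit of those early Codecademy exercises (the values here are my own illustration):

```python
# Define variables, assign values, run the code.
name = "Linda"
year = 2017
greeting = "Hello, " + name + "! It is " + str(year) + "."
print(greeting)  # Hello, Linda! It is 2017.
```

The equivalent Java program would need a class declaration, a `main` method, and explicit types before it could print a single line, which is what I mean by Python doing more in fewer lines.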

But apart from coding and writing programs to do a certain task, what computer science mostly taught me was logic. We use algorithms all the time, every day. Whenever you read a manual on how to operate a new piece of equipment, you followed an algorithm; whenever you had to choose between two things (say, two ways to get from your house to school), you made a decision based on some variable, such as time, and chose the shortest way to school.

Computer Science helps you find solutions to the different problems we face, not just homework assignments. Thinking "algorithmically" about the world helps you tackle a problem fundamentally: breaking it down into its simplest parts, studying it, and finding better fixes for possible errors, just like running a program in the console.
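The route-to-school decision above can be written as a tiny program (the route names and times are made up for illustration):

```python
# Choose between routes based on one variable: travel time in minutes.
routes = {"main street": 25, "highway": 15}  # hypothetical values

# min() with a key function is the whole "algorithm":
best = min(routes, key=routes.get)
print(best)  # highway
```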

Nowadays we can combine the power of computing and programming with any other discipline, and the options and opportunities for what can be achieved are limitless. From the social sciences to the humanities, fine arts, engineering, science, and technology, we can expand our curiosity and knowledge, and we can start just by taking the first step into getting uncomfortable until it becomes comfortable.


Denning, Peter J., and Craig H. Martell. Great Principles of Computing. Cambridge, MA: The MIT Press, 2015.

Evans, David. Introduction to Computing: Explorations in Language, Logic, and Machines. Oct. 2011 edition.

Irvine, Martin. Introduction to Computation and Computational Thinking.

Verma, Adarsh. "How To Pick Your First Programming Language (4 Different Ways)." Fossbytes, 6 Mar. 2017. Web.

“The book of faces” and the meaning of it

There is so much information around us. As Floridi puts it, information is notorious for coming in many forms and having many meanings. Over the past decades, it has been common to adopt a General Definition of Information (GDI) in terms of data and meaning. That means we can manipulate it, encode it, and decode it, as long as the data comply with the meanings (semantics) of a chosen system, code, or language. There has also been a transition from analog data to digital data. The most obvious difference is that analog data can only record information (think of vinyl records), while digital data can encode information rather than just recording it.

But how is the information measured?

Claude Shannon, in his publication "A Mathematical Theory of Communication," used the word bit to measure information; as he said, the bit is the smallest unit of information. A bit has a single binary value, either 0 or 1.
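Shannon's measure can be stated in one line: choosing among n equally likely messages takes log2(n) bits. A quick check with Python's standard library, under that equal-likelihood assumption:

```python
import math

# One bit distinguishes between 2 equally likely messages;
# one byte (8 bits) distinguishes between 256 of them.
print(math.log2(2))    # 1.0
print(math.log2(256))  # 8.0
```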

When I think of information, I almost never associate it with data, but rather with meaning. In a way, information for me serves the function of communicating a message. But when we look at how the message is sent and delivered, that is when we can see the data in it.

Now that we know the process, let’s take a look at Facebook, a social network that is changing our society.

History of Facebook

Facebook was created as part of a class project at Harvard. Mark Zuckerberg created FaceMash, a program that compared two photos taken from the students' directory books and let the user decide which of the two was hotter. The site was shut down within a few hours, since it violated copyright and privacy rules. After this, Zuckerberg created "The Facebook."

In 2003 there was no universal online facebook at Harvard, only paper directories that held photos and basic information about the students. So Zuckerberg had the idea to create an online directory, and that's how Facebook started.

From this example, we see the change going from analog to digital, and we see the function and meaning behind Facebook, which was to create an online directory of Harvard students. But today, Facebook has evolved and changed its meaning and function.

Today, Facebook participates in polls during political campaigns; we share information from different fields with our friends; we connect with people from all over the world; we text, sending messages, photos, videos, and more.

The process of sending a text message through Facebook

When we send a text, an audio cue and a symbol confirm it. When you send a private message, including audio or video, to your friend, the message is encoded through the interface of your phone or computer, then sent through the wireless network or cellular carrier to Facebook's database software. The message is then decoded and delivered to your friend. This process takes seconds, and you never think of the information transmission and data storage happening behind the screen.
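A simplified sketch of just the encode/decode step (the real pipeline involves many more layers of networking and storage): the text you type becomes a sequence of bytes before transmission, and is decoded back into text on arrival.

```python
# Encode a message to bytes (UTF-8), as happens before transmission,
# then decode it back, as happens on your friend's device.
message = "See you at 5!"
encoded = message.encode("utf-8")   # what actually travels as data
print(list(encoded[:3]))            # [83, 101, 101] -> just numbers
decoded = encoded.decode("utf-8")   # what your friend's app displays
print(decoded == message)           # True
```

This is data and meaning in miniature: the bytes are the data, and only the shared code (UTF-8 here) lets both ends recover the meaning.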

The information age and the digital revolution have helped people create and design different technologies that use different means of communication.

Information theory helps us design physical architectures that can encode, decode, and send a message, but the combination of information theory and semiotics (signs, symbols) gives us, as humans, a more meaningful experience. To me that means one cannot go without the other, just as data and meaning both go into the definition of information.


Bell, Tim, and Peter Denning. "The Information Paradox." American Scientist 100, Nov.-Dec. 2012.

Floridi, Luciano. Information: A Very Short Introduction. Oxford, UK: Oxford University Press, 2010.

Gleick, James. Excerpts from The Information: A History, a Theory, a Flood. New York, NY: Pantheon, 2011.

Irvine, Martin. Introduction to the Technical Theory of Information.

Phillips, Sarah. “A Brief History of Facebook.” The Guardian. Guardian News and Media, 25 July 2007. Web.

Amazon's website: interfaces, affordances, and constraints

This week's readings were about affordances, constraints, interfaces, and interaction design. As Murray points out in her article, digital artifacts pervade our lives, and the design decisions that shape them affect the way we think, act, understand the world, and communicate with one another.

We always find new ways of designing new media; that's part of our human nature. But it is critical to have a good design process, keeping in mind that a design should serve a function and have a purpose.

Amazon’s website now and then

Let’s take a look at Amazon’s website since its first launch and compare it with today’s website.

Amazon’s 1994 homepage

Today’s website

Constraints, Interfaces, Affordances

As we can see, a lot has changed. The old website had many constraints. First, the web page was not designed to be accessed from a phone, so it was not responsive to different screen sizes. There is the logical constraint of scrolling down. The interface design was not very interactive: you see a lot of text, and it constrains the customer to follow a linear path with little interaction.

Today's website is much more interactive, with more options and functions. The simple website has evolved drastically and has changed the way people shop, from the convenience of their homes. You can shop for books, electronics, clothes, jewelry, shoes, food, home supplies, and so much more. But even the most well-designed website has constraints, and that's just the way it is.

The most obvious constraint is that you have to have power and an internet connection in order to shop online. Another constraint is that you have to have a valid credit or debit card to make the payment, and then create a profile with your personal information.

I do like the interface. It is intuitive to me, in the sense of what we see and what we can operate. The search bar is the first step in looking for an item you want to purchase. Of course, a constraint on the experience is that you cannot physically examine the object, but the reviews that customers leave usually give you an idea about the item you're looking at. Giving customers the option to rate a product and leave a review is also meant to make the experience more enjoyable.

Let's look at the digital affordances of this website. The labels for the different departments help the customer find a product. The shopping-cart symbol conveys the idea that you put the items you buy in a cart, kind of mimicking the physical experience. The website is now responsive, so you can access it on your phone as well.

I think the purpose of Amazon's website has changed a lot since it first started. It is so convenient to "go shopping" from the comfort of your home, saving time and, in most cases, money. But it is changing the experience and the very meaning of shopping. It makes you think that all these conveniences are making us lazy, in the sense that we don't even want to go out shopping anymore because we can do it online. As for the design process, it will not be long before the website incorporates audio, so you could just activate the speaker and tell it to search for a product; you won't even have to type it.


Murray, Janet. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.

Kaptelinin, Victor. “Affordances.” The Encyclopedia of Human-Computer Interaction, 2nd Ed., 2013.

Norman, Donald A. "Affordance, Conventions, and Design." Interactions 6, no. 3 (May 1999): 38-43.

Quito, Anne. Amazon's 1994 homepage. Digital image. 18 July 2016. Web.

Bryant, Miranda. Amazon's website. Digital image. Evening Standard, 27 June 2013. Web.

Sociotechnical artefacts and the mediation behind them

Sociotechnical system

The way people interact with one another and relate to nature is determined by the technological resources at their disposal. Technical artifacts have come to dominate the world we live in today, from the most common objects to the most complex systems. For a technical artefact to "survive" over time, it must have both a function and a use plan, as mentioned in A Philosophy of Technology. But how did we come to talk about sociotechnical artefacts and understand the mediation behind them?

Latour argues against the view that "the material" and "the social world" are unrelated. When you think about it, the idea of separating them is absurd. Even the term "sociotechnical" itself suggests a codependent relationship between society and technology. As Irvine states, a sociotechnical system is a system of interconnected agency and co-dependency. This idea can be seen in the human-machine interface, where different actions are designed to pass back and forth between human agents and machine artefacts. Indeed, it is human agents who design technological tools and objects, which are then combined with symbolic cognition.


The idea of Blackboxing

This is a complex system because a lot of the work happens "behind the scenes." Let's take the example of an iPhone interface. We have the artefact in our hands, and it does so many things: playing the music and movies we want, sending a text to a friend, scheduling an appointment and putting it in the calendar, playing games, and so on. The main idea is that the iPhone itself does not do any of these actions; it is designed by human actors to do all of them. It is difficult to measure the mediating role of techniques because, as Latour puts it, the action we are trying to measure is subject to blackboxing: scientific and technical work is made invisible as long as the action works. Making the invisible visible takes time and can be tricky, but it is the best option we have for understanding the hidden dependencies in a complex system.

So now we mostly never worry about how something works; we just want to make sure that it works. We focus only on inputs and outputs, not on the internal complexity.

We live in a consumer culture

And to me, that has to do with living in a world where we are consumers, in a consumer culture. Since most of us don't participate in the actual making of an artefact or a new object, we tend to take its existence for granted, as if these new objects magically appeared and became part of our everyday life. Building something from scratch on your own gives you a feeling that you don't get when you just buy a product. Think for a moment of when you were a child and made your first science project, or helped your grandmother grow vegetables in the garden, or changed the oil of a car with your father. Some people still do these things because it gives them a sense of purpose, while others just have someone change the oil for them, or go to the supermarket and buy all the vegetables and products they need.

Barbara Kruger's "I Shop Therefore I Am" (1987)

Of course, there are many factors to take into consideration, and in one way or another you will be a consumer. But my point is not to take things for granted, and to remember that somebody went through the whole process of creating something; it didn't just magically appear.

The medium is the message

Marshall McLuhan's dictum "the medium is the message" makes me think of all the different media, such as speech, writing, images, and video, and how we have incorporated them, using different tools and methods, into the sociotechnical system. By creating and designing different objects, from TVs and radios to tablets, laptops, and cell phones, we use the different media as interfaces to create more complex systems. I think that all these channels of communication technology, much like energy under the law of conservation, can never be lost or destroyed, only transformed from one medium to another.

Any time we look at an artefact or a complex system, we need to keep these questions in mind: What is it for? What does it consist of? How was it designed and created? How must it be used? By trying to answer these questions, we start the reverse-engineering process of deblackboxing, and this helps us understand the object and the system itself.


Irvine, Martin. "Understanding Sociotechnical Systems with Mediology and Actor Network Theory (with a De-Blackboxing Method)."

Vermaas, Pieter, Peter Kroes, Ibo van de Poel, Maarten Franssen, and Wybo Houkes. A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems. San Rafael, CA: Morgan & Claypool Publishers, 2011.

Debray, Régis. "What is Mediology?" Le Monde Diplomatique, Aug. 1999. Trans. Martin Irvine.

Latour, Bruno. "On Technical Mediation," as re-edited with the title "A Collective of Humans and Nonhumans — Following Daedalus's Labyrinth," in Pandora's Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard University Press, 1999, pp. 174-217. (Original version: "On Technical Mediation." Common Knowledge 3, no. 2 (1994): 29-64.)

Iz Quotes. The Medium Is the Message. Digital image. Izquotes, n.d. Web.

Science, Technology and Society Triangles. Digital image. Pinimg, n.d. Web.

Kruger, Barbara. I Shop Therefore I Am. 1987. Digital image. Web.


Understanding how the human brain works can lead to new, exciting technologies

Symbolic cognition can bridge the gap between humans and technologies, mind and language. As Colin Renfrew argues in his article, symbolic culture has undergone different phases and transitions, from episodic culture (characteristic of primate cognition) to external symbolic storage (characteristic of urban societies). Although it is hard to fully understand these transitions, one thing is clear: the human brain and intelligence have been evolving, adjusting to their environment, conditions, and cultural background from one historic period to another.

As seen in babies and kids, a lot of the learning process is done by frequent repetition. The more they grow up, the more you see them incorporating logic and cognitive artifacts to regulate their interactions with the world. Cognitive artifacts, according to Cole, are simultaneously ideal (conceptual) and material. We learn a language by studying its symbols, and we use different tools to produce material products. An example of this idea would be going from studying a language, to using that language to write books, which are then stored in libraries (physical or digital), which we then use to develop new technologies. This will always be an ongoing process, and in studying the cognitive continuum, it is important to take a step back and see how we got to where we are today, because sometimes we take technologies and tools for granted when it is obvious that they are, and always will be, correlated.

By studying cognitive science we can find ways to enhance human abilities. As Norman mentions in his article, much of this study has been done in contemporary cognitive science, despite the importance of discoveries from the early days of psychological and anthropological investigation. Let us look at an example of how we draw on the brain for new technologies.

NYT, Computers are taking design cues from human brains.

In a recent article in The New York Times, "Chips Off the Old Block: Computers Are Taking Design Cues From Human Brains," I read that new technologies are testing the limits of computer semiconductors. To deal with that, researchers have gone looking for ideas in nature, namely the human brain. For a long time, computer engineers built systems around a single chip, the CPU. Now machines are dividing work into tiny pieces and spreading them among many simpler, specialized chips that consume less power. Companies like Microsoft are using neural networks to improve their products and services. Neural networks have been applied to a variety of tasks and systems, including computer vision, speech recognition, and machine translation, among many other domains. Systems that rely on neural networks can learn largely on their own: the system learns by studying different patterns repeatedly, which requires a lot of trial and error, tweaking the algorithm on the training data over and over. In other words, my point is that even though we are creating exciting new technologies, we still need the human brain to make it all possible.
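As a toy illustration of that "learning by trial and error," here is a single artificial neuron (a perceptron, the simplest ancestor of today's neural networks; the data and learning rate are my own made-up example). It adjusts its weights over repeated passes until it computes the logical AND function; real neural networks do this same kind of tweaking at enormous scale.

```python
# Training data: inputs and the target output of logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # weights, adjusted by trial and error
bias = 0.0
rate = 0.1      # how big each corrective tweak is

for _ in range(20):  # repeated passes over the same patterns
    for (x1, x2), target in data:
        output = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = target - output
        # nudge the weights in the direction that reduces the error
        w[0] += rate * error * x1
        w[1] += rate * error * x2
        bias += rate * error

predictions = [1 if w[0]*x1 + w[1]*x2 + bias > 0 else 0
               for (x1, x2), _ in data]
print(predictions)  # [0, 0, 0, 1]
```

Notice that nobody wrote an "AND rule" into the code; the rule emerges from repetition and correction, which is the pattern the article describes at chip scale.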


Irvine, Martin. "Introduction to Cognitive Artefacts and Semiotic Technologies."

Renfrew, Colin. "Mind and Matter: Cognitive Archaeology and External Symbolic Storage." In Cognition and Material Culture: The Archaeology of Symbolic Storage, edited by Colin Renfrew, 1-6. Cambridge, UK: McDonald Institute for Archaeological Research, 1999.

Cole, Michael. On Cognitive Artifacts. From Cultural Psychology: A Once and Future Discipline. Cambridge, MA: Harvard University Press, 1996. Connected excerpts.

Norman, Donald A. "Cognitive Artifacts." In Designing Interaction, edited by John M. Carroll, 17-38. New York, NY: Cambridge University Press, 1991. Read pp. 17-23.

Metz, Cade. "Computers Are Taking Design Cues From Human Brains." The New York Times, 17 Sept. 2017. Web. Retrieved September 26, 2017.