
Chatbots and Emojis For an Improved Human Experience

 Chirin Dirani


With the growing use of conversational user interfaces comes the need for a better understanding of the social and emotional characteristics embedded in online dialogues. Text-based chatbots, in particular, face the challenge of conveying human-like behavior while being restricted to a single channel of interaction: text. The aim of this paper is to investigate whether it is possible to normalize and formalize the use of emojis as a comprehensive and complete means of communication. In an effort to answer this question, and as a primary source, the paper examines the findings of a 2021 study from the University of Virginia, How emoji and word embedding helps to unveil emotional transitions during online messaging. The study found that chatbot design can be enhanced to understand the “affective meaning of emojis.” With that ability, chatbots will be better able to grasp the social and emotional state of their users and subsequently conduct a more naturalistic conversation with humans. The paper concludes by calling for more empirical research on chatbots that use emojis for emotionally intelligent online conversations.


Throughout history, humans have established relationships using explicit means of communication, such as words, and implicit ones, such as body language. Most of these relationships were developed through face-to-face interactions. Body language delivers important visual clues about what is said; small clues such as facial expressions or gestures add a great deal of meaning to our words. In the last few decades, with the growing use of different forms of technology, people shifted to communicating through text and voice messaging online. Chatbots, the common name for voice assistants, virtual assistants, and text chat agents, are an important technology with various implementations. Despite the widespread use of this service, especially in support of businesses, chatbot technology still lacks efficiency due to the absence of body language. In this paper, I will explore the impact of using other informal means of communication in the chatbot texting service to replace body language and identify emotions. Using emojis to infer emotional changes during chatbot texting is one of the means I propose. In an effort to make text-based chatbots’ conversations with humans more efficient, I will try to answer the question of whether it is possible to normalize and formalize the use of emojis for a comprehensive and enhanced means of communication. For this purpose, I will start by looking into the history of the text chatbot service, and will then deblackbox the different layers, levels, and modules composing this technology. The aim of this paper is to contribute to finding solutions for the urgent challenges facing one of the fastest-growing services in the field of technology today.

Definition of Chatbots

According to Lexico, a chatbot is “a computer program designed to simulate conversation with human users, especially over the internet.” It is both an artificial intelligence program and a Human–Computer Interaction (HCI) model. This program uses natural language processing (NLP) and sentiment analysis to conduct an online conversation with humans or other chatbots via text or oral speech. Michael Mauldin coined the term “ChatterBot” for this kind of software in 1994, after creating the first verbot, Julia. Today, chatbot is the common name for artificial conversation entities, interactive agents, smart bots, and digital assistants. Due to their flexibility, chatbot digital assistants have proved useful in many fields, such as education, healthcare, and business. They are also used by organizations and governments on websites, in applications, and on instant messaging platforms to promote products, ideas, or services.

The interactivity of technology, in combination with artificial intelligence (AI), has greatly improved the ability of chatbots to emulate human conversation. However, chatbots still cannot match human conversational skills, because they are not yet fully developed to infer their users’ emotional states. Progress is achieved every day, and chatbots are gradually becoming more intelligent and more aware of their interlocutors’ feelings.

The Evolution History of Chatbots

What is regarded today as a benchmark for Artificial Intelligence (AI), the “Turing test,” is rooted in Alan Turing’s well-known 1950 paper, Computing Machinery and Intelligence. The overall idea of Turing’s paper is that machines, too, can think and are intelligent. We can consider this the starting point of bots in general. For Turing, “a machine is intelligent when it can impersonate a human and can convince its interlocutor, in a real-time conversation, that they are interacting with a human.”

In 1966, the German-American computer scientist and Massachusetts Institute of Technology (MIT) professor Joseph Weizenbaum built on Turing’s idea to develop the first chatterbot program in the history of computer science: ELIZA. This program was designed to emulate a therapist who would ask open-ended questions and respond with follow-ups. The main idea behind the software was to make ELIZA’s users believe that they were conversing with a real human therapist. For this purpose, Weizenbaum programmed ELIZA to recognize certain keywords in the input and to regenerate an answer using those keywords from a pre-programmed list of responses. Figure 1 illustrates a human conversation with ELIZA. It shows clearly how the program picks up a word and responds by asking an open-ended question. For example, when the user said, “He says that I’m depressed much of the time,” ELIZA took the word “depressed” and used it to formulate its next response, “I am sorry to hear that you are depressed.” These open-ended questions created an illusion of understanding and of interaction with a real human being, while the whole process was automated. PARRY, developed in 1972, was a more advanced successor to ELIZA, designed to act like a patient with schizophrenia. Like ELIZA, PARRY was a chatbot with limited capabilities in terms of understanding language and expressing emotions. In addition, PARRY responded slowly and could not learn from the dialogue.
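ELIZA’s keyword-and-template mechanism can be sketched in a few lines of Python. This is an illustrative reconstruction, not Weizenbaum’s original script; the rules and wording below are invented for the example.

```python
import re

# A minimal ELIZA-style responder: match a keyword pattern in the input
# and reflect the captured text back as an open-ended follow-up.
# These rules are hypothetical stand-ins for Weizenbaum's script.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "I am sorry to hear that you are {0}."),
    (re.compile(r"\bI'?m (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bmother\b", re.IGNORECASE),
     "Tell me more about your family."),
]
DEFAULT = "Please go on."

def respond(user_input: str) -> str:
    # Try each rule in order; fall back to a neutral prompt,
    # exactly the trick that sustains the illusion of understanding.
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            groups = match.groups()
            return template.format(*groups) if groups else template
    return DEFAULT

print(respond("I am depressed"))
# -> I am sorry to hear that you are depressed.
```

Note that the program never models meaning: it only echoes fragments of the input inside canned templates, which is why the conversation collapses as soon as no rule matches.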

Figure 1: A human conversation with ELIZA. Source:

The British programmer Rollo Carpenter pioneered the use of AI in his chatbot Jabberwacky, back in 1982. Carpenter aimed to simulate a natural human chat that could pass the Turing test. “Jabberwacky was written in CleverScript, a language based on spreadsheets that facilitated the development of chatbots, and it used contextual pattern matching to respond based on previous discussions.” Like its predecessors, however, Jabberwacky could not respond at high speed or deal with large numbers of users.

The actual evolution of chatbot technology happened in 2001, when it was made available on messengers such as America Online (AOL) and Microsoft (MSN). This new generation of chatbots “retrieved information from databases about movie times, sports scores, stock prices, news, and weather.” This improvement paved the way for real development in machine intelligence and human–computer communication.

A further improvement to AI chatbots took place with the development of smart personal voice assistants, built into smartphones and home speaker devices. These voice assistants receive voice commands, answer in a digital voice, and perform tasks such as monitoring home automation devices, calendars, email, and other applications. Multiple companies introduced voice assistants: Apple Siri (2010), IBM Watson (2011), Google Assistant (2012), Microsoft Cortana (2014), and Amazon Alexa (2014). The main distinction between this new generation of chatbots and the old one is the quick, meaningful response to the human interlocutor.

By many accounts, 2016 was the year of chatbots. That year saw substantial development in AI technology, in addition to the introduction of the Internet of Things (IoT) into the field of chatbots. AI changed the way people communicated with service providers, since “social media platforms allowed developers to create chatbots for their brand or service to enable customers to perform specific daily actions within their messaging applications.” The integration of chatbots into the IoT scenario opened the door wide for the implementation of such systems. Thanks to developments in natural language processing (NLP), and in contrast to ELIZA, today’s chatbots can share personal opinions and are more relevant in their conversation, though they can be vague and misleading as well. The important point to note here is that chatbot technology is still being developed and has not yet realized its full potential. This brief historical overview tells us that although the technology has experienced rapid development, it still promises a world of possibilities if properly utilized.

Chatbot Categories  

There are several ways to categorize chatbots (see Figure 2). First, they can be categorized by purpose, as either assistants or interlocutors. Assistant chatbots are developed to help users with daily activities, such as scheduling an appointment, making a phone call, or searching for information on the internet. Second, chatbots can be grouped by communication technique: text, voice, image, or all of them together. Recently, chatbots have become able to respond to a picture, comment on it, and even express emotions towards it. The third categorization relates to the chatbot’s knowledge domain, that is, the access range provided to the bot. Based on the scope of this access, a bot can be either generic or domain-specific: while generic bots can answer questions from any domain, domain-specific chatbots respond only to questions about a particular field of knowledge. A related grouping concerns the service provided: interpersonal chatbots offer services without being a friendly companion; intrapersonal chatbots are close companions that live in their user’s domain; and inter-agent chatbots can communicate with other chatbots, as Alexa and Cortana do. The fourth categorization is by class. Under this scheme, chatbots fall into three main classes: informative chatbots give their users information, usually stored in a fixed source; chat-based or conversational chatbots conduct a natural, human-like conversation with their users; and task-based chatbots handle different functions and excel at requesting information and responding to the user appropriately. It is also worth mentioning that the method a chatbot uses to generate its response classifies it as rule-based, retrieval-based, or generative.
This paper will focus on the class of bots that use text as their means of communication.

Figure 2: Chatbot categories.

The Chatbots Technology

Depending on the algorithms and techniques used, there are two main approaches to developing chatbot technology: pattern matching, and pattern recognition using machine learning (ML) algorithms. In what follows, I provide a brief description of each technique; this paper, however, is concerned with AI/ML pattern-recognition chatbots.

Pattern Matching Model  

This technique is used in rule-based chatbots such as ELIZA, PARRY, and Jabberwacky. In this case, chatbots “match the user input to a rule pattern and select a predefined answer from a set of responses with the use of pattern matching algorithms.” In contrast to knowledge-based chatbots, rule-based ones are unable to generate new answers, because their knowledge comes from their developers, who encode it in the form of conversational patterns. Although these bots respond quickly, their answers are scripted rather than spontaneous like those of knowledge-based chatbots. Three main languages are used to develop chatbots with the pattern-matching technique: Artificial Intelligence Markup Language (AIML), RiveScript, and ChatScript.
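The rule-pattern mechanism can be illustrated with a small Python sketch in the spirit of AIML’s wildcard patterns. The patterns and replies below are hypothetical, invented for this example rather than taken from any real AIML set.

```python
import re

def compile_pattern(pattern: str) -> re.Pattern:
    # Turn an AIML-like pattern ("MY NAME IS *") into a regex:
    # literal words are escaped, "*" captures any sequence of words.
    parts = [r"(.+)" if w == "*" else re.escape(w) for w in pattern.split()]
    return re.compile(r"^\s*" + r"\s+".join(parts) + r"\s*$", re.IGNORECASE)

# Hypothetical rule set: (pattern, response template) pairs.
RULES = [
    ("MY NAME IS *", "Nice to meet you, {0}!"),
    ("WHAT IS YOUR NAME", "I am a simple rule-based bot."),
]
COMPILED = [(compile_pattern(p), t) for p, t in RULES]

def reply(text: str) -> str:
    # Select the predefined answer whose pattern matches the input.
    for pattern, template in COMPILED:
        m = pattern.match(text)
        if m:
            return template.format(*m.groups())
    return "I do not understand."

print(reply("my name is Ada"))
# -> Nice to meet you, Ada!
```

Every possible answer here is authored in advance, which is exactly why rule-based bots respond quickly yet cannot generate genuinely new replies.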

Pattern Recognition Model: AI/ML Empowered

The main distinction between pattern-matching and pattern-recognition bots, or in more formal terms, rule-based and knowledge-based bots, is the presence of Artificial Neural Network (ANN) algorithms in the latter. By using AI/ML algorithms, these relatively new bots can extract content from their users’ input using natural language processing (NLP) and can learn from conversations. These bots need an extensive training data set, as they do not rely on a predefined response for every input. Today, developers use ANNs in the architecture of ML-empowered chatbots. It is useful to mention here that retrieval-based chatbots use ANNs to select the most relevant response from a set of candidates, while generative chatbots synthesize their replies using deep learning techniques. The focus of this paper is on chatbots using deep learning methods, since this is the dominant technology in today’s chatbots.
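The retrieval-based idea of scoring candidate responses against the input can be shown with a toy Python sketch. A bag-of-words cosine similarity stands in for the ANN scorer here, and the candidate triggers and responses are invented for illustration.

```python
import math
from collections import Counter

# Hypothetical candidate pool: trigger utterance -> canned response.
CANDIDATES = {
    "what are your opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
}

def bow(text: str) -> Counter:
    # Bag-of-words vector: word -> count, lowercased.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(user_input: str) -> str:
    # Score every candidate trigger against the input and
    # return the response with the highest similarity.
    scored = [(cosine(bow(user_input), bow(trigger)), resp)
              for trigger, resp in CANDIDATES.items()]
    return max(scored, key=lambda t: t[0])[1]
```

A real retrieval-based chatbot replaces the cosine score with a trained neural ranking model, but the select-the-best-candidate structure is the same.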

Deblackboxing Chatbot Technology

Uncovering the different layers, levels, and modules in a chatbot will help us better understand this technology and the way it works. In fact, there are many designs, varying with the type of chatbot. The following description presents the key design principles and the general architecture common to most chatbots.

Figure 3: The general architecture of an AI chatbot. Source: How emoji and word embedding helps to unveil emotional transitions during online messaging

Figure 3 shows the different layers of operation within a chatbot: the user interface layer, the user message analysis layer, the dialog management layer, the backend layer, and finally the response generation layer. The chatbot process begins when the software receives the user’s text input through an application. The input is then sent to the user message analysis component to find the user’s intention, following either the pattern matching or the machine learning approach. In this layer, Natural Language Processing (NLP) breaks the input down, comprehends its meaning, and corrects the user’s spelling mistakes. The user’s language is identified and converted into a structured representation by Natural Language Understanding (NLU), a “subset of NLP that deals with the much narrower, but equally important facet of how to best handle unstructured inputs and convert them into a structured form that a machine can understand” and act upon. The dialog management layer then controls and updates the conversation context; it also asks follow-up questions after the intent is recognized. After intent identification, the chatbot proceeds to respond or to retrieve information from the backend. The chatbot retrieves the information needed to fulfill the user’s intent from the backend through external Application Programming Interface (API) calls or database requests. Once the appropriate information is extracted, it is forwarded to the dialog management module and then to the response generation module, which uses Natural Language Generation (NLG) to turn structured data into a text output that answers the original query.
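The flow through these layers can be sketched schematically in Python. Every function body below is a hypothetical stand-in (a hard-coded weather intent, a stubbed backend), not a real framework; only the layered structure mirrors the architecture described above.

```python
def analyze_message(text: str) -> dict:
    # User message analysis layer: NLU turns unstructured text
    # into a structured intent plus entities. (Toy keyword check
    # with a hard-coded city, standing in for a real NLU model.)
    if "weather" in text.lower():
        return {"intent": "get_weather", "entities": {"city": "London"}}
    return {"intent": "unknown", "entities": {}}

def backend_lookup(intent: dict) -> dict:
    # Backend layer: stub for an external API call or database request.
    if intent["intent"] == "get_weather":
        return {"city": intent["entities"]["city"], "forecast": "sunny"}
    return {}

def generate_response(intent: dict, data: dict) -> str:
    # Response generation layer (NLG): structured data -> text.
    if intent["intent"] == "get_weather":
        return f"The forecast for {data['city']} is {data['forecast']}."
    return "Sorry, I did not understand that."

def handle(user_text: str) -> str:
    # Dialog management layer: routes data between the other layers.
    intent = analyze_message(user_text)
    data = backend_lookup(intent)
    return generate_response(intent, data)

print(handle("What is the weather today?"))
# -> The forecast for London is sunny.
```

The point of the sketch is the separation of concerns: each layer can be upgraded independently, which is what makes adding a new layer (such as emoji-based emotion tracking) architecturally feasible.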

The chatbot architecture is supported today by three important technology trends: AI/ML algorithms, Big Data, and cloud computing. AI/ML contributes intelligent algorithms capable of learning on the go; these are the artificial neural networks (ANNs), which learn from training data and give chatbot outputs greater “accuracy” (a lower error rate). Big Data feeds these data-hungry ANN algorithms with vast amounts of data, which in turn enriches the chatbot’s backend storage. The resulting volume of AI-trained chatbot output data then needs the scalability and extensibility offered by cloud computing, in the form of cheap, extensible storage. This unique combination “offers huge advantages in terms of installation, configuration, updating, compatibility, costs and computational power” for chatbots.

The above shows that chatbot technology is complex and intricate. Nevertheless, it is at the same time flexible and can easily be further developed and upgraded with new layers. Having analyzed the chatbot’s layers and gained a better understanding of the role each plays, we can incorporate our desired upgrades and prepare for a new generation of chatbots able to relate to their interlocutors’ emotions over text.

Discussion around the Main Argument

As humans, we develop relationships through everyday face-to-face interactions. Body language delivers important visual clues about what we say; small clues such as facial expressions or gestures add a great deal of meaning to our words. In the 1960s, Professor Albert Mehrabian formulated the 7-38-55% communication rule about the role of nonverbal communication and its impact during face-to-face exchanges (see Figure 4). According to this rule, “only 7% of communication of feelings and attitudes takes place through the words we use, while 38% takes place through tone and voice and the remaining 55% of communication take place through the body language we use.”

Figure 4: Theory of communication. Source: 

In the last twenty years, with the growing use of different forms of technology, people have shifted to communicating through text and voice messaging in the online space. Chatbots are one of the important technologies with various implementations. Despite their widespread use, chatbots still lack efficiency due to the absence of body language and of the ability to infer the emotions, feelings, and attitudes of their interlocutors. To solve this issue, researchers have proposed different scenarios. Our guide in this discussion is the primary source “How emoji and word embedding helps to unveil emotional transitions during online messaging.” This source, the first study of its kind, from the University of Virginia, suggests that using emojis and word embedding to model emotional changes during social media interactions is an alternative approach to making text-based chatbot technology more efficient. The study also advocates for extended affective dictionaries that include emojis, which would help chatbots work more efficiently. The study “explores the components of interaction in the context of messaging and provides an approach to model an individual’s emotion using a combination of words and emojis.” According to this study, detecting the user’s emotion during the dialogue session will improve chatbots’ ability to have “more naturalistic communication with humans.”

Moeen Mostafavi and Michael D. Porter, the researchers who conducted this project, believe that tracking a chatbot user’s emotional state during the communication process requires a “dynamic model.” For this model, they consulted Affect Control Theory (ACT) to track the changes in the emotional state of the user after every dialogue turn with the chatbot. Figure 5 demonstrates an interaction between a customer and a chatbot using emojis. This interesting study concludes with an important finding: chatbot design can be enhanced to understand the “affective meaning of emojis.” However, affective dictionaries need to be extended before the researchers’ use of ACT can be applied in new designs for chatbot behavior. The researchers argue that the increasing use of emojis in social media communication today will facilitate adding them to such dictionaries.
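To make the idea of an emoji-extended affective dictionary concrete, here is a minimal Python sketch. The valence scores and the smoothing update are invented for illustration; they are not taken from Mostafavi and Porter’s data or from ACT’s actual equations.

```python
# Hypothetical affective dictionary extended with emojis:
# each entry maps a token to a valence in [-1, 1] (scores invented).
AFFECTIVE_DICT = {
    "great": 0.8, "thanks": 0.6, "broken": -0.7, "angry": -0.8,
    "😊": 0.9, "😡": -0.9, "😢": -0.6,
}

def emotional_valence(message: str) -> float:
    # Average the valence of known words and emojis; 0.0 means neutral
    # or no recognized affective tokens.
    tokens = message.lower().split()
    scores = [AFFECTIVE_DICT[t] for t in tokens if t in AFFECTIVE_DICT]
    return sum(scores) / len(scores) if scores else 0.0

def track_transition(prev_state: float, message: str, alpha: float = 0.5) -> float:
    # Exponential smoothing as a crude stand-in for ACT's dynamic model:
    # blend the previous emotional state with the new message's valence.
    return (1 - alpha) * prev_state + alpha * emotional_valence(message)
```

For example, `emotional_valence("thanks 😊")` averages the word score and the emoji score, and repeated calls to `track_transition` give a running estimate of the user’s emotional state across dialogue turns, which is the kind of signal a chatbot’s dialog management layer could consume.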

As this research paper demonstrates, chatbots’ flexibility and recent technological advances make it feasible for chatbot designers to incorporate the use of emojis in a more intelligent manner. This integration would increase the tool’s ability to understand, analyze, and respond to the emotional changes that the human on the other end of the chat is experiencing. Nevertheless, I suggest that the challenge lies in building a rich foundation for these emojis in affective dictionaries, which will require collaboration at higher levels.

Figure 5: An interaction between a user and a Chatbot using emojis. Source: How emoji and word embedding helps to unveil emotional transitions during online messaging


Taking into account the significant human and financial capital committed to the development of chatbots and other AI-driven conversational user interfaces, it is necessary to understand this complex technology. The chatbot community has so far concentrated on language factors, such as NLP. This paper argues that it is equally important to start investing heavily in social and emotional factors in order to enhance the abilities of text-based AI-driven chatbots. Chatbots have a long way to go before they realize their full potential and pass the Turing test; however, promising improvements have surfaced in the last few years. The goal of this paper was to investigate whether it is possible to harness an already available tool, emojis, to enhance the communicative power of chatbots. A unique, newly published primary source was examined to help answer this question. Understanding the evolutionary history of chatbots, their categories, and their technology was important for deblackboxing this complex technology. This clarity helps us realize that adding emojis to this complex process is not easy, but it is not impossible either, given the additional support provided by three important technologies: AI/ML, Big Data, and cloud computing. Using emojis in chatbots involves modifying the main chatbot architecture by adding a new layer. In addition, traditional dictionaries will need to be extended with emojis to support the process. The primary source provides evidence that implementing this new approach can give chatbots new abilities and make them more intelligent. To conclude, there is still a need for more empirical research on chatbots’ use of emojis as leverage.
The process is not easy, but given the huge investment in and growing need for chatbots in many fields, the outcomes of such research could be groundbreaking and could transform the human experience of chatbots as a tool of support.


Adamopoulou, Eleni, and Lefteris Moussiades. “Chatbots: History, Technology, and Applications.” Machine Learning with Applications 2 (December 15, 2020): 100006.

The British Library. “Albert Mehrabian: Nonverbal Communication Thinker.” The British Library. Accessed May 8, 2021.

“Chatbot.” In Wikipedia, May 6, 2021.

“CHATBOT | Definition of CHATBOT by Oxford Dictionary on Lexico.Com Also Meaning of CHATBOT.” Accessed May 8, 2021.

Fernandes, Anush. “NLP, NLU, NLG and How Chatbots Work.” Medium, November 9, 2018.

Hoy, Matthew B. “Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants.” Medical Reference Services Quarterly 37, no. 1 (January 2, 2018): 81–88.

“Jabberwacky.” In Wikipedia, November 15, 2019. 

Jurafsky, Daniel, and James H. Martin. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 2nd ed. Upper Saddle River, NJ: Prentice Hall, 2008.

Kar, Rohan, and Rishin Haldar. “Applying Chatbots to the Internet of Things: Opportunities and Architectural Elements.” International Journal of Advanced Computer Science and Applications 7, no. 11 (2016): 8.

Mell, Peter, and Tim Grance. “The NIST Definition of Cloud Computing.” National Institute of Standards and Technology, September 28, 2011.

Molnár, György, and Zoltán Szüts. “The Role of Chatbots in Formal Education.” In 2018 IEEE 16th International Symposium on Intelligent Systems and Informatics (SISY), 000197–000202, 2018.

Mostafavi, Moeen, and Michael D. Porter. “How Emoji and Word Embedding Helps to Unveil Emotional Transitions during Online Messaging.” arXiv:2104.11032 [cs, eess], March 23, 2021.

Shah, Huma, Kevin Warwick, Jordi Vallverdú, and Defeng Wu. “Can Machines Talk? Comparison of Eliza with Modern Dialogue Systems.” Computers in Human Behavior 58 (May 1, 2016): 278–95.

Shum, Heung-yeung, Xiao-dong He, and Di Li. “From Eliza to XiaoIce: Challenges and Opportunities with Social Chatbots.” Frontiers of Information Technology & Electronic Engineering 19, no. 1 (January 1, 2018): 10–26.


Synthesis of Learning and AI Ethics- Chirin Dirani

A black box, in science, engineering, and computing, “is a device, system, or object which can be viewed in terms of its inputs and outputs, without any knowledge of its internal workings.” Before the CCTP-607 course, computing systems, AI/ML, Big Data, and cloud computing were a mere black box to me. This course has enabled me to learn the design principles of these technologies, deconstruct the many interdependent layers and levels that compose this sophisticated system, and read about the history of how these technologies were developed. This new knowledge has enabled me to de-blackbox this complex system and understand its main architecture, components, and mechanisms. The de-blackboxing approach changed the way I perceive technology and the way I interact with it; many of my previous ambiguities and assumptions about these technologies have been cleared up. In the previous assignments, we looked into the design principles and architecture of computing systems, including Artificial Intelligence (AI) and Machine Learning (ML), Big Data and data analytics, and cloud computing systems. We also investigated the convergence points that allowed AI/ML applications, cloud systems, and Big Data to emerge together, in a relatively short time, as the leading trends in the world of technology today. With this rapid emergence, a large number of ethical and social concerns have appeared in the last few years. The materials for this class inform us about some of the current issues and promising approaches to solving them.

The documentary Coded Bias highlights an important civil rights problem discovered by a young MIT Media Lab researcher, Joy Buolamwini. She demonstrates the bias within facial recognition programs, specifically against those who do not look like the white men who initially created these technologies. Facial recognition built on biased but powerful AI/ML algorithms can harm and misrepresent people of color, women, and minorities around the world; it can also be used as a tool of state and corporate mass surveillance. In their account, Lipton and Steinhardt highlight some “troubling trends” in the creation and dissemination of knowledge about data-driven algorithms by AI/ML researchers: 1) failure to distinguish between explanation and speculation; 2) failure to identify the sources of empirical gains; 3) the misuse of mathematics in ways that confuse technical and non-technical concepts; and 4) the misuse of language by choosing suggestive terms of art or overloading existing technical terms. Through their article, the authors call for a recurring debate about what constitutes reasonable standards for scholarship in the AI/ML field, as this debate will lead to societal self-correction and justice for all.

With the growing number of issues surrounding the AI/ML community, which the private sector cannot resolve alone, comes the need for a thoughtful governmental approach to regulating this field. The European Union (EU) was a pioneer in imposing a durable privacy and security law concerning the collection of data about people in the EU. The General Data Protection Regulation (GDPR) penalizes anyone who violates its privacy and security standards with fines of tens of millions of euros. According to the EU Ethics Guidelines for Trustworthy AI, AI must be lawful, ethical, and robust. The guidelines also list seven “key requirements that AI systems should meet in order to be considered trustworthy”: AI systems should 1) empower human beings to make informed decisions and nurture fundamental rights, 2) ensure resilience, 3) respect privacy and protect data, 4) be transparent, 5) be diverse, 6) benefit all human beings including future generations, and 7) be responsible and accountable in their outcomes. In the US, by contrast, there are as yet no such regulations to govern the outcomes of AI/ML. Until such regulations exist and cause a shift from measuring algorithm performance alone to evaluating human performance and satisfaction, that is, Human-Centered AI (HCAI), there is a need to learn and understand how this system works. This understanding happens by de-blackboxing and exposing the layers and levels that make the system work the way it does. I have thoroughly enjoyed reading this week about the intersection of technology and ethics. The readings were an eye-opener to the amount of work and research that still needs to take place to ensure that human beings remain in control of technologies rather than vice versa.


1). Ben Shneiderman, “Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-Centered AI Systems.” ACM Transactions on Interactive Intelligent Systems 10, no. 4 (October 16, 2020): 26:1-26:31.

2). Film Documentary, Coded Bias (Dir. Shalini Kantayya, 2020).

3). Professor Irvine article, Introduction to Key Concepts, Background, and Approaches.”

4). The European Commission (EU), Ethics Guidelines for Trustworthy AI .

5). The European Commission (EU) General Data Protection Regulation

6). Will Kenton, “Black Box Model,” Investopedia, Aug 25, 2020, visited April 16, 2021,  

7). Zachary C. Lipton and Jacob Steinhardt, “Troubling Trends in Machine Learning Scholarship,” July 9, 2018.

Big Data, AI/ML and Cloud Computing: The Perfect Match- Chirin Dirani

Undoubtedly, the world is undergoing a technological revolution. This revolution is changing every aspect of modern daily life and is evident in areas such as “finance, transport, housing, food, environment, industry, health, welfare, defense, education, science, and more.” According to the reading for this class, this revolution stems from the perfect match of Big Data, cloud platforms, and AI/ML. In week six, we learned how AI/ML “hungry neural nets” use massive amounts of data for pattern recognition and then make predictions based on already trained patterns to analyze new data. Last week, we dug deep into the definition and architecture of cloud computing and identified the importance of cloud platforms to AI/ML and data systems. This week, I will delve into the world of Big Data by explaining the key concepts of this revolutionary technology and elucidating how Big Data exists because of cloud computing.

Big Data is a relatively young term, first used in the 1990s. As with cloud computing, there is no agreed academic definition. The most common definition of Big Data, mentioned in Rob Kitchin’s book, “refers to handling and analysis of massive datasets” and “makes reference to the 3Vs: volume, velocity and variety.” According to these 3Vs, Big Data is “huge in volume,” “high in velocity,” and “diverse in variety in type.” For Johnson and Denning, the Big Data revolution occurred due to the “convergence of two trends: the expansion of the internet into billions of computing devices, and the digitization of almost everything,” in the sense that the internet provides access to massive amounts of data and digitization makes almost everything digital. There is a strong relationship between Big Data, AI/ML, and cloud computing; without cloud computing, it would be impossible for Big Data to exist. In the real world, the main providers of cloud services supply the infrastructure and services for AI/ML and Big Data to thrive, using convergence to combine the three in one system. Through this system, unstructured Big Data is classified, sorted, and analyzed by the hungry neural net algorithms provided by AI/ML technologies, and the outputs are saved in the cheap memory provided by ubiquitous cloud servers. From this quick analysis we can infer that without AI/ML’s algorithm training, unstructured Big Data cannot be classified and sorted, and that without the infrastructure provided by cloud computing, AI/ML processing of Big Data cannot be implemented.

The readings for this week varied between optimistic and pessimistic in how they view Big data developments: socially, technologically, educationally and application-wise. What really resonates with me is Cathy O’Neil’s chapter, “Civilian Casualties: Justice in the Age of Big Data.” O’Neil’s work puts forward the notion that the incorrect outputs of Big data processed by AI/ML algorithms can lead to inequalities in our societies. O’Neil uses examples from the politics, education and business sectors to validate her argument. My important conclusion from this chapter is that Big data processes “codify the past” but do not invent the future. According to O’Neil, only human moral imagination is able to do so. She advocates for providing neural net algorithms with human moral values so that they can produce ethical Big data. The question here is: are the “big four” providers of Cloud services able to put equality ahead of their profits?


Bernardo A. Huberman, “Big Data and the Attention Economy,” Ubiquity 2017 (December 2017).

Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016).

Jeffrey Johnson, Peter Denning, et al., “Big Data, Digitization, and Social Change (Opening Statement),” Ubiquity 2017 (December 2017).

Rob Kitchin, The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences (London; Thousand Oaks, CA: SAGE Publications, 2014).


Combining AI/ML and Data Systems in the Cloud Architecture- Chirin Dirani

When it comes to cloud computing, most of the readings for this week note that there is uncertainty in the definition of the term. This uncertainty is intentional, as Professor Irvine mentioned in his presentation Introduction: Topics and Key Concepts of the Course. “Cloud” is based on an old engineering metaphor and means “a blackbox of connections in a network.” Our readings for this week indicate that the National Institute of Standards and Technology (NIST) defines cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.” When cloud computing is combined with AI/ML and data systems, the outcome of the three computing trends is not only a gigantic network that is able to learn, improve and store enormous amounts of data, but also a cost-effective and environmentally friendly one. If we think of the way these three trends work in combination, it seems to be a complex mechanism. In this assignment, I will try to deblackbox how AI/ML and data systems are implemented in the cloud architecture by revealing the key design principles and main architecture of the cloud computing system and listing some points of its convergence with AI/ML and data systems.

In his book Cloud Computing, Ruparelia writes that the cloud computing model “promotes availability” and is composed of the following:

  1. Five essential characteristics (ubiquitous access, on-demand availability, pooling of resources, rapid elasticity and measured service usage).
  2. Three service models (Infrastructure as a Service, IaaS; Platform as a Service, PaaS; and Software as a Service, SaaS).
  3. Four deployment models (public cloud, private cloud, community cloud and hybrid cloud).

This structure of characteristics, deployments and services is what makes cloud computing a beneficial network, as it provides agility, elasticity, cost savings and fast global deployment. Cloud computing relies on two basic virtualization technologies: server virtualization and application virtualization. This virtualization enables everything we can do in computing to be virtual and scalable.
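The NIST model outlined above is easy to keep straight as a small data structure. The following is purely an illustrative summary; the names are mine, not any cloud provider’s API:

```python
# An illustrative summary of the NIST cloud computing model described above.
# Names and structure are mine, not any real cloud provider's API.
NIST_CLOUD_MODEL = {
    "essential_characteristics": [
        "ubiquitous access",
        "on-demand availability",
        "pooling of resources",
        "rapid elasticity",
        "measured service usage",
    ],
    "service_models": ["IaaS", "PaaS", "SaaS"],
    "deployment_models": ["public", "private", "community", "hybrid"],
}

# Five characteristics, three service models, four deployment models.
for part, items in NIST_CLOUD_MODEL.items():
    print(part, len(items))
```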

At first glance, AI/ML, data systems and cloud computing appear to work separately, but in fact they are proactively linked to each other. While AI/ML and data systems work together inseparably, the vast amount of rich output data needs the scalability and extensibility offered by cloud computing, in the form of cheap, extensible storage. On the other hand, blending AI/ML solutions as a service with cloud computing improves existing cloud solutions and takes them to another level of efficiency. This unique combination of the three computing trends encourages organizations of every type, size, and industry to shift to cloud computing for a wide variety of use cases. This is due to the fact that this combination “offers huge advantages in terms of installation, configuration, updating, compatibility, costs and computational power.” The best example I can think of to demonstrate the convergence between AI/ML, data systems and cloud computing is Amazon Web Services (AWS), currently the leading platform in the world (according to the AWS website). AWS’s ML service provides the broadest and deepest set of machine learning services in one cloud platform. AWS enables data scientists and developers to “create faster solutions and add intelligence to applications without needing ML expertise,” as the platform facilitates using pre-trained data through AI services in many applications, such as “creating more intelligent contact centers, improving demand forecasting, detecting fraud, personalizing consumer experience and more.” The following diagram illustrates how AWS’s machine learning is used to build, train, and deploy models faster with less effort and at a lower cost.

With the increasing number of cloud computing platform users in the last few years, a number of risks have surfaced. These risks hold some users back from adopting cloud computing services. They include, but are not limited to, ambiguity about what the cloud is, concerns over its maturity to meet an organization’s needs, security issues caused by lack of direct control over systems and data, whether corporate policies permit moving to the cloud, and flexibility in choosing a suitable provider. Given the fact that there are “big four” cloud computing providers, the question raised by Professor Irvine here is: what are the consequences of merging these big four into one provider? An answer to such an important question requires a separate study. However, I can say that there would be a maximization of both the advantages and disadvantages of cloud computing. The four bodies would grow into one incredible network able to gain access to massive economies of scale. On the other hand, this gigantic network would be monopolized by a single provider controlling the access of millions, if not billions, of global users to its services.


Amazon Web Services (AWS) [main site]: browse services and Machine Learning products.

AWS Machine Learning.

Boudewijn de Bruin and Luciano Floridi, “The Ethics of Cloud Computing,” Science and Engineering Ethics 23, no. 1 (February 1, 2017).

Derrick Roundtree and Ileana Castrillo. The Basics of Cloud Computing: Understanding the Fundamentals of Cloud Computing in Theory and Practice.

Nayan B. Ruparelia, Cloud Computing (Cambridge, MA: MIT Press, 2016).

Professor Martin Irvine, Introduction: Topics and Key Concepts of the Course.


AI/ML Critique- Chirin Dirani

In the last five decades, the major human-made threats that were widely discussed were the population explosion, environmental pollution, and the threat of nuclear war. Today, due to the revolutionary development of computer systems in general, and machine learning (ML) as part of artificial intelligence (AI) in specific, a new human-made threat has emerged. This danger stems from the training process of general-purpose AI algorithms (Strong AI). These algorithms pick up large amounts of information from stored data and have the ability to learn faster than humans (reinforcement learning). The outcomes of these algorithms are “invisible technologies that affect privacy, security, surveillance, and human agency.” When it comes to the future of AI/ML, there is an ongoing debate about the impact of this system on humanity. Some people lean towards the benefits that will come from AI/ML applications in areas such as education, health care and human development. Others focus on the risks and destructive power of AI/ML. In this critique, I will discuss some pressing ethical issues surrounding AI/ML today. I will do this for two main reasons. First, to highlight these concerns and think of their remedies before they become alarming issues. Second, as a humble effort to participate in democratizing and decentralizing knowledge about AI/ML, reinforcing human agency so that we move from being impacted by AI to shaping how it should be used for our good.

Nowadays, AI/ML influences many aspects of our lives and many of the decisions we make. Our world has become more dependent on AI/ML technologies. For example, these technologies control our smart communication devices, home devices, televisions, businesses and even governmental entities. The more influential AI/ML becomes, the more effective ethical, social and governmental intervention should be. While reading the rich materials for this class, two main concerns about AI/ML stood out for me: bias and lack of oversight. In the following paragraphs, I will shed light on these concerns in some detail.


According to AI: Training Data & Bias, the better the data we input into machine learning training, the higher-quality outputs we get. Similarly, the more biased the input data is, the more biased the trained outputs are. To demonstrate this point briefly: computer systems collect training data from many sources, then deep learning algorithms train on these data by recognizing patterns using large numbers of filters (deep learning). In every single action we take while using AI/ML technologies, we as humans provide an endless amount of training data every day to help machine learning predict. The problem lies in the kind of collected data we feed in and the filters used to train on these data. If these data inputs are biased, the system’s predictions and outputs will inherit these biases and prioritize or disfavor some things over others. What is even worse is when those who feed in these data are not aware of their biases: the system learns from biased data and saves it as a source to be used in future predictions, and here lies the biggest problem. My question is: how can we control these inherent biases in the data, and how can we correct them during the training process or detect and extract them from the system? I would like to share a personal experience on this matter as an example. The alphabet of my language, Arabic, doesn’t contain the letter P. Although I have trained myself to pronounce this letter as accurately as possible, every time I speak to a chatbot and try to spell out a word that contains the letter P, the machine asks me over and over if I meant the letter B.

Lack of oversight

As mentioned in the Ethics & AI: Privacy & the Future of Work video, there is a huge gap between “those that are involved in creating the computing systems, and those that are impacted by these systems.” What matters here is what society (creators and impacted alike) wants to shape, and how technology should be used to achieve that target. The answer to this question may not be easy, but logically, I can suggest giving more agency to impacted people by giving them, and their representatives among community groups and policymakers, the opportunity to get more involved in evaluating and auditing decisions made by the creators of AI/ML technologies. By getting involved, users impacted by these technologies become more knowledgeable about the process and can make sure that innovations are ethical, inclusive and useful to everyone in society. In other words, and similar to organizational social responsibility departments, I would advocate replacing vague, hard-to-implement Responsible AI plans with regulation legislating Responsible AI departments in every organization active in the field of AI/ML, whether it is striving for profit or power.


  1. AI: Training Data & Bias
  2. Ethics & AI: Equal Access and Algorithmic Bias
  3. Janna Anderson, “Artificial Intelligence and the Future of Humans,” PEW Research Center, 10 December 2018, visited 20 March 2021.
  4. Karen Hao, “In 2020, Let’s Stop AI Ethics-washing and Actually Do Something,” MIT Technology Review, 27 December 2019, visited 20 March 2021.
  5. “Responsible AI,” Google, visited 20 March 2021.


SIRI: Awesome But not Intelligent- Chirin Dirani

In general, virtual assistants (VAs) are software agents that can perform tasks or services for an individual based on commands and questions. These commands and questions are received by the VA through text, voice (speech recognition) or images. VA usage has increased dramatically in the last three years, and many products, using specifically email and voice interfaces, have entered the market. Apple and Google have installed bases of users on their smartphones; Microsoft’s installed base spans Windows-based personal computers, smartphones and smart speakers; Amazon’s installed base is smart speakers only; and Conversica’s engagements are based on email and SMS. In this assignment, I will focus on one of the speech recognition “virtual assistant” services by Apple, branded as Siri. By analyzing how Siri works, I will try to explain how NLP can help in converting human commands into actionable tasks by machines.

Siri is a speech-activated virtual assistant software that can interpret human speech and respond via a synthesized voice. “The assistant uses voice queries, gesture based control, focus-tracking and a natural-language user interface to answer questions, make recommendations, and perform actions.” Siri does this by delegating requests to a set of internet services. The software adapts to users’ individual language usage, searches, and preferences with continued use. Similar to other speech-activated virtual assistants, Siri uses speech recognition and natural language processing (NLP) to receive, process and answer questions or implement demands. In what follows, I will try to analyze how this system works.

As mentioned before, NLP and speech recognition are the foundations of virtual assistant design. Four main tasks make this system process voice inputs into voice outputs. The process starts with converting a voice input (question or command) into text, then interpreting the text, then making a decision, and finally converting the response text back into speech. This cycle repeats for as long as the user continues asking or commanding the system. In more technical terms, the virtual assistant (Siri) receives the user’s voice input using a microphone. Speech recognition then uses NLP to encode the voice input and convert it into recognizable computer data. Linking speech recognition to complex NLP helps the software figure out what the user says, what they mean, and what they want to happen. The software connects with a third party to make a decision and implements the user’s command (takes action) or answers the user’s question by decoding the answer into recognizable computer data, which is then sent out as a speech sound output through Siri’s speaker. The following diagram illustrates the many levels that Siri’s complex system consists of.
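The four-stage cycle described above can be sketched as a simple loop. Everything here (the function names, the rule table, the canned responses) is hypothetical and of my own invention; Siri’s real pipeline delegates to large server-side models and internet services:

```python
# A toy sketch of the speech-recognition / NLP cycle described above.
# All function names and the rule table are hypothetical, not Apple's
# actual architecture.

def speech_to_text(audio: str) -> str:
    # Stand-in for the speech-recognition layer: here the "audio"
    # is already a transcript, so we just normalize it.
    return audio.lower().strip()

def interpret(text: str) -> str:
    # Stand-in for NLP intent detection: map a phrase to an intent.
    if "weather" in text:
        return "get_weather"
    if "timer" in text:
        return "set_timer"
    return "unknown"

def decide(intent: str) -> str:
    # Stand-in for the decision layer that would call a third-party service.
    responses = {
        "get_weather": "It is sunny today.",
        "set_timer": "Timer set.",
        "unknown": "Sorry, I didn't get that.",
    }
    return responses[intent]

def text_to_speech(text: str) -> str:
    # Stand-in for the speech-synthesis layer.
    return f"[spoken] {text}"

def assistant(audio: str) -> str:
    # The full cycle: voice in -> text -> intent -> decision -> voice out.
    return text_to_speech(decide(interpret(speech_to_text(audio))))

print(assistant("What's the weather like?"))  # -> [spoken] It is sunny today.
```

The point of the sketch is only the shape of the pipeline: each layer’s output is the next layer’s input, and the cycle repeats for every utterance.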

Using virtual assistants already has, and will have in the future, many useful applications, especially when it comes to medical applications and assisting physically challenged individuals. However, the psychological effects derived from the emotional bonds that users could form with future generations of Siri and similar VAs are alarming. Watching the controversial movie Her and reading about Gatebox made me think deeply about the future social and psychological impact of virtual assistants on the human race. Raising awareness about the design principles of VAs will definitely mitigate the illusions and hype created by companies’ marketing campaigns for their VAs. Revealing the layers of this innovative system validates what Boris Katz said: “current AI techniques aren’t enough to make Siri or Alexa truly smart.”



AI/ML/NLP application: Google Translator- Chirin Dirani

Reading about Natural Language Processing (NLP) for this class reminds us of the test proposed by the famous Alan Turing in 1950. He proposed that “the test is successfully completed if a person dialoguing (through a screen) with a computer is unable to say whether her discussion partner is a computer or a human being.” However, having linguistics involved in the NLP field makes achieving this goal a real challenge. I truly believe that NLP will succeed in formalizing mechanisms of understanding and reasoning when it develops into an intelligent program: one that can understand what the discussion partner says and, most importantly, draw conclusions from what has been said in a way that keeps the conversation going. In other words, this will happen when we can’t tell whether a machine or a human is answering our questions in chatbots, and when a translation program is able to produce the most accurate translation from one language to another without any mistakes. For this assignment, I will not argue whether Turing’s proposal is achievable or not; rather, I will use the knowledge we obtained from the rich materials to describe the main design principles behind Google Translate as one of the AI/ML/NLP applications.

My interest in Google Translate stems from the fact that it is a significant tool I employ in my professional English-Arabic translation work. I witness its rapid and continuous development as I use this fascinating tool every day. Thanks to the readings of this class, I now understand how this system functions and how the neural system developed from translating a sentence piece by piece into translating whole sentences at a time. Thierry Poibeau claims that “machine translation involves different processes that make it challenging.” In fact, incorporating grammar in the translator’s logic to create meaningful text is what makes translation very challenging. However, “the availability of huge quantities of text on the Internet, discovering Recurrent Neural Networks (RNN), and the development of the capacity of computers have revolutionized the domain of machine translation systems.” According to Poibeau, using deep learning approaches since the mid-2010s has given the field more advanced results, as deep learning makes it possible to envision systems where very few elements are specified manually, helping the system extrapolate the best representation from the data by itself.

The Google translation process is well clarified in the video How Google Translate Works: The Machine Learning Algorithm Explained. It informs us that language is very diverse and complex, and for that reason, using neural networks (NNs) has proved useful in solving the problem of language translation. As we read last week, “neural networks learn to solve problems by looking at large amounts of examples to automatically extract the most relevant features.” This allows neural networks to learn patterns in data, which enables them to translate a sentence from one language to another on their own. In this context (sentence translation), the neural networks are Recurrent Neural Networks, because they deal with longer sentences; they are basically long short-term memory (LSTM) networks. To activate these networks, an encoder-decoder architecture is needed, where the first RNN encodes/converts the source-language sentence into recognizable computer data (vectors and matrices) and the second RNN decodes the computer data (vectors and matrices) into the target-language sentence. Running different types of information at the same time, using deep learning, allows better decision making. The whole translation process involves more complex and abstract processes; however, the encoder-decoder architecture principle is the essence of any machine translation system.
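The encoder-decoder principle just described can be shown in miniature. The following is a toy, untrained sketch of my own: the encoder RNN folds a source sentence into a single vector, and the decoder RNN unfolds that vector into target-language tokens. The vocabulary, dimensions, and random weights are all illustrative; a real system learns its weights from huge parallel corpora.

```python
import numpy as np

# A toy, untrained sketch of the encoder-decoder idea described above.
# Vocabulary, dimensions, and weights are all illustrative.

rng = np.random.default_rng(0)
H = 8                                   # hidden-state size
src_vocab = {"je": 0, "t'aime": 1}
tgt_vocab = ["i", "love", "you", "<eos>"]

E = rng.standard_normal((len(src_vocab), H))      # source embeddings
W_enc = rng.standard_normal((H, H)) * 0.1         # encoder recurrence
W_dec = rng.standard_normal((H, H)) * 0.1         # decoder recurrence
W_out = rng.standard_normal((H, len(tgt_vocab)))  # hidden -> target scores

def encode(tokens):
    h = np.zeros(H)
    for tok in tokens:                  # one RNN step per source word
        h = np.tanh(E[src_vocab[tok]] + W_enc @ h)
    return h                            # the whole sentence as one vector

def decode(h, max_len=5):
    out = []
    for _ in range(max_len):            # one RNN step per target word
        h = np.tanh(W_dec @ h)
        word = tgt_vocab[int(np.argmax(h @ W_out))]
        out.append(word)
        if word == "<eos>":
            break
    return out

print(decode(encode(["je", "t'aime"])))  # untrained, so the output is arbitrary
```

Because the weights are random, the “translation” is gibberish; the sketch only shows the data flow: sentence in, one vector in the middle, tokens out.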

Moving forward, every time I use Google Translate, I will remember that without the revolutionary development in the NLP field (the discovery of RNNs), this tool would not have been available. In conclusion: yes, machine translation systems have created useful tools by all means. However, the current encoder-decoder architecture is efficient for medium-length sentences but not long ones. This fact leaves translation systems still far from being able to give the most accurate translation of everyday texts.



  1. What is the main difference between ConvNet and RNN?
  2. Poibeau said that “through deep learning it is possible to infer structure from the data and for that, it is better to let the system determine on its own the best representation for a given sentence.” How can we trust that the system will make the right decisions?
  3. GPT-2 is a double-edged sword: it can be used for good and for malicious causes. Does it make sense to say that the ability of this unsupervised model’s algorithms to evaluate their own accuracy on training data is the key to GPT-2 becoming open source?


Found in translation: More accurate, fluent sentences in Google Translate (2016) 

How Google Translate Works: The Machine Learning Algorithm Explained (Code Emporium).

Thierry Poibeau, Machine Translation (Cambridge, MA: MIT Press, 2017).

Pattern Recognition: The Foundations of AI/ML- Chirin Dirani

The readings and videos for this week add another level of understanding to the foundations of AI and ML. Learning about pattern recognition is another step toward deblackboxing computing systems and AI. According to Geoff Dougherty, pattern recognition is when we put many samples of an image into a program for analysis; this program should recognize a pattern specific to the input image and identify the pattern as a member of a category or class the program already knows. Because there are many categories or classes, we have to classify a particular image into a certain class, and this is what we call classification. The recognition process happens by training convolutional neural network algorithms (ConvNets) to help the program recognize the pattern. These ConvNets can be applied to many image recognition problems, like recognizing handwritten text, spotting tumors in CT scans, monitoring traffic on roads and much more. Dougherty emphasizes that pattern recognition “is used to include all objects that we might want to classify.” The materials for this class provide many case studies of the applications of pattern recognition through ConvNets. I will start with Andrej Karpathy’s piece on how to take the best selfie, then elaborate on digital image analysis as I understood it from the Crash Course episode on computer vision, and end with the Crash Course video on using pattern recognition in Python code to read our handwriting.

The first case is the interesting article by Andrej Karpathy, who tried to find what makes a perfect selfie by using convolutional neural networks (ConvNets). For Karpathy, ConvNets “recognize things, places and people in personal photos, signs, people and lights in self-driving cars, crops, forests and traffic in aerial imagery, various anomalies in medical images and all kinds of other useful things.” Karpathy introduced the basics of how convolutional neural networks work and focused on his applied techniques of using pattern recognition in digital image analysis. By training ConvNets, the program was able to recognize the best 100 selfies. Despite the fact that this case is an ideal case study for pattern recognition using ConvNets, I finished the article with more questions than the ones I had when I started it. The fact that we can feed ConvNets images and labels of whatever we like made me convinced that these ConvNets will learn to recognize the labels that we want. This fact pushed me to question how objective or subjective these ConvNets are. In other words, would the outputs change according to the gender, race, orientation and motives of the human feeding in the inputs?

To bridge the gap, missing in Karpathy’s article, in understanding convolutional neural network algorithms and how they work in decision making and pattern recognition (facial recognition here), I relied on the Crash Course episode on ML/AI. According to this video, the ultimate objective of ML is to use computers to make decisions about data. The decision is made by using algorithms that give computers the ability to learn from data and then make decisions. To start with, the decision process is called classification, and the algorithm that does it is called a classifier. To train machine learning classifiers to make good predictions, we need training data. Machine learning algorithms separate the labeled data by decision boundaries; at this stage, ML algorithms work on maximizing correct classifications and minimizing wrong ones. A decision tree is one example of an ML technique, and it represents dividing the decision space into boxes. The ML algorithm that produces a decision can depend on statistics for making confident decisions or can have no origins in statistics at all. One technique of the latter kind is the artificial neural network, inspired by the neurons in our brains. Similar to brain neurons, artificial neurons receive inputs from other cells, process those signals and then release their own signal to other cells. These cells form huge interconnected networks able to process complex information. Rather than chemical and electrical signals, artificial neurons take numbers as input and release numbers as output. They are organized into layers connected by links, forming a network of neurons. There are three levels of layers: the input layer, hidden layer/s, and output layer. There can be many hidden layers, and this is where the term deep learning comes from. There are two kinds of algorithms. The first kind is sophisticated but not intelligent algorithms (weak or narrow AI), because they do one thing and are intelligent only at specific tasks, such as finding faces or translating texts.
The second kind is general-purpose AI algorithms (Strong AI). These algorithms pick up large amounts of information and learn faster than humans (reinforcement learning).
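The classification idea described above (labeled training data, a decision boundary, maximizing correct classifications) can be shown with the simplest possible artificial neuron, a perceptron. This sketch is mine, not code from the Crash Course episode; the data points and learning rule are illustrative:

```python
import numpy as np

# A minimal classifier: a single artificial neuron (perceptron) learning
# a decision boundary from labeled training data. Points below the line
# y = x are class 0, points above it are class 1.
X = np.array([[2.0, 1.0], [3.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0, 0, 1, 1])

w = np.zeros(2)   # one weight per input, like a neuron's input links
b = 0.0           # bias

# Perceptron learning rule: nudge the boundary after every mistake.
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi
        b += (yi - pred)

predict = lambda p: 1 if np.array(p) @ w + b > 0 else 0
print(predict([0.5, 4.0]), predict([4.0, 0.5]))  # -> 1 0
```

After training, the learned weights define the decision boundary, and new points are classified by which side of the boundary they fall on; a deep network does the same thing with many neurons and many layers.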

As for the second case, the image analysis process: we feed an image as input into a program; once a face in the image is isolated, more specialized computer vision algorithm layers can be applied to pinpoint facial landmarks. Emotion recognition algorithms can also interpret emotion and give computers the ability to understand when a face is happy, sad or maybe frustrated. Facial landmarks capture the geometry of the face, like the distance between the eyes, or nose and lip size. Just as levels of abstraction are used in building complicated computing systems, they are used for facial recognition: cameras (the hardware level) provide improved sight, then camera data is used to train algorithms that crunch pixels to recognize a face, and the outputs from those algorithms are processed to interpret facial expressions.

The last case is the Crash Course video on programming a neural network to recognize handwritten letters and convert them into typed text. In this case, a language called Python is used to write the code. The issue here is what Ethem Alpaydin called “the additional problem of segmentation”: how to write code that figures out where one letter ends and another begins. In this case, the neural network is programmed to recognize a pattern instead of memorizing a specific shape. To do so, the following steps should be implemented:

  1. Create a labeled dataset to train the neural network by splitting data into training sets and testing sets. 
  2. Create a neural network. AI should be configured with an input layer, some number of hidden layers and the ability to output a number corresponding to its letters prediction. 
  3. Train, test and tweak the code until it’s accurate enough.
  4. Scan handwritten pages and use the newly trained neural network to convert them into typed text.
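The four steps above can be sketched end to end at toy scale. This is my own miniature version, not the Crash Course code: the “letters” are 2x2 pixel patterns standing in for scanned handwriting, and all the sizes and learning settings are illustrative.

```python
import numpy as np

# A toy version of the four steps above. The "letters" are 2x2 pixel
# patterns standing in for scanned handwriting.

rng = np.random.default_rng(1)

# Step 1: a labeled dataset -- noisy copies of two prototype "letters",
# split into a training set and a testing set.
protos = np.array([[1., 0., 0., 1.],    # pretend this is one letter
                   [0., 1., 1., 0.]])   # pretend this is another
labels = np.tile([0, 1], 20)
X = protos[labels] + rng.normal(0, 0.1, (40, 4))
X_train, y_train = X[:30], labels[:30]
X_test, y_test = X[30:], labels[30:]

# Step 2: a network with an input layer (4 pixels), one hidden layer
# (5 units), and an output layer (1 unit giving the letter prediction).
W1, b1 = rng.normal(0, 0.5, (4, 5)), np.zeros(5)
W2, b2 = rng.normal(0, 0.5, (5, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Step 3: train and tweak -- plain batch gradient descent on squared error.
n, lr = len(X_train), 1.0
for _ in range(2000):
    h = sigmoid(X_train @ W1 + b1)
    out = sigmoid(h @ W2 + b2).ravel()
    d_out = (out - y_train) * out * (1 - out)        # output-layer error
    d_h = np.outer(d_out, W2.ravel()) * h * (1 - h)  # hidden-layer error
    W2 -= lr * (h.T @ d_out[:, None]) / n
    b2 -= lr * d_out.mean()
    W1 -= lr * (X_train.T @ d_h) / n
    b1 -= lr * d_h.mean(axis=0)

# Step 4: use the trained network on unseen "scans".
preds = (sigmoid(sigmoid(X_test @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
accuracy = (preds == y_test).mean()
print("test accuracy:", accuracy)
```

The same skeleton scales up: real handwriting recognition swaps the 4-pixel patterns for full letter images, adds more hidden layers, and adds the segmentation step that finds where each letter begins and ends.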

In conclusion, and to re-emphasize what we said in previous classes, computer systems, AI and ML are useful but cannot be intelligent like humans. It is all about understanding computing design layers. By understanding the process of pattern recognition today, we reveal another level in this system, and as Professor Irvine says, “There is no magic, no mysteries — only human design for complex systems.”


Crash Course Computer Science, no. 34: Machine Learning & Artificial Intelligence

Crash Course Computer Science, no. 35: Computer Vision

Crash Course AI, no. 5: Training an AI to Read Your Handwriting

Ethem Alpaydin, Machine Learning: The New AI. Cambridge, MA: The MIT Press, 2016.

Geoff Dougherty, Pattern Recognition and Classification: An Introduction (New York: Springer, 2012).

Professor Irvine, Introduction to Computing Design Principles & AI/ML Design.


Digital Data: Encoding Digital Text Data- Chirin Dirani

It has always fascinated me to observe my Japanese colleagues writing their monthly reports in their language and using the same computer to send us emails in English. In fact, the first question I had when we started deblackboxing the computing system in this course was how I use the same device, and the same system, to send English texts to my English-speaking friends and Arabic texts to Arabic-speaking friends. In the third week, we were introduced to the binary model used by the computing system “in its logic circuits and data.” It was easy to understand the representation of decimal numbers in a binary system, but not letters or whole texts. The readings for this week deblackbox another layer in the computing system and differentiate between two methods of digital encoding: digital text data and digital image data. For this week’s assignment, I will try to reflect my understanding of how digital text data (for Natural Language Processing) is encoded so it can be interpreted by any software.

Before diving into the digital text data encoding, I will start by defining data. Professor Irvine’s reading for this week defines “data as something with humanly imposed structure, that is, an interpretable unit of some kind understood as an instance of a general type.” By interpretable, we mean that data is something that can be named, classified, sorted, and be given logical predicates or labels. It is also important to mention that without representation (computable structures representing types) there is no data. In this context, computable structures mean “byte sequences capable of being assigned to digital memory and interpreted by whatever software layer or process corresponds to the type of representation,” text characters in our case.

The story of digital text data encoding starts with the ASCII (American Standard Code for Information Interchange) table. Bob Bemer developed the ASCII coding model to standardize the way computing systems represent letters, numbers, punctuation marks and some control codes. In Table A below, you can see that every modern English letter (lowercase and capital), punctuation mark and control code has its equivalent in the binary system. The seven-bit binary system used by ASCII represented only 128 English letters, digits, and symbols. While the bit patterns of the 95 printable ASCII characters are sufficient to exchange information in modern English, most other languages need additional symbols that are not covered by ASCII.

Table A
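The mapping in Table A can be checked directly in Python; a quick sketch (the character choices are arbitrary):

```python
# Each ASCII character corresponds to a code point and a 7-bit pattern.
for ch in ["A", "a", "?"]:
    code = ord(ch)                     # the character's ASCII code
    print(ch, code, format(code, "07b"))

# For example, 'A' is code 65, or 1000001 in seven-bit binary.
```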

Extended ASCII sought to remedy this problem by utilizing the eighth bit in an 8-bit byte to allow positions for another 128 printable characters. Early encodings were limited to 7 bits because of restrictions in some data transmission protocols, and partially for historical reasons. At this stage, extended ASCII was able to represent 256 characters, as you can see in Table B. However, as we read in Yajing Hu’s final project essay, Han characters, other Asian language families and many more international characters were needed than could fit in a single 8-bit character encoding. For that reason, Unicode was developed to solve this problem.

Table B
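The ASCII and extended-ASCII limits described above can be sketched with Python's built-in encoders. This is only an illustration; I use ISO 8859-1 (Latin-1) as one example of an 8-bit "extended ASCII" code page, since many such code pages exist.

```python
# 7-bit ASCII covers code points 0 through 127.
print(ord("A"))                 # 65
print(format(ord("A"), "07b"))  # 1000001 (the 7-bit pattern for "A")
print("A".encode("ascii"))      # b'A'

# A character outside ASCII's 128 code points cannot be encoded.
try:
    "é".encode("ascii")
except UnicodeEncodeError:
    print("é is not in ASCII")

# An 8-bit code page such as ISO 8859-1 (Latin-1) uses the eighth bit
# to add 128 more characters (code points 128 through 255).
print("é".encode("latin-1"))    # b'\xe9' -> code point 233, above 127
```

Running the failed `ascii` encoding next to the successful `latin-1` one shows concretely why a single extra bit doubled the available characters, and why even 256 positions fall far short of the world's writing systems.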

Unicode is an information technology standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. It is intended to address the need for a workable, reliable world text encoding. Unicode was originally described as "wide-body ASCII" stretched to 16 bits to encompass the characters of all the world's living languages (it has since grown beyond 16 bits). Depending on the encoding form we choose (UTF-8, UTF-16, or UTF-32), each character is represented either as a sequence of one to four 8-bit bytes, one or two 16-bit code units, or a single 32-bit code unit. UTF-8 is the most common form on the web; UTF-16 is used by Java and Windows; UTF-8 and UTF-32 are used by Linux and various Unix systems. The conversions between all UTFs are algorithmic, fast, and lossless. This makes it easy to support data input or output in multiple formats while using a particular UTF for internal storage or processing.
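A short sketch can make the three encoding forms concrete. Here I use the Han character 字 (code point U+5B57) as an arbitrary example, and the little-endian UTF variants so the byte counts are not inflated by a byte-order mark:

```python
ch = "字"  # Han character, Unicode code point U+5B57

print(hex(ord(ch)))                 # 0x5b57 -- the abstract code point

# The same code point, in the three Unicode encoding forms:
print(len(ch.encode("utf-8")))      # 3 bytes (a sequence of 8-bit bytes)
print(len(ch.encode("utf-16-le")))  # 2 bytes (one 16-bit code unit)
print(len(ch.encode("utf-32-le")))  # 4 bytes (one 32-bit code unit)

# Conversion between UTFs is lossless: a round trip through any of
# them recovers the identical text.
assert ch.encode("utf-8").decode("utf-8") == ch
assert ch.encode("utf-16-le").decode("utf-16-le") == ch
assert ch.encode("utf-32-le").decode("utf-32-le") == ch
```

The assertions at the end illustrate the claim above that the conversions are algorithmic and lossless: the code point is the stable unit, and each UTF is just a different byte-level representation of it.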

The important question now is how software can interpret and format text characters of any language. To answer it, I will go back to Professor Irvine's definition of data as "something with humanly imposed structure, that is, an interpretable unit of some kind understood as an instance of a general type." My takeaway is that the only way for software to process text data is to represent characters in bytecode definitions. These bytecode definitions work independently of any particular piece of software designed around them. In conclusion, Unicode uses a binary system (bytecode characters) designed to be interpreted as a data type for creating instances of characters as inputs and outputs in any software.


  1. Peter J. Denning and Craig H. Martell, Great Principles of Computing (Cambridge, MA: The MIT Press, 2015), 35.
  2. Martin Irvine, "Introduction to Data Concepts and Database Systems."
  3. ASCII Table and Description.
  4. ASCII.
  5. Han Ideographs in the Unicode Standard (CCT).
  6. ISO/IEC 8859.


Kelleher reading: The constraints in data projects relate to which attributes to gather and which attributes are most relevant to the problem we are solving. Who decides which data attributes to choose? Can we apply the principle of levels to data attributes?



Information, Data and Meaning - Chirin Dirani

Although the E-information transmission model is very important for understanding how a key layer in semiotic systems functions, it cannot be used as a general model for communication and meaning. According to Professor Irvine's piece, Introducing Information Theory, "the signal transmission theory is constrained by a “signal-unit, point-to-point model,” with the “conduit” and “container” metaphors." This transmitted-signal code model is not a description of meaning, because the technical side of the information model was never designed to interpret meanings as the human brain does. As a result, the content that passes through the conduit when E-information is transmitted does not itself carry meaning. Another important feature of E-information is that it needs a symbolic medium; the understanding of the frameworks within which meanings arise is what cognitive science research calls "meta-symbolic" knowledge.

Now that we know meaningful communication and representation require a medium, we arrive at an essential layer in semiotic systems: the "E-information theory." In electronic systems, information uses the "physics of electricity" as the medium required to "impose regular interpretable patterns." Information in this context is designed as "units of structure-preserving structures." As Professor Irvine emphasizes, these electrical signal patterns are designed to be "communicable" both internally, through the components of the physical system, and externally, as meaningful symbols for humans. The physics of electricity as a medium is what makes the information theory model essential for everything electronic and digital.

Finally, it is important to mention that the information theory model is insufficient as a model for the meanings, uses, and purposes of our sign and symbol systems because, as Professor Irvine describes, "meanings" do not have the properties of a physical medium in the communication process. Meanings, as generators of symbols, are presupposed and taken for granted; this is what brackets meaning off from the problem Shannon's system addresses.

Case Study: 

In the mid-1990s, the only means of shopping in Syria were a few physical shops and a famous German mail-order catalogue; the minimum time to receive ordered items from Germany was two months. Paradoxically, last Christmas I ordered a Sony headset through Amazon at 10 am and received it at 3 pm the same day. I cannot think of a better example than online shopping as a case study for this class. Using my iPhone with its Apple iOS system, I logged into the Amazon app and chose the desired Sony headset. When I clicked "submit order," my iPhone, as a transmitter, used the physics of electricity as a medium to impose regular interpretable patterns via electrical signals. These signals were communicated through the components of the physical system (the internet). Amazon, as a receiver, received the transmitted signal and interpreted it into meaningful symbols. My order was processed within Amazon's system in a few hours. Amazon, now acting as a transmitter, sent other signals that were interpreted into meaningful symbols by the shipping company, as a receiver, which delivered the ordered item. This case made me pause and think about the importance of computer systems as a solution and a means of protection for human beings during COVID.


  1. What is the difference between a symbol and a token? Is the token an encoded symbol?
  2. Information theory is an engineering solution to a semiotic problem — it is a model for “designer electronics.” Why designer electronics not designed electronics?
  3. Can you explain the concept of “electronic systems are designed as units of structure-preserving structure” in more details?


Martin Irvine, Introducing Information Theory (2021).

Decipher the Enigma Behind the Computer System - Chirin Dirani ( ー・ー・ ・・・・ ・・ ・ー・ ・・ ー・)

Training on Samuel Morse's electrical telegraph code was a prerequisite for completing my tenth-grade mandatory summer camp back in Syria. I did not know then that this method of transforming "patterns of electrical pulses into written symbols" had inspired scientists to create modern computers. The concept behind the Morse system served as the basis for transforming computers from digital binary machines into symbol processors. The system's maturing process witnessed many leaps, which transformed it from a number-crunching tool into a symbol-manipulating process. With time, six principles were identified that produce computation in this seemingly complex system. Understanding the bottom-up design approach provided by these main principles will help us better understand the system and decipher its codes.
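The idea of mapping symbols onto patterns of pulses can be sketched in a few lines of Python. This is only an illustration of the principle; the table below is a small excerpt of the Morse alphabet (enough to spell the letters of my name), not the full code.

```python
# Partial Morse table: each letter maps to a pattern of short (.) and
# long (-) pulses -- a humanly imposed pattern on a physical signal.
MORSE = {
    "C": "-.-.", "H": "....", "I": "..",
    "N": "-.",   "R": ".-.",  "S": "...",
}

def to_morse(text):
    """Encode a string using the (partial) Morse table above."""
    return " ".join(MORSE[letter] for letter in text.upper())

print(to_morse("chirin"))  # -.-. .... .. .-. .. -.
```

The dictionary is the whole trick: once an arbitrary convention assigns a pulse pattern to each written symbol, any device that can register two signal states can carry written language, which is the same move binary computers make at vastly greater scale.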

In his video, Professor Irvine explained thoroughly how the binary system, which has only two positions, was used to transform digital binary computers into symbol processors. The system uses binary electronics and logic, in addition to base-2 math, for encoding and processing computations. It uses electronics because they provide the fastest physical structures for registering and transmitting signals that we can encode, and it uses electricity to impose a pattern on a type of natural energy. Imposing this pattern, accompanied by assigning human symbolic meanings and values to physical units, creates a unified subsystem to build on. We can add different layers on top of this subsystem to transform inputs into outputs for any technology. This process helps us understand computation and the components of the computer system.

In their book Great Principles of Computing, Denning and Martell introduce six principles of computing: communication, computation, coordination, recollection, evaluation, and design. The authors emphasize that these principles are tools used by practitioners in many key domains and are the fundamental laws that both empower and constrain technologies. Today's computing technologies use combinations of principles from all six categories. It is true that each category carries a different weight in a given technology, but a combination of the six exists in any technology we examine today. The bottom-up approach stems from the fact that these principles work as the basis (bottom) that supports technologies' domains (up). Knowing that computing as a whole depends on these principles is very intriguing; it opens the door to questioning and investigating how they work and interact to enable new discoveries.

Understanding the subsystems and layers that compose computers, and that computing principles underpin any technology we use today, helped me make sense of this complex system. However, as a novice in the field of technology, grasping how these principles support other domains will be one of my learning objectives in this class.


Denning, Peter J., and Craig H. Martell. Great Principles of Computing. Cambridge, MA: The MIT Press, 2015.

Irvine, Martin. Irvine 505 Keywords: Computation, 2020.


AI: Between Hope and Mystery - Chirin Dirani

The readings for this week explain how the concept of simulating human intelligence has evolved throughout history. These concepts were accompanied by serious efforts in the field of computing systems that led to the emergence of a new science: artificial intelligence (AI). AI opened the door to new applications that reach into every aspect of human life; as Ethem Alpaydın puts it, "digital technology increasingly infiltrates our daily existence," making it dependent on computers and technologies. This relatively new discipline was received with varying reactions: some perceived it as hope, while others looked at it with fear and suspicion. This reflection does not aim to assess whether the impact of AI on the human race is positive or negative; rather, I will explore the rationale behind the mystery and misperceptions around AI, and the uncertainty of its effect on our daily lives.

Consider a basic analogy: we depend on cars to move us around in our daily commute. It is not really important for us to know how the engine functions, and that does not make us worried about using cars daily; we are more concerned with controlling our car's speed and destination. Similarly, ordinary users do not know how AI works, but that should not stop them from becoming users. The only difference in this analogy is that the AI user neither controls the technology nor knows where it will lead. It is instead controlled and managed by a very small set of actors, such as giant corporations and governments. This ambiguity of control and destination, combined with the small number of institutions making decisions about the use of AI, has promoted and nurtured unease and suspicion of AI among the public.

According to Britannica, human intelligence is defined as the "mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one's environment." In his Introductory Essay: Part 1, Professor Irvine notes that today's technologies for simulating human processes take the shape of code that runs in systems, and that this code is protected as the intellectual property (IP) of a small percentage of companies. The "lockdown" of code by these companies, combined with the "lock-in" of consumers by others, hinders wide-ranging access to it. These restrictions blackbox AI and deter the public's ability to understand this science. The same state of ambiguity leaves AI users vulnerable to falsehoods generated by the media and by the common public discourse on AI and technology in general.

I hope to come to understand, in time, whether or not such a monopoly over AI is useful. If not, will we witness a phase in which AI becomes regulated and tightly monitored to ensure best practices and to protect the public from possible diversions in its use by some firms (e.g., intelligence, consumerism)?


Ethem Alpaydın, Machine Learning: The New AI (Cambridge, MA: MIT Press, 2016), p. x.

“Human Intelligence | Definition, Types, Test, Theories, & Facts.”