Category Archives: Week 3

Information with and without meanings

To understand information theory and its application in engineering, we must set aside the everyday sense of "information," which refers to the knowledge we obtain through investigation and instruction. Signal transmission or processing happens whenever content we send or receive in any form, text, sounds, images, films, and even the files we upload, is converted into electrical signals. The main feature is simple, but repeating simple steps in layer upon layer contributes to complex transmission functions. First, the message source takes the content, refers to a codebook, and turns the content, detached from its meanings, into electric signals; then the abstract signals pass through physical wires or cables that carry them to their destinations. To finish this "end to end" signal journey, the coded signals are transferred back into meaningful content that applies to the social environment, using the same codebook (Denning & Martell, 2015).
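To make this encode-transmit-decode loop concrete, here is a minimal Python sketch of the same journey; the three-symbol codebook is an invented toy (its bit patterns happen to match ASCII), not a real transmission protocol:

```python
# A minimal sketch of the "end to end" signal journey described above.
# The codebook is hypothetical; real systems use standardized encodings
# (e.g., ASCII/Unicode plus channel codes), not this toy dictionary.

CODEBOOK = {"A": "01000001", "B": "01000010", "C": "01000011"}  # symbol -> bit pattern
DECODEBOOK = {bits: symbol for symbol, bits in CODEBOOK.items()}

def encode(message: str) -> str:
    """Source: strip away meaning, keep only a signal pattern."""
    return "".join(CODEBOOK[ch] for ch in message)

def transmit(signal: str) -> str:
    """Channel: in this sketch the wire is noiseless, so bits pass unchanged."""
    return signal

def decode(signal: str) -> str:
    """Destination: the same codebook restores the meaningful symbols."""
    chunks = [signal[i:i + 8] for i in range(0, len(signal), 8)]
    return "".join(DECODEBOOK[chunk] for chunk in chunks)

received = decode(transmit(encode("CAB")))
print(received)  # -> "CAB": meaning is re-attached only at the endpoints
```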

As Professor Irvine mentioned in his article: "The model provides an essential abstraction layer…The meaning and social uses of communication are left out of the signal transmission model because they are everywhere assumed or presupposed as what motivates using signals and E-information at all" (Irvine, 2020). For this reason, I think another practical sense of why the signal-code transmission model is not a description of meaning may also relate to the limitations of signal transmission. According to Shannon's A Mathematical Theory of Communication, a signal loses accuracy and strength when transmitted over long distances, and interference such as noise also disturbs its precision. There are two ways to address this: first, use enough energy to boost the signal so that it is strong enough not to be affected by surrounding, irrelevant signal sources; second, provide bandwidth large enough to allow this massive amount of signal to pass through without being degraded.

Speaking of big data, the meanings embedded in social content would represent an enormous amount of information if converted into signals. That is why Shannon uses the "bit" rather than a decimal system: it leaves the machine less opportunity to make mistakes and more capacity to represent meanings and contents by reducing the number of input choices, because the more distinct symbols we allow as input, the less reliable the output becomes. And the reason information theory is only sufficient as a substrate is that, without the semiotic meaning humans use every day, encoding, decoding, and transmission lose their purpose.
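A small sketch of why the bit is a convenient unit: the number of binary choices needed to single out one message grows only logarithmically with the number of possible messages, and Shannon's entropy formula extends this to unequal probabilities (the probabilities below are made up purely for illustration):

```python
import math

# How many yes/no (binary) choices are needed to pick out one message
# from a set of equally likely messages? Shannon's answer: log2(N) bits.
for n_messages in (2, 8, 26, 256):
    print(n_messages, "messages ->", math.log2(n_messages), "bits")

# For unequal probabilities, the average information per symbol is the entropy
# H = -sum(p * log2(p)). These example probabilities are invented.
probs = [0.5, 0.25, 0.125, 0.125]
entropy = -sum(p * math.log2(p) for p in probs)
print("entropy:", entropy, "bits per symbol")  # 1.75
```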

 

References 

Denning, P. J., & Martell, C. H. (2015). Great principles of computing. The MIT Press.

Irvine, M. (2020). Introduction to Computer System Design. 

 

Humans control the AI – Fudong Chen

In my undergraduate thesis, I used a mood-recognition tool to collect and analyze comments below videos to find the audience's emotions toward the videos' topics, so I am interested in natural language processing. The bag-of-words method is a classic machine learning technique: build a dictionary of words, transform each text into a specific vector the computer can work with, and set certain words and rules to make the result more accurate. Outside the black box, we can only observe the data going in and the result coming out, while inside the black box we can see the designer's ideas at work in the machine learning process. This is similar to the point of Johnson and Verdicchio's article: the autonomy of AI is limited by its designers. Although we do not know exactly how the AI deals with the data, we can constrain the result through the analog inputs and the actuators.
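As a rough illustration of the bag-of-words idea described above, here is a minimal Python sketch; the comments and the whitespace tokenizer are invented stand-ins for a real preprocessing pipeline (lowercasing, punctuation stripping, stop-word removal, and so on):

```python
# A minimal bag-of-words sketch: build a vocabulary from the corpus,
# then turn each comment into a vector of word counts.
comments = [
    "I love this video",
    "this video is boring",
    "love love this topic",
]

vocabulary = sorted({word for comment in comments for word in comment.split()})

def to_vector(text):
    words = text.split()
    return [words.count(term) for term in vocabulary]

print(vocabulary)
for c in comments:
    print(to_vector(c), "<-", c)
# Each comment is now a fixed-length vector the computer can compare,
# even though the meaning of the words has been left behind.
```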

When it comes to the relationship between AI and its designers, Johnson and Verdicchio's article seems to treat AI as a type of tool. It points out that AI discourse neglects human actors and human behavior, and it emphasizes the role of the designer, who can limit the AI and should be responsible for it. But I wonder whether this statement ignores the users of AI. In reality, more and more AI tools are open to individuals. When we pay attention to the responsibility of AI's designers, should we also consider the responsibility of AI's users? For example, mood recognition can be used to determine customers' feelings about a product, but it can also be used to monitor public opinion.

 

Reference

Alpaydin, E. (2016). Machine learning: The new AI. MIT Press.

Johnson, D. G., & Verdicchio, M. (2017). Reframing AI Discourse. Minds and Machines, 27(4), 575–590. https://doi.org/10.1007/s11023-017-9417-6

Behind our familiarity with AI

A big part of "deblackboxing" the mystery behind computers is realizing that there isn't necessarily anything to de-blackbox. The gap between a true understanding of computing and what goes on behind 'closed screens' comes from the complex, arbitrary notions that we ourselves have attached to something we actually created. The truth is that it is not as unfamiliar as we think. All of these designs, systems, and software did not create themselves; someone had to build them based on human knowledge, needs, desires, and so on. In reality, they are only a reflection of our day-to-day human functions, encoded to make our lives easier and faster, or at least that is the goal, setting aside all the ethical, security, and privacy issues that have arisen over the years. "The action of computing comes from people, not principles" (Denning & Martell, 2015, 19). Still, it is worth breaking down and highlighting the subparts of computing and systems in order to understand the information processes and algorithms that guide them toward executing specific commands and demands. We use design structures and principles of computing to transform information, discover it, classify it, store it, and communicate it; these "structures are not just descriptive, they are generative" (Denning & Martell, 2015, 15). The countless masses of information, whether physical, digital, or even conceptual, have been growing overwhelmingly throughout the years, and scientists, coders, and others have needed to find different and more efficient ways to manage them, but also to "build systems that could take over human cognitive work" (Denning & Martell, 2015, 27) and, as Morse had suggested, to "construct a system of signs [by] which intelligence could be instantaneously transmitted" (Irvine, 2020, video).

Digging into these main concepts helps us realize that computing, and this black box, isn't so dark and mysterious after all. A simple pair of numbers, 1 and 0, has managed to create a vast system of knowledge, storage, and processing of information that has ultimately changed life as we know it. Just as human memory is crucial for conducting virtually any daily matter, however important or unimportant, computer and software memory is a crucial design principle for the functionality and existence of computers as we know them today, and "the most sophisticated mechanism" (Denning & Martell, 2015, 23). However, in order to keep that memory and all of its functions safe, security came to play a major role in computer system design principles. As life slowly started moving "online," we had to find ways to secure privacy and individuality online, the same way we did in real life. Starting with time-sharing systems in the 1960s, information protection, ways to control access to confidential and private information, hierarchical file systems that provide user customization, and policies for computer operators (Denning & Martell, 2015, 23) needed to be created so that people could have, virtually, the same familiarity and feeling of safety that they have in real life.

 

Related to number representation, the "Y2K" problem, which arose from storing years with only two digits, highlighted the danger of information vulnerability: with network sharing, the World Wide Web, and more, database records, passwords, and personal information can be accessed and uncovered by anyone determined to do so (Denning & Martell, 2015, 23-25). Machine learning and artificial intelligence have made it possible to create new factors of authentication and identification for security purposes. Biometrics, for example, is the "recognition or authentication of people using their physiological and/or behavioral characteristics"; these can include "the face, […], fingerprints, iris, and palm [as well as] dynamics of signature, voice, gait and keystrokes" (Alpaydin, 2016, 66). Technology has developed to such an extent that we can literally unlock our phones with our faces, walk through stores and office spaces while our purchases and locations are tracked through facial recognition, and unlock high-risk information and private matters with an eyeball or a fingerprint. We can comment at length on the social and ethical issues that arise from such capabilities, and that discussion shows exactly how all of these "ultimate-crypto-computer-sciency-too-hard-for-anyone-else-to-understand" myths are really just a reflection of our very human selves onto something technological that we have created, so extensively that even our human biases, debates, and prejudices have been unconsciously (or consciously) applied to it.

(Denning & Martell, 2015, pp. 27-28):

“Automated cognition systems need not work the same way the human mind works; they do not even need to mimic a human solving a problem. […] The methods used in these programs were highly effective but did not resemble human thought or brain processes. Moreover, the methods were specialized to the single purpose and did not generalize.” 

So why do we alienate ourselves and feel so concerned or scared about the development of tech, AI, computers, and so on, when they can basically never be as intelligent and as advanced as the human cognitive brain and mind?

 

References 

Alpaydin, E. (2016). Machine learning: The new AI. MIT Press.

Denning, P. J., & Martell, C. H. (2015). Great principles of computing. The MIT Press.

Irvine, M. (2020). Introduction to Computer System Design. 

 

The Transformation of Information

In Great Principles of Computing, Denning describes three waves of computing: 1) a science of the artificial (1967), 2) programming (1970s), and 3) the automation of information processes in engineering (1983). Yet it was not entirely clear whether the book describes what wave of computing we are currently in, and, if it does not, how might we describe it? Furthermore, several terms that seem to play important roles in computing, such as "batch processing" and "cryptography," were not explained in sufficient detail.

In Machine Learning, Alpaydin explains how systems are still outdone by humans at recognizing handwritten language (Alpaydin 58). However, there are handwriting recognition programs that will likely make "Captchas" obsolete in the coming years (Burgess 2017). Additionally, the battle being waged between spam filters and spam emails is representative of an even greater war transpiring on social media platforms to prevent bot herding (Alpaydin 16). While social media platforms utilize machine learning to extract trending topics and collect data on user habits, certain trends are being cultivated by the same forms of machine learning, coupled with bot herding (and other methods), to create what is known as "computational propaganda" (Computational Propaganda 2021). While much research is still being done to determine what exactly constitutes computational propaganda, it is believed to have been present in social media for almost a decade (Computational Propaganda 2021).
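To make the spam-filtering side of that arms race concrete, here is a toy naive Bayes classifier in Python; the four training messages and the add-one smoothing choice are illustrative assumptions, not how any particular platform's filter actually works:

```python
# A toy spam filter: count word frequencies per class on a tiny labeled set,
# then classify a new message by comparing log-probabilities.
from collections import Counter
import math

train = [
    ("win money now", "spam"),
    ("free money offer", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def score(text, label):
    """Log-probability of the label given the words (add-one smoothing)."""
    logp = math.log(class_counts[label] / sum(class_counts.values()))
    total = sum(word_counts[label].values())
    for word in text.split():
        p = (word_counts[label][word] + 1) / (total + len(vocab))
        logp += math.log(p)
    return logp

msg = "free money tomorrow"
label = max(("spam", "ham"), key=lambda lbl: score(msg, lbl))
print(msg, "->", label)  # -> spam
```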

Kelleher explains how deep learning was the key to unlocking big data, but also explains its potential for harming individual privacy and civil liberties (Kelleher 35). Computational propaganda, which has the capability to impact civil liberties, is likely a side effect of deep learning, but it can also be mitigated by the same deep learning that enables it. Furthermore, Deep Learning describes why a computer system capable of competing against expert players in the board game Go came so far behind DeepBlue (the chess system). What I found perplexing at first was the reasoning behind Kelleher's explanation: chess has fewer options per move but more complicated rules, while Go has much simpler rules and vastly more possible board layouts. One might assume the simpler Go game would be easier to develop a computer system for, but the opposite is true, because Go's enormous number of possible positions makes brute-force search of its game tree infeasible.
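A back-of-envelope calculation shows the scale of the difference; the branching factors and game lengths below are rough, commonly cited approximations, not Kelleher's exact figures:

```python
import math

# Rough size of each game tree: (moves available per turn) ^ (turns per game).
chess_branching, chess_moves = 35, 80
go_branching, go_moves = 250, 150

chess_exponent = chess_moves * math.log10(chess_branching)
go_exponent = go_moves * math.log10(go_branching)

print(f"chess game tree ~ 10^{chess_exponent:.0f}")  # roughly 10^124
print(f"go game tree    ~ 10^{go_exponent:.0f}")     # roughly 10^360
```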

References

Alpaydin, E. (2016). Machine learning: The new AI. MIT Press.

Burgess, M. (2017, October 26). Captcha is dying. This is how it’s being reinvented for the AI age. Wired UK. https://www.wired.co.uk/article/captcha-automation-broken-history-fix

Computational propaganda . (n.d.). The Project on Computational Propaganda. Retrieved February 8, 2021, from http://comprop.oii.ox.ac.uk/about/#continue

Denning, P. J., & Martell, C. H. (2015). Great principles of computing. The MIT Press.

Kelleher, J. D. (2019). Deep learning. The MIT Press.

The Information Science Revolution

It is an interesting claim that most of the sciences are moving toward becoming information sciences. Even my home discipline of psychology is rooted in this idea of being driven by the data. As the big data revolution took hold and large datasets became more available, a question has lingered over the discipline: how should we pursue answers to the questions we have? Should the process be driven by the person or by the data?

There are good reasons for doing either, with machine learning approaches co-opting the same tools that psychologists use to run analyses: linear regression, latent-variable frameworks, and covariation between variables. As data have become richer and more complex, there has been a surge in the different types of modeling needed to meet researchers' demands for their experiments. I believe it comes down to being able to de-blackbox the journey of the data, not just how to find its solution; to understand the ramifications of the question being asked, not simply to run it.
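For instance, here is a minimal least-squares fit in Python; the "hours online" and "anxiety score" numbers are invented purely to illustrate how the same fitted line can serve both a psychological interpretation and a predictive model:

```python
import numpy as np

# Ordinary least-squares linear regression on made-up survey data.
hours_online = np.array([1.0, 2.0, 3.0, 5.0, 8.0])    # predictor
anxiety_score = np.array([2.1, 2.9, 3.8, 6.2, 9.1])   # outcome

# Fit anxiety ≈ slope * hours + intercept.
X = np.column_stack([hours_online, np.ones_like(hours_online)])
(slope, intercept), *_ = np.linalg.lstsq(X, anxiety_score, rcond=None)

print(f"slope={slope:.2f}, intercept={intercept:.2f}")
# The same fitted line can be read as a psychological finding
# ("more hours, higher scores") or as a one-feature prediction model.
```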

A couple of years ago, I sat in a stuffy conference room as two scientists from Iran presented a machine learning approach for determining whether someone was gay using photographs and profile pictures. It is an application of data that already seems problematic, and especially so considering that being gay in Iran is illegal. Many questions came up about the validity of the project and whether the data were valid, but these are the things we are going to have to contend with. As we get more sophisticated models and richer data, even though each piece of data may contribute only a small margin to the greater statistical story, when we add 10,000 variables with 100,000,000 rows we can start to predict just about anything. The question is: should we?

 

Questions – There were many, but these are the ones I am going to start with:

How do we try to understand the data which goes through these computational models?

Does network security, like physical security measures (e.g., locks on doors), play more of a role of security theater and deterrence than of being fully secure?

Citations –

Peter J. Denning and Craig H. Martell. Great Principles of Computing. Cambridge, MA: The MIT Press, 2015. 

John D. Kelleher, Deep Learning (Cambridge, MA: MIT Press, 2019).

Magical DL, and How to Plan for the Future? – Jianning Wu

There are several new concepts for me in this week's readings. From Alpaydin and Kelleher, we learn that deep learning imitates the human brain by building artificial neurons and arranging them into neural networks with several layers, so that the learning algorithm can develop recognition of what has been learned and processed. In the meantime, from the introductory essay written by Prof. Irvine, we learned how modern computers work on the basis of the binary system (0 and 1), which makes DL seem even more magical, since, as Alpaydin mentions in Machine Learning: The New AI, it learns with hidden layers combining values that are "not 0 or 1 but continuous," which "allows a finer and graded representation of similar inputs." In other words, DL is built on the binary system but surpasses it, which means DL supports the development of AI that can learn more abstract things (more like humans).
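A minimal sketch of a single artificial neuron makes the "continuous, not 0 or 1" point concrete; the weights and inputs below are arbitrary illustration values, not taken from any trained network:

```python
import math

# One artificial neuron: weighted sum of inputs passed through a sigmoid,
# so the output varies smoothly between 0 and 1 instead of snapping to either.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

weights, bias = [0.8, -0.5, 0.3], 0.1
for inputs in ([0, 0, 0], [1, 0, 1], [1, 1, 1]):
    print(inputs, "->", round(neuron(inputs, weights, bias), 3))
# Outputs like 0.525, 0.769, 0.668 sit between 0 and 1, giving the graded,
# "finer" representation that stacked hidden layers build on.
```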

In addition, regarding the discourse on AI and ML, I also took away several clarifications. According to the video Techniques for Interpretable Machine Learning, released by the Association for Computing Machinery, ML powered by complex models and deep neural networks is progressing at an astounding rate; however, despite the successes, it has its limitations and drawbacks, and some decisions made by ML algorithms are hard to interpret. This relates to Reframing AI Discourse: "machine autonomy" is not equal to human autonomy. Although designers set the patterns for an AI system, the AI becomes an entity of its own, run by rules whose behavior may be unexpected when it encounters real problems. This does not mean the AI can decide where it will go by itself, but that it becomes an independent program if there is no intervention. This also exposes practical problems: how should we make regulations for an AI system; how can we evaluate the purposes of designers (and whether the six great principles of computing can help us here); and how should we write the guidelines for AI practice? Assumptions help us predict the future, but questions are asked to be solved. Although Johnson and Verdicchio say that the popular conceptions of AI are futuristic and too hard to achieve, we still need to plan for the future.

AI is…simple?

In a world where everything seems chaotic and many things appear to happen randomly, it is quite comforting to hear that "machine learning, and prediction, is possible because the world has regularities. Things in the world change smoothly." Of course, Ethem Alpaydin is speaking here about the ways we can train our AI to complete a task or make a prediction, but nevertheless, these systems are trained on data from the world we live in. In fact, the smoothness assumptions our sensory organs and brains rely on matter because similar assumptions are necessary for our learning algorithms, which make a set of assumptions about the data in order to find a unique model. And while we probably all hold the view that many of the technologies we see today are extremely complex (which they are), the reason we can train on our data to make predictions is that we are, in effect, trying to find a simple explanation for that data. Beyond technology itself, preferring simple explanations is human nature; Occam's razor tells us to eliminate unnecessary complexity. In fact, barcodes and single fonts are ideal images because there is no need for learning: we can simply template-match them.

One could argue that simplicity is why the binary system works so well for our electronics, because the system is discrete, that is, made of distinct, differentiable states. "We need designer electronics to impose a shape, a pattern, a structure, on a type of natural energy that is nothing like human logic or meaningful symbolic patterns," Professor Irvine states. The simplest electrical pattern we can design and control is a switch state (on/off, open/closed, etc.). Given this, the binary system, which has only two positions and two values, is an efficient way to turn digital binary computers into symbol processors. Binary and base-2 math lead to a one-to-one mapping and present a solution to a symbolic representation and symbolic processing problem. Through this process, we can make electricity hold a pattern in order to represent something that is not electronic (i.e., something more human). The binary system provides us with a unified subsystem on which we can build many layers and thus create data structures in defined patterns of bytes.
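A quick sketch of that one-to-one correspondence, assuming the standard Unicode/ASCII code points for the characters:

```python
# A human symbol is assigned an agreed-upon number, the number is written in
# base 2, and those bits are what the switch states (on/off) physically hold.
for ch in "Hi!":
    code_point = ord(ch)               # symbol -> number (Unicode/ASCII)
    bits = format(code_point, "08b")   # number -> base-2 pattern of 0s and 1s
    print(ch, code_point, bits)

# H 72 01001000
# i 105 01101001
# ! 33 00100001
```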

When applying this to deblackboxing, in which we dispel the notion that a computer's or program's inputs and operations are invisible to the user or another interested party, we can see that, at their heart, simple systems are used to create our technologies. The principles of computing (communication, computation, coordination, recollection, evaluation, design) are useful here, as "some people see computing as computation, others as data, networked coordination, or automated systems. The framework can broaden people's perspectives about what computing really is."

Again, when this is applied specifically to the principle of computation, we can see that at their heart our systems are composed of layers of binary maps: yes/no's, 0/1's, on/offs. It's beautiful, but there is no 'magic' under the hood of our systems. We store and train on data, use math, and develop our algorithms to create the technologies we have today.

 

From machine to machine learning.

The most valuable takeaway for me was learning the differences among a computer, a computer system, AI, machine learning, and deep learning (with DL a subset of ML, which is itself a subset of AI). In Peter J. Denning and Craig H. Martell's Great Principles of Computing, the authors explain that "computer" originally referred to a job title, people who calculate, then to an artifact that could automate information processing, and eventually to a machine that can "understand" information. They also push back on the statement that "computing is just coding or programming" by tracing the long development of computing and its contested evolution:

            Early 1970s, “Computer science equals programming.”

            In the 1970s, “Computing is the automation of information processes.”

            Late 1970s, “Computing as the study of ‘what can be automated.’”

            1980s, “Understanding their information processes and what algorithms might govern them.”

Looking back at this history makes me even more surprised at how rapidly computing technology has developed and how fast people have kept up with all these updates and reacted to such changes. But still, "with the bounty come anxieties." Kashmir Hill's article The Secretive Company That Might End Privacy as We Know It clearly spells out our concerns. Using ML as a tool to help law enforcement should be a way to decrease crime rates and process cases faster by replacing human labor with tireless machines. However, because these machines can have "unintended operations," the results aren't always right, especially toward specific groups of people, and the idea of a face being matched to every photo or video freaks people out. This facial recognition technology hasn't been generally authorized.

It reminded me of a case from several years ago about how the public worried about privacy while "enjoying" the conveniences that privacy-intruding technology brings, like location sharing and tagging. Apple took advantage of this at the time and started advertising how much it values its customers' privacy. The ironic thing is that it still cooperates with Google and many other data-collecting companies to track its customers and predict their preferences for profit, yet it refused to help law enforcement unlock a phone to provide evidence in a crucial murder case, in order to prove to the public how much it "values their privacy."

Another takeaway I gathered was from John D. Kelleher's Deep Learning, about how machine learning was designed to learn patterns from massive data by being given a calibration signal, so the system "understands" what is right or wrong. It reminds me of how humans learn from the beginning. Our past experiences (knowledge, relationships with others, rewards…) are the massive database; we learn from the past to find the patterns, so we know what to do and what not to do, what works and what doesn't.
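A tiny perceptron sketch shows this feedback loop in miniature; the AND-gate examples and the learning rate are toy choices for illustration, not Kelleher's example:

```python
# A perceptron is shown labeled examples, and every time its guess is wrong
# the weights are nudged toward the right answer (the "calibration" signal).
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND
weights, bias, lr = [0, 0], 0, 1  # learning rate of 1 keeps the arithmetic exact

def predict(x):
    return 1 if x[0] * weights[0] + x[1] * weights[1] + bias > 0 else 0

for epoch in range(20):                 # repeat over the "past experience"
    for x, target in examples:
        error = target - predict(x)     # right (0) or wrong (+1 / -1)
        weights[0] += lr * error * x[0]
        weights[1] += lr * error * x[1]
        bias += lr * error

print(weights, bias)
print([predict(x) for x, _ in examples])  # -> [0, 0, 0, 1]
```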

Evolution of Computation

This week provided almost an evolutionary timeline of computation, which was super cool. Starting the readings with Prof. Irvine's video, we understand that computation is information processing in the sense of information transformation for all types of digitally encodable symbolic forms. This is done through binary electronics, in which we impose a structure on electricity as sequences of on/off states and then assign symbolic values to physical units. From this we created the modern digital computer and the computer system that "orchestrates" (combines, sequences, and makes active) symbols that mean (data representations) and symbols that do (programming code) in automated processes for any programmable purpose.

Then we move into computing principles with Denning and Martell, who further define computation as dependent on specific practices and principles, where each category of principle is a perspective on computing; their framework of principle categories sums up the initial chapter.

Then we make a big jump into machine learning, the next step of computation, possible in part because the world has regularities: we can collect data of example observations and analyze it to discover relationships. Machine learning involves the development and evaluation of algorithms that enable a computer to extract (or learn) functions from a dataset (a set of examples). This is done through algorithms that induce (or extract) a general rule (a function) from a set of specific examples (the dataset) together with assumptions (the inductive bias). Following this is deep learning, another derivative of computation, introduced as the subfield of machine learning that focuses on the design and evaluation of training algorithms and model architectures for modern neural networks, using mathematical models loosely inspired by the brain. I see this as an evolution of machine learning because deep learning can learn useful features from low-level raw data, and complex nonlinear mappings from inputs to outputs, rather than having a human hand-engineer every feature (correct me if I'm wrong, but features are the inputs for data within a dataset). Deep learning was spurred by big data, which raises some notable ethical questions about privacy that I would love to dissect further. Overall, this means that deep learning can often process information faster and more accurately than many other machine learning models that rely on hand-engineered features.

It is honestly inspiring and jaw-dropping to see the jump from Dartmouth to machine learning and now deep learning. So many questions still exist, but now I have a decent grasp that the devices I'm using to create this post consist of humans imposing symbolic meaning on electricity that at its root is just 1s and 0s, which, through the layers of my computer system, become comprehensible images. From that we have evolved computers from devices that store and transport data into machines capable of learning from data. I'm still curious about the nature of deep learning and how it differs from machine learning in its applicability to our issues today. Also, what is noise?

Best,

Chloe


References

Alpaydin, Ethem. 2016. Machine Learning: The New AI. MIT Press Essential Knowledge Series. Cambridge, MA: MIT Press. https://drive.google.com/file/d/1iZM2zQxQZcVRkMkLsxlsibOupWntjZ7b/view?usp=drive_open&usp=embed_facebook.
 
Denning, Peter, and Craig Martell. 2015. Great Principles of Computing. MIT Press. https://drive.google.com/file/d/1RWhHfmv4oJExcpaCpMe85MLtOgATLZ5Z/view?usp=drive_open&usp=embed_facebook.
 
Kelleher, John. 2019. Deep Learning. MIT Press. https://drive.google.com/file/d/1VszDaSo7PqlbUGxElT0SR06rW0Miy5sD/view?usp=drive_open&usp=embed_facebook.
 
Martin Irvine. 2020. Irvine 505 Keywords Computation. https://www.youtube.com/watch?v=AAK0Bb13LdU&feature=youtu.be.

Deciphering the Enigma Behind the Computer System – Chirin Dirani (ー・ー・ ・・・・ ・・ ・ー・ ・・ ー・)

Training in Samuel Morse's electrical telegraph code was a prerequisite for completing my tenth-grade mandatory summer camp back in Syria. I didn't know then that this method of transforming "patterns of electrical pulses into written symbols" would inspire scientists to create modern computers. The concept behind Morse's system was used as a basis for transforming computers from digital binary machines into symbol processors. The system's maturing process witnessed many leaps, which transformed it from a number-crunching tool into a symbol-manipulating process. Over time, six principles were identified that produce computation in this seemingly complex system. Understanding the bottom-up design approach provided by these main principles will help us better understand this system and decipher its codes.

In his video, Professor Irvine explained thoroughly how the binary system, which has only two positions, was used to transform digital binary computers into symbol processors. The system uses binary electronics and logic, in addition to base-2 math, for encoding and processing computations. It uses electronics because they provide the fastest physical structures we can encode for registering and transmitting signals, and it uses electricity by imposing a pattern on a type of natural energy. Imposing this pattern, accompanied by assigning human symbolic meanings and values to physical units, created a unified subsystem to build on. We can add different layers on top of this subsystem to transform inputs into outputs for any technology. This process helps us understand computation and the components of the computer system.
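As a small illustration of how two-state switches yield both logic and base-2 arithmetic, here is a sketch of a half adder built from AND and XOR gates; this is a simplified, illustrative model, not a description of any particular circuit:

```python
# AND/XOR gates are defined on 0/1 values, and a half adder built from them
# already performs one column of binary addition.
def AND(a, b):
    return a & b

def XOR(a, b):
    return a ^ b

def half_adder(a, b):
    """Add two one-bit numbers; return (carry, sum)."""
    return AND(a, b), XOR(a, b)

for a in (0, 1):
    for b in (0, 1):
        carry, s = half_adder(a, b)
        print(f"{a} + {b} = carry {carry}, sum {s}")
# 1 + 1 = carry 1, sum 0  -> "10" in base 2, i.e. two
```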

In their book Great Principles of Computing, Denning and Martell introduce six principles of computing: communication, computation, coordination, recollection, evaluation, and design. The authors emphasize that these principles are tools used by practitioners in many key domains and are considered the fundamental laws that both empower and constrain technologies. Today's computing technologies use combinations of principles from the six categories. It is true that each category carries a different weight in a given technology, but a combination of all six exists in any technology we examine today. The bottom-up approach stems from the fact that these principles work as the basis (bottom) that supports technologies' domains (up). Knowing that computing as a whole depends on these principles is very intriguing. It opens the door to questioning and investigating how these principles work and interact to develop new discoveries.

Understanding the subsystems and layers that compose computers, and that computing principles support any technology we use today, helped me understand this complex system. However, being a novice in the field of technology, grasping how these principles support other domains will be one of my learning objectives in this class.

Bibliography

Denning, Peter J. and Martell, Craig H. Great Principles of Computing. (Massachusetts: The MIT Press, 2015).

Irvine Martin. Irvine 505 Keywords Computation, 2020. https://www.youtube.com/watch?v=AAK0Bb13LdU&feature=youtu.be.

 

Computing Design Principles – Heba Khashogji

Deblackboxing the logic of how and why computers embody specific kinds of system designs leads us to the main concept of the computing process. Today, we understand that modern computing "is about designs for implementing human symbolic thought delegated to physical (electronic) structures for automating symbolic processes that we can represent in digital form."

According to Prof. Irvine[1], the logical design, implemented physically, for the automated, controlled sequencing of input encoded symbols to output encoded symbols is what makes a computer a computer.

On the other hand, drawing on the ideas raised by Alpaydin and Kelleher, a recent paper by Brian Haney[2] illustrates one way of understanding the "bottom-up" system design approach. The paper explains how "scholars, lawyers, and commentators are predicting the end of the legal profession, citing specific examples of artificial intelligence (AI) systems out-performing lawyers in certain legal tasks." The article shows that "technology's role in the practice of law is nothing new. The Internet, email, and databases like Westlaw and Lexis have been altering legal practice for decades." The increasing demand for automated services in the legal profession has been the main driver behind more and more bottom-up designs. Similarly, we can find many other examples in professions like accounting and statistics, which will face the same destiny.

Until now, working through the main principles and learning precise definitions of terms has helped us "deblackbox" what seems closed and inaccessible in the sophisticated concepts of computer design and the computing process. However, I am still wondering why scientists, producers, and engineers did not find a more comprehensive term, since "computing" comes from a purely mathematical and accounting background, even though the computer is a device used for far more than computation. Is it difficult to adopt a broader term for the real nature of this technology because the current one is commonly used and hard to change, or is it because of the mathematical basis of encoding symbols and logical designs?

 

[1] Prof. Irvine, (Video) “Introduction to Computer System Design”.

 

[2] Brian S. Haney, Applied Natural Language Processing for Law Practice, 2020 B.C. Intell. Prop. & Tech. F. (2020).