Author Archives: Zachary Omer

From AI to Straight A’s: Artificial Intelligence Within Education

Zachary Omer

Abstract

As U.S. students’ test scores in math, reading, and science continue to fall in the middle of the pack worldwide, it’s become apparent that a change to our education system is necessary. I believe that the system could be bolstered by recent advancements in artificial intelligence technology, such as automation and adaptive learning, including gamification and knowledge monitoring. I also think that the implementation of these technologies could lead to significant changes in the profession of teaching, such as the methods, curriculum, environment, and materials needed to do the job effectively. This essay will pursue the research question: How can these AI technologies best be implemented and integrated into our education system without putting too much pressure on teachers, students, or the technologies themselves? I will explore the writings and musings of professionals and thought leaders across the fields of technology and education, as well as anecdotal evidence and several case studies from the past few years.

Introduction

When I think back on my own educational experience, I can distinctly remember the impact of technology as the years passed. In early elementary school, when a television would be rolled into the classroom on a cart and we realized that we’d get to watch a VHS episode of Bill Nye the Science Guy or Reading Rainbow with LeVar Burton, the excitement was palpable.

By 5th grade, we had all survived Y2K, and the digital revolution had officially begun. Our class had a small set of AlphaSmart 2000 keyboard devices so we could begin learning how to type. In 6th grade, our classroom was the site of the school’s first SMART Board, and there was a bulky desktop computer for every 2-3 students. For a small old school building in rural Missouri that didn’t even have air conditioning in every room, this felt very cutting edge, futuristic, and exciting.

Throughout middle and high school, there were still plenty of dry erase markers and overhead projectors, but SMART Boards became more and more common, and course materials became increasingly digital. First it was correspondence: emails, syllabi, grades, etc., but eventually class notes, presentations, modules, and assignments moved online as well. I got my first smartphone for my 18th birthday, and my first laptop as a high school graduation gift.

In college I became familiar with a learning management system (LMS) for the first time, using Blackboard as a central location for assignments, blog posts, course materials, grades, and more for almost all of my classes. This LMS experience had mixed results, largely depending on the professor using the system and their relationship with technology. Nearly all assignments, quizzes, and essays were submitted electronically, and I even took several courses that were entirely online, where I never physically met my professor or classmates.

Upon graduation, I began working at a public high school, and found that the technological environment had continued its drastic development. Every classroom had a SMART Board, most classes had their own set of Google Chromebook laptops, and students were allowed– although not encouraged– to keep their personal smartphone devices out in plain view on their desks. As I observed students listening to music, texting, playing games, taking photos, and browsing social media throughout the school day, I couldn’t help but think of my own high school experience (only 5 years prior), where if you were caught with your phone out during class, it was taken for the remainder of the day. Nearly all of my coworkers at the school agreed that these devices were a distraction, and many had different methods of trying to govern or regulate their use during class, but very few were willing to endure the inevitable revolt that would accompany an outright ban on phones in class. Simply put, most students expected perpetual connectivity through their smartphones, and depriving them of that, even for a few hours, led to feelings of isolation and irritability.

In only 20 years, the education system underwent a colossal change, on a scale that has likely never been seen before. In parallel with society, it was a shift from primarily analog activity to almost exclusively digital. Along with that shift came massive changes in pedagogy and even epistemology. As we look forward from our current point in time, the possibilities appear endless for technology within education. Artificial intelligence, automation, and machine learning have become quite the buzzwords across all industries, and education is no exception. Can artificial intelligence help to produce real intelligence in the classroom? Can deep-learning algorithms produce deep-learning students? How can these technologies best be implemented and integrated into our education system without putting too much pressure on teachers, students, or the technologies themselves?  

According to Pew Research data, students in the United States rank near the middle of the pack in math, science, and reading, and are below many other industrialized nations in those categories. According to the 2015 study, among 71 participating countries, the US ranked 38th in math and 24th in science (Desilver, 2017).

In an attempt to help counter this disappointing educational mediocrity, I have researched several different aspects of AI and machine learning to discern how these readily available technologies could be utilized effectively in schools. Over the course of this essay, I will explore the potential role(s) of artificial intelligence in our education system, and discuss the changing role of educators alongside these new AI technologies, to effectively prepare and equip our students (and teachers) for the inevitable advancement of the Digital Age.   

 

Section 1: The Role of Artificial Intelligence in Education

While artificial intelligence seems like a product of the 21st century, the concept was actually conceived back in 1936 by Alan Turing, and the term was coined in 1956 at Dartmouth College (Computer History Museum, 2014). Since its ideation, AI has undergone multiple cycles of hope, hype, and hysteria, where people marvel at its potential, get excited at its release, and become concerned that it will somehow destroy us. The terms “artificial intelligence” and “AI” have been eagerly– and broadly– adopted by companies and media outlets without a full grasp of the meaning behind them, causing a rift in the public’s understanding of these technologies. According to Margaret Boden (2016) in her book AI: Its Nature and Future, “Intelligence isn’t a single dimension, but a richly structured space of diverse information-processing capacities. Accordingly, AI uses many different techniques, addressing many different tasks” (p. 12). For the purposes of this essay, I will focus on the areas of automation and adaptive learning within artificial intelligence, and how those concepts may be applied to the field of education.

  • Automation

Schools have already begun implementing automation in several capacities, such as machine-graded Scantron tests and automated class registration, but the further potential applications are vast. Automation can fast-track many of the tedious, repetitive, paper-heavy administrative tasks that are necessary for the system but have burdened educators for ages, such as “creating class schedules, keeping student attendance, processing grades and report cards, as well as helping to admit new students” (Ostdick, 2016). School support staff can also benefit from automation. Librarians, for example, are increasingly utilizing specialized search portals, streamlined shelving navigation, and automated self-checkout; this frees the staff from “repetitive and low-value tasks so they can help students with more educational inquiries, while giving students more autonomy through technology” (Kinson, 2018).

For teachers, as routine tasks like grading, attendance, and scheduling are increasingly outsourced to automation technologies, more time will be available to concentrate on relationship-building with students and on pedagogical strategy. Automated grading software can already handle multiple choice assignments and exams, and most fill-in-the-blank exercises, but with advancements in natural language processing the practice of essay grading will also soon be in the hands of artificially intelligent software (TeachThought Staff, 2018). One example of this type of software is Gradescope, an AI-based grading system already used by universities like Stanford and UC-Berkeley (Rdt, 2018). Simon Rdt of Luminovo AI, writing on Medium, describes the effectiveness of these automated essay scoring (AES) programs:

One approach to AES is finding objective measures such as the word length, the number of spelling mistakes, and the ratio of upper case to lower case letters. However, these obvious and quantifiable measures are not insightful for evaluating crucial aspects of an essay such as the argument strength or conclusiveness. (Rdt, 2018).

Despite this glaring flaw, back in 2012, when these types of technologies were first being introduced, the William and Flora Hewlett Foundation organized a competition to compare the grading of AES programs with that of real teachers. According to Rdt (2018), “the output of the winning team was in 81% agreement with the teachers’ gradings, an impressive result that marked a turning point in teachers’ perceptions towards education technology.”
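
To make the “objective measures” approach concrete, here is a minimal sketch of the kind of surface features an AES system might extract before handing them to a scoring model. It is illustrative only, not Gradescope’s or any competition entry’s actual code, and the tiny word list standing in for a spell-checker is a placeholder assumption.

```python
import re

# A tiny stand-in word list; a real AES system would use a full dictionary
# or a spell-checking library.
KNOWN_WORDS = {"students", "learn", "best", "when", "feedback", "is", "fast", "the"}

def surface_features(essay: str) -> dict:
    """Extract shallow, objective measures like those described above."""
    words = re.findall(r"[A-Za-z']+", essay)
    upper = sum(c.isupper() for c in essay)
    lower = sum(c.islower() for c in essay)
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "spelling_mistakes": sum(w.lower() not in KNOWN_WORDS for w in words),
        "upper_to_lower_ratio": upper / max(lower, 1),
    }

print(surface_features("Students learn best when feedback is fast."))
```

As the quote above notes, features like these say nothing about argument strength or conclusiveness, which is exactly why they must be combined with human judgment or far richer language models.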

With this kind of technological assistance, students will no longer need to wait days or weeks to receive grades and feedback on their work; instead, this will be done within moments of submitting. Advanced progress monitoring will allow for faster identification of gaps in the class material and the need for more focused personal intervention (Ostdick, 2016). This opens the door for a more individualized learning experience for students, and a more reflective and purposeful teaching experience for educators.

Furthermore, the potential of natural language processing (NLP) within education can go far beyond just grading essays. Automated virtual assistants, such as Alexa and Siri, use NLP to receive spoken commands and questions, and to react accordingly, often serving as a convenient and efficient source of knowledge, information and feedback. These types of technology could be extremely useful in orally centered educational environments, such as speech pathology and foreign language courses. Several schools, such as Saint Louis University, have even begun installing specialized Amazon Echo devices equipped with Alexa in campus dormitories and other living spaces (Saint Louis University, 2018).

These types of automated technology will ideally help to cut down on educational bureaucracy and free up time for more creative and engaging instruction, and a more autonomous learning experience for students. In doing so, automation will also help to appease the insatiable desire for instant gratification that has been fostered by the immediacy of the Digital Age.

  • Adaptive Learning

In the same way that social media, online shopping sites, and media streaming platforms can observe our behavior and cater to our interests, preferences, and abilities, so too could educational materials. Perhaps the most impactful implementation of artificial intelligence in education will come in the form of adaptive learning, specifically in the areas of gamification and knowledge monitoring. One head of product management at Google expects that AI adaptive learning will lead to personalized instruction for students “by suggesting individual learning objectives, selecting instructional approaches and displaying exercises that are based on the interests and skill level of every student” (Rdt, 2018).

Some of the earliest conceptualizations of adaptive learning stemmed from the notion of cybernetics and the work of Warren McCulloch. According to Boden (2016), McCulloch’s “knowledge of neurology as well as logic made him an inspiring leader in the budding cybernetics movement of the 1940s” (p. 13). Boden goes on to explain the primary themes of the field of cybernetics:

Their central concept was “circular causation,” or feedback. And a key concern was teleology, or purposiveness. These ideas were closely related, for feedback depended on goal differences: the current distance from the goal was used to guide the next step. (Boden, 2016, p. 13)

As evidenced by the above quote, these ideas from cybernetics are imperative to the implementation of adaptive learning in schools. Using continuous feedback to guide the student toward desired learning goals can be achieved fairly easily through artificial intelligence software. As such, a crucial element of this process is identifying the student’s Zone of Proximal Development (ZPD), which is the cognitive area “between a student’s comfort zone and their frustration zone. It’s the area where students are not repeating material they’ve already mastered nor challenging themselves at a level so challenging that they become frustrated, discouraged, and reluctant to keep learning” (Lynch, 2017). Progress often occurs at the edge of our comfort zones, and so by effectively maximizing the ZPD, adaptive learning programs will better prepare students to master the course material and develop creative, critical problem-solving abilities that can benefit them inside and outside of the classroom (Lynch, 2017). This process could also feature an element of “scaffolding,” where the educator (and/or AI program) “gives aid to the student in her/his ZPD as necessary, and tapers off this aid as it becomes unnecessary, much as a scaffold is removed from a building during construction” (Culatta, 2011). Module-based, goal-oriented online education programs often utilize this method.
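
As a rough illustration of that feedback loop (a sketch under assumed thresholds, not any particular product’s algorithm), an adaptive learning program might adjust item difficulty whenever a student’s recent accuracy drifts out of a target band approximating their ZPD, adding or removing scaffolding as it goes:

```python
def next_difficulty(current_level, recent_scores, low=0.60, high=0.85):
    """Nudge item difficulty so the learner stays inside a target accuracy band.

    The 60-85% band is an assumed stand-in for the ZPD: below it the material
    is frustrating, above it the material is already mastered.
    """
    accuracy = sum(recent_scores) / len(recent_scores)
    if accuracy > high:                  # too easy -> step up, remove scaffolding
        return current_level + 1
    if accuracy < low:                   # too hard -> step down, add scaffolding
        return max(current_level - 1, 1)
    return current_level                 # inside the zone -> keep practicing here

# 4 of the last 5 items correct (80%) keeps this learner at level 3.
print(next_difficulty(3, [1, 1, 1, 1, 0]))
```

Real systems estimate mastery with far more sophisticated models, but the circular-causation idea is the same: the distance from the goal drives the next step.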

Gamification has been a popular concept in education for years, and has seen mixed results. Researchers who have studied this method have found that “The use of educational games represents a shift from ‘learning by listening’ to ‘learning by doing’ model of teaching. These educational games offer different ways for representing complex themes” (Peixoto et al., 2018, p. 158). However, with the exception of a few early-learning games that are cartoon-ified and fun (like Schoolhouse Rock, which I loved as a child and still remember playing to this day), many module-based online learning platforms fail to engage students or achieve their desired outcomes because the material is still presented as it might be on a worksheet or set of lecture slides.

Why do we stick to this outdated method of delivery in the classroom, when the games that kids are playing at home (or in their pockets) are infinitely more fun and engaging? For example, the Assassin’s Creed franchise has been around for over a decade, and while the actual objectives can be a bit dark and gory, the entire premise of the game is traveling back to different (historically accurate) time periods and exploring their cities and culture to solve mysteries and track down your targets. Each game covers a different historical era, such as the Ottoman Empire, Ancient Rome, Industrial London, the Italian Renaissance, Revolutionary America, and many more. Simply by playing these games and achieving their objectives, users can gain a deeper understanding of the culture, architecture, clothes, events, and main characters of these important time periods in our world’s history.

In theory, similar styles of games could be adapted for classroom use, although research and development for educational implementation has been relatively minimal thus far. These types of games would utilize elements such as quest/saga-based narratives, continuous performance feedback, instant gratification in the form of progress-tracking and/or rewards systems, objective-based progression, and even an adaptive CPU that gets harder or easier based on a student’s performance. All of these criteria could be met while still delivering the stunning graphics, dynamic gameplay, customizable features, and even theatrical cuts that users have come to expect.

With that said, studies have been conducted into the different effects of gamified learning within education, including potential negative effects. Toda et al. (2018) completed one such experiment and identified four negative effects of gamification: indifference, loss of performance, undesired behavior, and declining effects (p. 150). Most of these effects are closely related; for example, the main difference between loss of performance and declining effects was the factors of motivation and engagement, and they found that declining effects often led to loss of performance (Toda et al., 2018, p. 151). A few common elements of gamification that they found particularly problematic, and that contributed to these negative effects, were leaderboards, badges, points, and levels (p. 152). The researchers noted that most of these negative effects could be remedied with more efficient game design and instruction (p. 153).

While the notion of gamified learning is often met with resistance or labeled as “edu-tainment,” we must face the fact that we now live– and modern students were raised from birth– in a society that completely revolves around entertainment. Our phones are always buzzing, social media feeds are always scrolling, TVs are always flashing in the background, headphones are always in, sporting events and other ceremonies are always being covered, and the current President of the United States is a former reality television star. That pervasive entertainment-based lifestyle of perpetual stimulation isn’t the healthiest option for anyone’s brain, let alone a developing child’s, but to completely exclude these modern technologies and platforms from the classroom creates a foreign environment of regressive isolation and uncomfortable disconnection for students. Of course a balance must be struck between screen-based learning and interpersonal interaction, but at the moment the screen-based learning being implemented is often inefficient and disengaging for students. If we could responsibly harness the technology that drives the rest of our daily entertainment wants and needs, we may see those aforementioned mediocre educational rankings for the United States begin to rise.

Knowledge monitoring is another key to adaptive learning technologies. While the AI-powered program will track progress and skills gained, and the educator will track comprehensive retention and practical application, the student must also be aware of, and responsible for, their own knowledge monitoring. The rationale is fairly straightforward:

If students fail to differentiate what they know or have learned previously from what they do not know or need to learn (or relearn), they are not expected to engage more advanced metacognitive strategies, such as evaluating their learning in an instructional setting, or employing more efficient learning and studying strategies. (Tobias & Everson, 2009)

According to a study by Kautzmann & Jaques (2018), effective knowledge monitoring provides long-term metacognitive benefits to students throughout their academic careers by helping them become self-regulated learners. These researchers claim that “Self-regulated learners are proactive in setting goals, planning and deploying study strategies, monitoring the effectiveness of their actions and taking control of their learning and more prone to achievement in college” (Kautzmann & Jaques, 2018, p. 124). Therefore, it is vital that knowledge monitoring take place throughout the AI-supported learning process, by the adaptive learning program, by the educator, and– perhaps most importantly– by the students themselves, to ensure that the material being taught is understood, contextualized, and applied in a practical way.
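
One simple way to quantify this kind of knowledge monitoring (a sketch of the general idea, not Tobias and Everson’s published scoring procedure) is to compare what a student predicts they know with how they actually perform on the same items:

```python
def monitoring_accuracy(predictions, outcomes):
    """Fraction of items where the student's self-assessment matched reality.

    predictions[i] is True if the student claimed to know item i;
    outcomes[i] is True if they actually answered it correctly.
    """
    matches = sum(p == o for p, o in zip(predictions, outcomes))
    return matches / len(predictions)

claimed = [True, True, False, True, False]   # "I know this one"
actual  = [True, False, False, True, True]   # graded results
print(f"Knowledge-monitoring accuracy: {monitoring_accuracy(claimed, actual):.0%}")
# -> 60%: two items were misjudged, a signal to revisit those topics
```

A score like this could be surfaced to the adaptive learning program, the educator, and the student alike, flagging where self-assessment and actual mastery diverge.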

Section 2: The Role of Educators Alongside Artificial Intelligence

If these AI-powered technologies are integrated into the education system and catalyze impactful change on a large scale, then the role of educators will need to adjust accordingly. While certain elements of AI provide the potential to make the lives of teachers easier, these adjustments will spread across nearly every aspect of the teaching profession, including methods, curriculum, class environment, and materials. This will not be a seamless transition; in the current stage of the Digital Age, many teachers struggle to keep up with the necessary trainings for new technology implementation. According to Daniel Stapp, a current high school teacher:

The level of expectation placed on teachers is ridiculous…I teach five classes of 35 kids who are always writing essays, and the expectation of my school is that I’m using TurnItIn.com. But they gave us training on [TurnItIn] on one of our Grading Days, where we have contracted time to grade…it was ‘volunteer training.’ (Stapp, 2018)

As these new types of technologies advance and become more accessible, more time and resources will need to be dedicated to training teachers in the new platforms and programs. By doing so, changes to the roles of the profession will be more readily accepted.

  • Methods

One major change in pedagogy will be a shift from stand and deliver instruction to more of a coaching and facilitation role for educators (Wagner, 2018). As Wagner (2018) writes, “In an information age, with content available with the click of a mouse, teachers must shift from the ‘sage on a stage’ to the ‘guides on the side.’” This will require a stronger focus on the personal needs of students, and further emphasis on contextualized learning, dynamic methods of class engagement, assisted knowledge monitoring, and extra support to students who may be less tech-savvy. This is not to say that teachers will be handing over the reins of their classroom to these technologies and regressing into a supporting role, but they will need to rethink how these technologies can be effectively maximized while still developing positive interpersonal relationships with students, contextualizing knowledge and applying it practically, and providing mentorship along the way. 

  • Curriculum

A teacher’s curriculum will become increasingly dynamic and individualized in the coming years alongside automation and adaptive learning programs. According to Wagner (2018), it will be a transition from developers of content to developers of learning experiences. This would entail multimodal presentation of materials, such as text, audio, and video, in order to connect with students of all different learning styles. This may also include small reflection groups for certain topics, and providing anecdotal examples and supporting evidence for more contextualized learning. Because syllabi are now primarily online and editable, it has become easier to make alterations throughout the course of the semester such as adding or subtracting required readings or assignments based on student (or class) progress.

  • Environment

In the past, teachers’ communication and students’ learning were often confined within the walls of the classroom or the pages of the assigned textbook, but with the advent of the internet and social media, the learning environment has the potential to become more ubiquitous. Wagner (2018) describes it as a shift from siloed classrooms to virtual social networks. Collaborative platforms such as Google Drive are extremely helpful in this regard. Wagner (2018) mentions another platform called Brainly that can connect students with peers to address subject-specific questions. He gives a fun example:

They type in their question on Brainly and are connected to a short narrated video that uses modern day Marvel characters to explain the concept. If they wish to ask follow up questions, they are connected through to the student creator of the video via a chat box. (Wagner, 2018)

  • Materials

Expanding upon the “Curriculum” section, there will be a strong shift among educators from using textbooks and a set curriculum to blended courses and customized class design (Wagner, 2018). Traditional textbooks are often expensive, heavy, and underutilized by the end of the semester. Furthermore, most textbooks lose much of their value each time a new edition is released, making their contents outdated and thus difficult to reuse or sell back.

Blended courses will combine elements of online learning with interpersonal instruction. Several AI-powered platforms have been developed to help teachers with the creation and implementation of these types of courses, such as Content Technologies, Teachable, CourseCraft, and Udemy (Wagner, 2018).

 

Conclusion

I do not mean to assert with this essay that technology is the be-all and end-all solution for education in the United States. One of the most important aspects of a child’s education is the socialization process that accompanies interpersonal interaction at school. Daniel Stapp (2018) elaborates on this phenomenon, and the impact that technology can have:

I think one of the biggest skills people gain from a brick-and-mortar school is interpersonal communication and relationship-building. And I think adding a layer of tech between people to do that sometimes takes away from the power of that connection. And it can add to it too, it just depends on what kind of tech you’re using and how you’re using it. (Stapp, 2018)

These AI technologies, when utilized appropriately and effectively, are intended to supplement educators and their mission, not supplant them. It should not be a process that is rushed; these types of technological transitions take time and training to implement effectively. There have already been examples of AI-powered learning platforms that have backfired, such as the 2018 rollout of Summit Learning– a web-based platform developed with support from Facebook engineers– in schools near Wichita, Kansas, and in other districts around the US (Bowles, 2019). Teachers and students alike were unprepared for the massive changes in pedagogy and curriculum that accompanied the Summit Learning program, as well as the cognitive and physical effects of spending more time in front of screens. As a result, many students protested, and parents began pulling their children from the schools participating in the program (Bowles, 2019).

Further research will need to be conducted into the long-term cognitive and physical effects of these types of AI learning programs. Ideally, educators will be able to find a healthy balance in the classroom between traditional, seminar-based instruction and online, self-guided, screen-based learning. To that end, more research will also need to be conducted into the effects of modality switching on the learning process and comprehension abilities of students.

However, the tremendous potential of automation and adaptive learning for education is too tantalizing to resist. This essay has painted in broad strokes in an attempt to cover the education system in general, but there will be certain nuances that accompany the integration of technology at the different levels of schooling, from elementary school up to higher education. With the appropriate amount of training, transition time, and consent, these technologies could be utilized to stimulate incredible change in the American school system, and help to re-establish the United States as an educational superpower in the world.

Images Used

“AV Trolley” by mikecogh is licensed under CC BY-SA 2.0

“AS3K and Neo” by nathanww is licensed under CC BY-SA 2.0

“Reflected light Smartboard” by touring_fishman is licensed under CC BY-NC-SA 2.0

DS-UNIVAULT-30 Chromebook Cart sourced from www.ipadcarts.com

“Zone of Proximal Development” sourced from Culatta (2011)

“SchoolHouse Rock!” sourced from CDAccess.com

“Assassin’s Creed Anthology” sourced from Reddit.com

References

Boden, M. A. (2016). AI: Its nature and future (1st ed.). Oxford, United Kingdom: Oxford University Press.

Bowles, N. (2019, April 21). Silicon Valley Came to Kansas Schools. That Started a Rebellion. NY Times. Retrieved from https://www.nytimes.com/2019/04/21/technology/silicon-valley-kansas-schools.html

Computer History Museum. (2014). Artificial Intelligence. Retrieved from https://www.youtube.com/watch?v=NGZx5GAUPys&list=PLQsxaNhYv8dbK3yMHXk35jtZFdu7o46gu&index=5

Culatta, R. (2011). Zone of proximal development. Retrieved from Innovative Learning website: http://www.innovativelearning.com/educational_psychology/development/zone-of-proximal-development.html

Desilver, D. (2017, February 15). U.S. students’ academic achievement still lags that of their peers in many other countries. Pew Research Center. Retrieved from https://www.pewresearch.org/fact-tank/2017/02/15/u-s-students-internationally-math-science/

Kautzmann, T. R., & Jaques, P. A. (2018). Improving the Metacognitive Ability of Knowledge Monitoring in Computer Learning Systems. In Higher Education for All. From Challenges to Novel Technology-Enhanced Solutions (Vol. 832, pp. 124–140). https://doi.org/10.1007/978-3-319-97934-2_8

Kinson, N. (2018, October 24). How automation will impact education. Retrieved from Beta News website: https://betanews.com/2018/10/24/how-automation-will-impact-education/

Levesque, E. M. (2018). The role of AI in education and the changing US workforce. Retrieved from Brookings Institution website: https://www.brookings.edu/research/the-role-of-ai-in-education-and-the-changing-u-s-workforce/

Lynch, M. (2017, August 7). 5 Things You Should Know About Adaptive Learning. Retrieved from The Tech Advocate website: https://www.thetechedvocate.org/5-things-know-adaptive-learning/

Ostdick, N. (2016, December 15). Teach Me: Automation’s Role in Education. Retrieved from UI Path website: https://www.uipath.com/blog/teach-me-automations-role-in-education

Peixoto, D. C. C., Resende, R. F., & Pádua, C. I. P. S. (2018). An Experience with Software Engineering Education Using a Software Process Improvement Game. In Higher Education for All. From Challenges to Novel Technology-Enhanced Solutions (Vol. 832, pp. 157–173). https://doi.org/10.1007/978-3-319-97934-2_10

Saint Louis University. (2018, August). SLU Installing Amazon Alexa-Enabled Devices in Every Student Living Space on Campus. Retrieved from SLU Alexa Project web page: https://www.slu.edu/news/2018/august/slu-alexa-project.php

Stapp, D. (2018, March). Technology in Secondary Education [Phone].

TeachThought Staff. (2018, September 16). 10 Roles for Artificial Intelligence in Education. Retrieved from TeachThought website: https://www.teachthought.com/the-future-of-learning/10-roles-for-artificial-intelligence-in-education/

Tobias, S., & Everson, H. T. (2009). The importance of knowing what you know: A knowledge monitoring framework for studying metacognition in education. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), The educational psychology series. Handbook of metacognition in education (pp. 107-127). New York, NY, US: Routledge/Taylor & Francis Group.

Toda, A. M., Valle, P. H. D., & Isotani, S. (2018). The Dark Side of Gamification: An Overview of Negative Effects of Gamification in Education. In Higher Education for All. From Challenges to Novel Technology-Enhanced Solutions (Vol. 832, pp. 143–156). https://doi.org/10.1007/978-3-319-97934-2_9

Wagner, K. (2018, January 15). A blended environment: The future of AI and education. Getting Smart. Retrieved from http://www.gettingsmart.com/2018/01/a-blended-environment-the-future-of-ai-and-education/

Using AI to Enhance Education

Can artificial intelligence help to produce real intelligence in the classroom? Can deep-learning algorithms produce deep-learning students? How can these technologies best be implemented and integrated into our education system without putting too much pressure on teachers, students, or the technologies themselves?

These are the types of questions I’m seeking to answer with my upcoming research project.

According to Pew Research data, students in the United States rank near the middle of the pack in math, science, and reading, and are below many other industrialized nations in those categories. According to the 2015 study, among 71 participating countries, the US ranked 38th in math and 24th in science (Desilver, 2017).

In an attempt to help counter this disappointing educational mediocrity, I intend to explore several different aspects of AI and machine learning to discern how these readily available technologies could be utilized effectively in schools.

Based on preliminary research, there are a few elements I plan to emphasize in my writing.

According to Wagner (2018) in a blog on GettingSmart, implementing AI in schools could result in 5 major shifts within schools:

  1. Stand and Deliver Instruction —>  Facilitation and Coaching
  2. Developers of Content —> Developers of Learning Experiences
  3. Siloed Classrooms —> Virtual Social Networks
  4. Textbooks and Set Curriculum —> Blended Courses and Customized Design
  5. Hierarchical Top-Down Network —> Lateral Virtual Global Network

The primary takeaway I gathered from this article was that the role of teaching will need to change alongside these powerful AI technologies. The traditional “sage on a stage” model will give way to a “guide on the side” facilitator role. Classes will become increasingly interdisciplinary and teachers will need to focus on emphasizing media literacy and creating engaging ways of delivering class material across different mediums. Additionally, since AI will be able to automate many basic tasks like grading, tutoring support, and information processing, teachers may need to find a way to supplement some of those automated tasks with personal feedback and support for their students.

The drawback to this notion arises from the Digital Divide, or the older generation of teachers (some still relatively young!) who were not educated– or taught how to teach– using this model or these types of technology. Many teachers in the public school system already feel overwhelmed by the amount of trainings they’re expected to participate in each year, especially trainings that center around technology and teaching style (Stapp, 2018). These are difficult skills to adopt for people who were not direct products of the Digital Age, and especially so when considering the lack of time and other critical factors that play into a teacher’s busy schedule.

Sourced from CDAccess.com

Furthermore, I hope to explore the idea of individualized, adaptive learning programs. With the exception of a few early-learning games that are cartoon-ified and fun (like Schoolhouse Rock, which I loved as a child and still remember playing to this day), many module-based online learning platforms fail to engage students or achieve their desired outcomes because the material is still presented as it might be on a worksheet or set of lecture slides.

Why do we stick to this outdated method of delivery in the classroom, when the games that kids are playing at home (or in their pockets) are infinitely more fun and engaging? For example, the Assassin’s Creed franchise has been around for over a decade, and while the actual objectives can be a bit dark and gory, the entire premise of the game is traveling back to different (historically accurate) time periods and exploring their cities and culture to solve mysteries and track down your targets. Each game covers a different historical era, such as the Ottoman Empire, Ancient Rome, Industrial London, the Italian Renaissance, Revolutionary America, and many more. Simply by playing these games and achieving their objectives, kids can gain a deeper understanding of the culture, architecture, clothes, events, and main characters of these important time periods in our world’s history.

Sourced from Reddit

I have no doubt that similar styles of games could be created for classroom use, that rely on instant gratification, objective-based progression, and even an adaptive CPU that gets harder or easier based on a student’s performance, while still delivering the stunning graphics, dynamic gameplay, and customizable features that kids have come to expect.

The notion of gamified learning is often met with resistance or labeled as “edu-tainment,” but the fact of the matter is that we now live in a society that completely revolves around entertainment. Our phones are always buzzing, social media feeds are always scrolling, TVs are always flashing in the background, headphones are always in, games and events are always being covered, and we recently elected a reality television star as President of the United States. Trying to counter that pervasive entertainment-based lifestyle by teaching children exclusively “the old fashioned way” is doing them a disservice. Of course a balance must be struck between screen-based learning and interpersonal interaction, but at the moment the screen-based learning being implemented is often inefficient and disengaging for students. If we could responsibly harness the technology that drives the rest of our daily entertainment wants and needs, I think we would see those aforementioned mediocre educational rankings for the United States begin to rise.

 

 

Works Cited

Desilver, D. (2017, February 15). U.S. students’ academic achievement still lags that of their peers in many other countries. Pew Research Center. Retrieved from https://www.pewresearch.org/fact-tank/2017/02/15/u-s-students-internationally-math-science/

Stapp, D. (2018, March). Technology in Secondary Education [Phone].

Wagner, K. (2018, January 15). A blended environment: The future of AI and education. Getting Smart. Retrieved from http://www.gettingsmart.com/2018/01/a-blended-environment-the-future-of-ai-and-education/

Our Heads in the Cloud(s)

Researching cloud computing led me to two different ideas that I wanted to write about, one of which carries over from a course I took last semester, but both deal with the notion of ownership in the digital age of streaming, sharing, and cloud storage.

Outsourcing Memories

I haven’t had available local storage space on my iPhone since 2012. By the time widespread music streaming came about, and I was able to delete the 10-20 GB of my imported/downloaded music library, I still had several years’ worth of apps, photos, videos, texts, documents, software updates and other miscellaneous data clogging up the available storage on my device. iCloud storage was a tremendous relief to those concerns. At 99 cents per month for 50GB of cloud storage, I haven’t had to worry about deleting precious data or memories in several years.

What’s concerning to me about cloud storage for our mobile phones is that so much of our life experiences are mediated through these devices. It becomes a way of outsourcing our memories so that we can better focus our attention and cognition on processing the incessant flow of information that comes pouring across our screens every day. It allows us to reflect in a mediated, visceral way on what we’ve done, where we’ve been, who we’ve met, etc. in a linear, narrative fashion (reading through text histories, reviewing the camera roll, using “Timehop” to see past social media posts and interactions). But if we’re paying a company to store all that information for us in ‘the cloud,’ rather than keeping it stored away in our own heads (impossible) or in some kind of physical scrapbook or memory box (outdated), are those memories still technically ours? If (god forbid) the cloud servers went down for Apple, or Google, or Amazon, etc. we wouldn’t have access to our own precious, private, and personal data. That doesn’t seem like ownership– in the traditional sense, at least– to me. And while I trust that these companies are doing everything they can to protect our information and ensure that it doesn’t fall into the wrong hands or that their services don’t fail us, we’ve seen dozens of examples of massive data breaches and leaks over the past decade that have exposed very personal, private, and often compromising information, images, etc. of countless people around the world.

I guess these are the risks and sacrifices we are willing to make for the convenience of never needing to delete anything. As de Bruin and Floridi write in their 2017 paper on “The Ethics of Cloud Computing”:

“We observe that cloud computing suits the interests and values of those who adopt a deflated view of the value of ownership and an inflated view of freedom (De Bruin 2010). This is especially, but not exclusively, Generation X or the Millennials, who care less about where, for instance, a certain photograph is stored and who owns it (Facebook? the photographer? the photographed?) and care more about having the opportunity and freedom to do things with it (sharing it with friends, posting it on websites, using it as a background for one’s smartphone).” (p. 22)

This segues nicely into my next topic, on streaming and remix culture in the age of cloud computing.

Cloud-Based Streaming

All this reading on cloud computing made me think back to Dr. Osborn’s Remix Practices course in the Fall, where we read Remix by Lawrence Lessig (2008). I was amazed that Lessig was able to predict the impending cultural obsession with online, cloud-based streaming platforms like Netflix and Hulu (Netflix started offering streaming services in 2007, and Hulu was created in 2008, the same year this book was published). In his book, Lessig (2008) writes, “In the twenty-first century, television and movies will be book-i-fied. Or again, our expectations about how we should be able to access video content will be the same as the expectations we have today about access to books…in both cases, according to your schedule, not the schedule of someone else” (p. 44). Seven years later, “binge-watch” was named the 2015 Word of the Year by Collins English Dictionary, after “lexicographers noticed that its usage was up 200% on 2014” (BBC News, 2015). These numbers have continued to rise rapidly, with many more streaming services becoming available and encouraging consumers to ‘cut the cord’ of cable television or imported music libraries.

It seems that cloud-based streaming may be the solution that many entertainment companies have turned to in their attempt to combat piracy, and it saves a lot of storage space on our devices in the process. Since the videos or music are streamed, there are often no downloadable files to “steal” and store locally, as was made popular by torrenting sites and other file-sharing platforms in the not-so-distant past. However, consumers can still easily obtain the content by screengrabbing photos and videos or recording audio with their own equipment. As we’ve seen, copyright lawyers can be pretty fierce, but how far are they willing to go to stifle the free exchange of content and media in the anarchistic Wild West of the internet?

 

References

“Binge-watch is Collins’ Dictionary’s Word of the Year” (5 November 2015). BBC News.

de Bruin, B., & Floridi, L. (2017). The Ethics of Cloud Computing. Science and Engineering Ethics, 23(1), 21–39. https://doi.org/10.1007/s11948-016-9759-0

 

Lessig, L. (2008). Remix: Making art and commerce thrive in the hybrid economy. New York: Penguin Press.

Rountree, D., & Castrillo, I. (2014). The basics of cloud computing: Understanding the fundamentals of cloud computing in theory and practice. Amsterdam; Boston: Elsevier/Syngress.

The Ethics of AI: Tell Me What I Want, What I Really, Really Want

Think about the impact that AI has on a modern person’s life, both in day-to-day activities and in the grand scheme of things. AI recommends songs to listen to; videos, shows, and movies to watch; news to read; places to eat; people to meet; people to date; and even places to live and work. All of these are recommended based on our past behaviors and usage of the internet, which makes for tremendous convenience, but also an overwhelming amount of homogeneity and close-mindedness. It’s easy to feed into the patterns that determined those recommendations in the first place, until our lives get so “personalized” that we’re extremely put off by anything or anyone that runs counter to our preferences.

We absent-mindedly give these machines enormous power over our cognition, emotion, and socialization without knowing much about how or why these algorithms are functioning and using our data the way they are (Hewlett Packard & The Atlantic, 2018). This creates dangerous precedents for our partiality to AI. As one speaker in the Hewlett Packard video says, “At its best, [AI] is going to solve all of our problems, and at its worst, it’s going to be the thing that ends humanity” (2018). We tend to gravitate toward the former, despite any hiccups in development or ethical implications that arise from these kinds of technology and their algorithms.

News (and now social media) is the primary mode of perceiving the world outside of our immediate surroundings. A flippant approach to AI implementation in the news can have serious consequences for how people construct their understanding of the world. Georgetown professor Mark MacCarthy (2019) recently wrote, “When platforms decide which news stories to present through search results and news feeds, they do not engage in the same exercise of editorial judgment. Instead, they replace judgment with algorithmic predictions.” These types of personalization algorithms wrap echo chambers and filter bubbles around our perceptions, which increases polarization and incentivizes clickbait journalism (MacCarthy, 2019). By being exposed only to the news, people, places, and ideologies that reinforce our own, and by having the online anonymity to loudly decry any opposition to our views, we do a disservice to the ideals of a functioning democracy, where an informed citizenry can view multiple sides of a story and participate in civil discourse about the merits of each.
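
As a deliberately naive sketch of how “replacing judgment with algorithmic predictions” can narrow what we see (not any platform’s actual ranking system; the topics and click history below are made up), consider a recommender that ranks stories purely by how often the reader has already clicked on their topic:

```python
from collections import Counter

def recommend(click_history, candidates, k=2):
    """Rank candidate story topics by how often they were clicked before.

    Because past clicks are the only signal, the feed converges on the
    topics the user already favors -- the filter-bubble feedback loop.
    """
    topic_counts = Counter(click_history)
    return sorted(candidates, key=lambda topic: -topic_counts[topic])[:k]

history = ["sports", "sports", "celebrity", "sports"]
print(recommend(history, ["sports", "politics", "science", "celebrity"]))
# -> ['sports', 'celebrity']: the topics the reader never clicks keep sinking
```

Each round of recommendations generates more clicks on the same topics, which further skews the next round; that self-reinforcing loop is what editorial judgment, at its best, used to interrupt.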

Both the Hewlett Packard video from The Atlantic (2018) and the article by Dr. MacCarthy (2019) touch on the issues that arise from implementing AI into our justice system as well, such as using algorithms to predict recidivism rates among convicts (a prelude to Minority Report, it would seem). There have been racial inequalities in the predictions made by these algorithms, which can have a devastating effect on a national criminal justice system that is already flawed in many ways. While technology itself does not contain bias, the humans who design and program it are inherently biased, and that bias (conscious or otherwise) tends to show through in their creations. This raises the question: why are we so eager to outsource life-threatening decisions (such as military strikes, incarceration, or even driving a car) to machine algorithms that have proven to share the human biases of their creators? One possible answer is that technology is an easy scapegoat when things go awry. It’s easier to offload the guilt of a potentially life-threatening mistake if it can be blamed on the technology that carried out the action.

I do not mean to suggest that we should abandon these AI technologies; they certainly have the potential to make our lives (and the world) a lot better. But we need to be cognizant of the ethical issues that accompany them, and it will take time, along with a diversity of perspectives, to reveal the full scope of those issues and to solve the problems they raise. We have undergone a complete societal revolution since the dawn of the Digital Age, which was only about 20 years ago. To put that in perspective, over 2000 years elapsed between the earliest writing systems of the Sumerians and Egyptians and the development of the Greek alphabet that we still know today (Wolf, 2008). These revolutions take time and deliberation to be done correctly; let’s be mindful of that. Next time a computer-generated recommendation pops up for you (probably within the next few minutes), consider what’s going on behind the screen before you proceed.

 

 

Works Cited

Hewlett Packard Enterprises, & The Atlantic. (2018). Moral Code: The Ethics of AI. Retrieved from https://www.youtube.com/watch?time_continue=481&v=GboOXAjGevA

MacCarthy, M. (2019, March 15). The Ethical Character of Algorithms—and What It Means for Fairness, the Character of Decision-Making, and the Future of News. Retrieved from https://ai.shorensteincenter.org/ideas/2019/1/14/the-ethical-character-of-algorithmsand-what-it-means-for-fairness-the-character-of-decision-making-and-the-future-of-news-yak6m

Wolf, M., & Stoodley, C. (2008). Proust and the squid: the story and science of the reading brain (1. Harper Perennial ed). New York: Harper Perennial.

A Brief History of the Future: Virtual Assistant Technologies

Back in the late 90s, my uncle proudly pulled out his flip phone at a family reunion to show me– and whoever else would listen– the “future of tech.” He proceeded to shout very limited voice commands (“CALL…..BOB!”), which the phone could register but often got wrong (“Calling…Mom”).

Fast forward a few years, and I remember getting the RAD 4.0 robot for Christmas. The TV commercials made that toy seem like a perfect robot companion and servant, like Rosey from the Jetsons or Goddard from Jimmy Neutron. RAD could respond to voice commands, move autonomously (or with a remote control), and had robot arms with clamps to pick up your toys, laundry, soda cans, etc. It even came equipped with NERF-style bullet launchers on its chest for security measures! However, after testing it out around the house, I remember being a little underwhelmed with its efficiency. I wore myself out yelling repeated commands until it would respond with an action that was usually not exactly what I had commanded. Below you can see RAD’s design and its simplistic “speech tree chart” which outlines all the verbal cues it could (supposedly) respond to.

Source: http://www.theoldrobots.com/images7/RAD-40.pdf

Source: https://robotsupremacy.fandom.com/wiki/R.A.D._4.0

Even as a 10-year-old kid, I understood that Natural Language Processing technology wasn’t yet advanced enough to accurately understand more than a handful of commands. But I was patient, and a few years later I came across the chatbot SmarterChild, which was developed by Colloquis (acquired by Microsoft in 2006) and released on early instant messaging platforms like AIM and MSN Messenger (Gabriel, 2018). While entirely text-based (not voice-activated), SmarterChild was able to play games, check the weather, look up facts, and converse with users to an extent. One of its more compelling canned responses came if you asked about sleep:

Source: https://contently.com/2018/02/27/chatbot-revolution-here/

This was about the same time that the movie I, Robot (Proyas, 2004) came out, which contained another (somewhat chilling) quip about robots dreaming and the future of artificial intelligence:

Detective Spooner: Robots don’t feel fear. They don’t feel anything. They don’t get hungry, they don’t sleep-
Sonny the Robot: I do. I have even had dreams.
Spooner: Human beings have dreams. Even dogs have dreams, but not you. You are just a machine; an imitation of life. Can a robot write a symphony? Can a robot turn a… canvas into a beautiful masterpiece?
Sonny: [with genuine interest] Can you?

Over the next decade, AI began to evolve at an unprecedented pace. Nowadays, Google Assistant has a much more complex algorithmic process for decoding language than my old friend RAD 4.0, and can provide much more natural and sophisticated interaction than SmarterChild.

These virtual assistant technologies haven’t been without hiccups in their integration, such as when I first got the updated version of the iPhone with Siri included. I remember ordering at a Taco Bell drive-thru while my phone was in the cupholder of the car. My order included a “quesarito” (pronounced “K-HEY-SIRI-TOE”), and when I got home I realized that Siri had “woken up” in the drive-thru and had been running searches on everything said on the car radio during the drive back. It’s incidents like these, and many others with far more sensitive or compromising information at stake, that have given people concerns about our virtual assistants always listening. But Apple has recognized these concerns and has gone to some lengths to reduce them, with measures such as two-pass detection, personalized “Hey Siri” trigger phrases, and cancellation signals for common sound-alike phrases such as “Hey, seriously” (Siri Team, 2017).
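
To illustrate the general two-pass pattern the Siri Team describes, here is a minimal sketch; the thresholds and scores are hypothetical, and this is not Apple’s implementation, only the shape of the idea: a cheap always-on detector gates a larger, more accurate one.

```python
def should_wake(first_pass_score, second_pass_score,
                first_threshold=0.4, second_threshold=0.8):
    """Two-pass wake-word trigger: a lightweight, always-on detector gates a
    larger, more accurate model, so the expensive check runs only on likely
    matches. (All scores and thresholds here are made up for illustration.)
    """
    if first_pass_score < first_threshold:
        return False                     # most audio frames are rejected cheaply
    return second_pass_score >= second_threshold

print(should_wake(0.35, 0.95))  # False: never reaches the second pass
print(should_wake(0.60, 0.95))  # True: both detectors agree it's the trigger phrase
print(should_wake(0.60, 0.50))  # False: second pass vetoes a sound-alike ("Hey, seriously")
```

Splitting detection this way keeps the power-hungry model off most of the time, which is why the pattern suits always-listening devices.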

Now, building off popular products such as the Echo and the Alexa assistant, Amazon is rolling out programs like Amazon Lex, which lets the general public create their own conversational voice and text interfaces for their websites, apps, and other technologies (Barr, 2017). This is a huge step for the integration of AI, machine learning, and deep neural networks into the public sphere, making these tools accessible on a much wider scale than the computer scientists of Silicon Valley.

The big question that comes to mind, as always, is: what’s next? Despite most of the above evidence being anecdotal, it does show a massive progression in the field of artificial intelligence over the past 20 years. Will the evolution of virtual assistant technologies continue to accelerate alongside the rapid progress in fields like machine learning and natural language processing? Where does it end? Will we become too dependent on these technologies? If so, what happens when they fail? Will there eventually be a cultural backlash?

“Hey Siri, what will the future look like?”

References

Barr, J. (2016, November 30). Amazon Lex – Build Conversational Voice & Text Interfaces. Retrieved from https://aws.amazon.com/blogs/aws/amazon-lex-build-conversational-voice-text-interfaces/

Barr, J. (2017, April 19). Amazon Lex – Now Generally Available. Retrieved from https://aws.amazon.com/blogs/aws/amazon-lex-now-generally-available/

Gabriel, T. (2018, February 27). The Chatbot Revolution is Here. Retrieved from https://contently.com/2018/02/27/chatbot-revolution-here/

Proyas, A. (2004). I, Robot. Retrieved from https://www.imdb.com/title/tt0343818/

Siri Team. (2017, October). Hey Siri: An On-device DNN-powered Voice Trigger for Apple’s Personal Assistant. Retrieved from https://machinelearning.apple.com/2017/10/01/hey-siri.html

Phonemes From Our Phones: Natural Language Processing

Imagine creating a step-by-step list to understanding language. That’s the daunting task that computer scientists face when developing algorithms for Natural Language Processing (or NLP).

First of all, there are roughly 150,000 words in the English language (Oxford English Dictionaries, n.d.), and many of them are synonyms or words with multiple meanings. Think of words like “back” or “miss,” which would be maddening for an English learner to understand: “I went back to the back of the room and lay on my back.” or “Miss, even though you missed the point, I will miss you when you’re gone.”
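
As a small, hedged example of how NLP software can attack this kind of ambiguity, the sketch below uses NLTK’s implementation of the Lesk algorithm to guess which WordNet sense of “back” fits the context (assuming the nltk package is installed and the WordNet data can be downloaded; the exact sense returned depends on the corpus):

```python
import nltk
from nltk.wsd import lesk

nltk.download("wordnet", quiet=True)  # WordNet sense inventory used by Lesk

context = "I went back to the back of the room and lay on my back".split()
sense = lesk(context, "back", pos="n")  # ask for the most likely noun sense here
print(sense, "-", sense.definition() if sense else "no sense found")
```

Classical approaches like this compare dictionary definitions against the surrounding words; modern systems learn those context cues statistically, but the underlying problem of word-sense disambiguation is the same.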

After parsing through those tens of thousands of words and all their associated meanings and variations, there arises the issue of dialects. English in Australia sounds different than English in Ireland, which sounds different than English in Canada. Moreover, even within a country, there can be multiple dialects: in the United States, consider how different people sound from Mississippi compared to Michigan, or Massachusetts compared to New Mexico. This blog post by internet linguist Gretchen McCulloch dives into some of these issues, and raises another interesting point: how do we teach computers to read, pronounce, and/or understand abbreviations and the new forms of English specific to internet communication, such as “lol,” “omg,” and “smh”?

Other issues such as tone and inflection can drastically change the meaning of a sentence when spoken aloud. I found one example from the Natural Language Processing video from Crash Course Computer Science to be especially powerful, where they took a simple sentence “She saw me” and changed the meaning 3 times by altering the inflection (Brungard, 2017):

“Who saw you?” … “She saw me.”

“Who did she see? … “She saw me.”

“Did she hear you or see you?” … “She saw me.”

I want to take a brief moment to appreciate the Crash Course Computer Science video series. That series takes extremely dense and complex topics and packages them into brief, comprehensive, lighthearted videos with delightfully animated (and often topical) visual aids and graphics. I will undoubtedly be returning to them for many more computer science-related quandaries. 

Anyway, all of the obstacles that make natural language processing difficult to program (vocabulary, synonyms, dialects, inflection, tone, etc.) change from language to language. Designing for Spanish or Chinese or Arabic involves many of the same hurdles as English, plus new ones unique to each language and its particular nuances. Luckily for us, companies like Google are releasing massive pretrained language models like BERT, whose largest version stacks 24 Transformer blocks with a hidden size of 1,024 and roughly 340 million parameters, capable of processing (and, in effect, “learning” from) billions of words across multiple languages and “even surpassing human performance in the challenging area of question answering” (Peng, 2018). Progress like this helps explain why “talking robots” like Siri and Alexa have become less creepy-sounding, more efficient, and much more popular in recent years.
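To get a feel for what a pretrained model like BERT has absorbed, here is a minimal sketch, assuming the Hugging Face transformers library and its publicly hosted bert-base-uncased checkpoint (a smaller sibling of the model Peng describes), that asks BERT to fill in a missing word:

# A tiny demo of a pretrained BERT model predicting a masked word.
# Assumes: pip install transformers torch (the model weights download on first run).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT guesses the most likely fillers for the [MASK] token from context.
for prediction in unmasker("The capital of France is [MASK]."):
    print(f"{prediction['token_str']:>10}  (score: {prediction['score']:.3f})")

The specific guesses matter less than the fact that the model has soaked up enough statistical structure from billions of words to make sensible ones; that same machinery sits behind better question answering and less robotic-sounding assistants.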

Obviously, NLP is a huge undertaking for computer scientists, and there is still plenty of work to be done before computers can consistently, efficiently, and seamlessly understand and interact with human language. But with the sheer amount of language and linguistic data available online now (and increasing at an exponential rate), we may look back on this conversation in 5-10 years and laugh. And the computers might laugh with us.

 

References

Brungard, B. (2017). Natural Language Processing: Crash Course Computer Science #36 [Video]. PBS Digital Studios. Retrieved from https://www.youtube.com/watch?v=fOvTtapxa9c

How many words are there in the English language? (n.d.). Oxford English Dictionaries. Retrieved from https://en.oxforddictionaries.com/explore/how-many-words-are-there-in-the-english-language/

McCulloch, G. (2017). Teaching computers to recognize ALL the Englishes. Retrieved from https://allthingslinguistic.com/post/150556285220/teaching-computers-to-recognize-all-the-englishes

Peng, T. (2018, October 16). Best NLP Model Ever? Google BERT Sets New Standards in 11 Language Tasks. Medium. Retrieved from https://medium.com/syncedreview/best-nlp-model-ever-google-bert-sets-new-standards-in-11-language-tasks-4a2a189bc155

Can You Read This? Thank a Data Scientist!

Daniel Keys Moran, an American computer programmer and science fiction writer, once said, “You can have data without information, but you cannot have information without data.” This seems like a fairly straightforward way of distinguishing between data and information, right? Data is everywhere: artificially intelligent machine-learning software is embedded in nearly all of our devices, monitoring and recording our every digital move, to the point where almost every aspect of our daily activity (sometimes even when we’re offline!) is quantified and turned into data. In the digital realm, information is what emerges when context gives meaning to this ever-growing stockpile of data.

As all of this digital information continues to grow more diverse and complex, the need arises to better classify and categorize it. Dr. Irvine, a professor of Communication, Culture & Technology at Georgetown University, distinguishes among kinds of data by splitting them into subgroups. He begins by writing, “Any form of data representation is (must be) computable; anything computable must be represented as a type of data. This is the essential precondition for anything to be ‘data’ in a computing and digital information context” (Irvine, 2019, p. 2). According to Irvine (2019), “data” can be seen as:

  • Classified, named, or categorized knowledge representations (tables, charts, graphs, directories, schedules, etc., with or without a software and computational representation)
  • Information structures (represented in units of bits/bytes, such as internet packets)
  • Types of computable structures (text characters & strings, types of numbers, emojis, etc., with standard bytecode representations)
  • Structured vs. unstructured data
    • Structured: categorized and labeled in a database
    • Unstructured: data transmitted in email, texts, social media, etc., and stored in data services (like “the cloud”)
  • Representable logical and conceptual structures: an ‘object’ with a class/category and various attributes or properties assigned and understood
  • ‘Objects’ in databases, as units of knowledge representation (such as all the items in an Amazon category, or the full list of movies directed by Quentin Tarantino in IMDb)
  • Data decomposed into values and distributions in ML (nodal) algorithms, such as data points in a graph

One subgroup of data I found particularly interesting was computable text structures: the characters we type every day, standardized by the International Unicode Standard. As Irvine (2019) writes, “Unicode is the data ‘glue’ for representing the written characters (data type: ‘string’) of any language by specifying a code range for a language family and standard bytecode definitions for each character in the language” (p. 3). This covers all the letters, numbers, symbols, and accents of different languages, as well as special characters from math and science, and even emojis! Each of these small representations of language and expression has its own specific byte sequence that must be rendered before we can make meaning out of it.

According to the Unicode website, there are over 1,700 different emojis on a modern digital keyboard (counting the skin-tone variations of each), and each one renders slightly differently across platforms like Google, Facebook, and Twitter. That’s a LOT of data, packaged as information and stuffed into our sleek, organized emoji libraries before being projected as “character shapes to pixel patterns on the specific screens of devices” (Irvine, 2019, p. 4).
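To make the bytes behind the characters concrete, here is a minimal sketch in plain Python (no libraries, just a few sample characters I picked) that prints each character’s Unicode code point and its UTF-8 byte sequence:

# Every character we type is a Unicode code point with a standard byte encoding.
# Print the code point and UTF-8 bytes for a letter, an accented letter,
# a Chinese character, and an emoji.
for ch in ["A", "é", "漢", "😁"]:
    code_point = f"U+{ord(ch):04X}"                                 # Unicode code point
    utf8_bytes = " ".join(f"{b:02x}" for b in ch.encode("utf-8"))   # UTF-8 bytes, in hex
    print(f"{ch!r:>6}  {code_point:>9}  UTF-8 bytes: {utf8_bytes}")

Run it and you’ll see that “A” needs a single byte while the emoji needs four; that bookkeeping is exactly what the Unicode standard nails down so every device can turn the same bytes into the same character shapes.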

As you can see, we rely on databases to store, categorize, retrieve, and render information for us in myriad ways every day. From sending emails, to shopping on Amazon, to choosing a show on Netflix, to checking the statistics of your favorite athlete or team, to simply opening a smartphone app, databases are always at work, collecting, distributing, and computing information at a clip that’s hard for the average person to fathom.
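As a toy illustration of the structured side of that work, here is a minimal sketch using Python’s built-in sqlite3 module; the films table and its three rows are hypothetical stand-ins for the kind of labeled, categorized records a service like IMDb maintains at vastly greater scale:

# A tiny structured database: labeled rows that can be queried by attribute.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE films (title TEXT, director TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO films VALUES (?, ?, ?)",
    [
        ("Pulp Fiction", "Quentin Tarantino", 1994),
        ("Jackie Brown", "Quentin Tarantino", 1997),
        ("Get Out", "Jordan Peele", 2017),
    ],
)

# Retrieve one 'object' category: every film in the table directed by Tarantino.
rows = conn.execute(
    "SELECT title, year FROM films WHERE director = ? ORDER BY year",
    ("Quentin Tarantino",),
).fetchall()
print(rows)  # [('Pulp Fiction', 1994), ('Jackie Brown', 1997)]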

However, while these databases may seem “artificially intelligent” and autonomous (and many are equipped with machine learning AI algorithms to expedite their processes), they still must be designed, created, coded, managed, and maintained by human computer scientists. In their book Data Science, Kelleher and Tierney (2018) confirm that the total autonomy of these complex databases is a popular myth, saying, “In reality, data science requires skilled human oversight throughout the different stages of the process. Human analysts are needed to frame the problem, to design and prepare the data, to select which ML algorithms are most appropriate, to critically interpret the results of the analysis, and to plan the appropriate action to take based on the insight(s) the analysis has revealed” (p. 34).

So take a moment to appreciate the impressive work that data scientists do, even if most of it is behind the scenes (or behind the screens, if you will). We owe a lot of our digital luxuries to their difficult, meticulous jobs. And for that, I say 👍👏😁.

 

References
Irvine, M. (2019). Distinguishing Kinds and Uses of “Data” in Computing and AI Applications. Retrieved from https://drive.google.com/open?id=1C0zQ9md4WG5VswVdBOCkyw28L39HGZXv
Kelleher, J. D., & Tierney, B. (2018). Data science. Cambridge, Massachusetts: The MIT Press.
Moran, D. K. (n.d.). BrainyQuote. Retrieved from https://www.brainyquote.com/quotes/daniel_keys_moran_230911?src=t_data
The Unicode Consortium. (n.d.). Retrieved from https://unicode.org/

Day to Day, a Data Daze

The smooth, clean interface of our modern communicative technology rarely shows it, but there is a lot that goes on behind the scenes when we share information and interact with each other online. As Dr. Irvine (2019) writes in his introduction to understanding information, data, and meaning, “We only notice some of the complex layers for everything that needs to work together when something doesn’t work (e.g., web request fails, data doesn’t come in, Internet connectivity is lost)” (p. 4).

Back in 1838, Samuel Morse gave the first public demonstration of his telegraph in the United States. This mattered because it set off an evolution in public discourse that hasn’t stopped since. According to White & Downs (2015), “[Morse] used a binary system– dots and dashes– to represent letters in the alphabet. Before Morse, smoke signals did much the same thing, using small and large puffs of smoke from fires” (p. 258). Morse’s binary system has transformed (relatively quickly!) into the data-driven communication of today, where binary code (1s and 0s) is grouped into bytes, which are assembled into packets and sent across the internet. The packets are sequenced and reorganized after being received by the computer(s) on the other end of the transmission (White & Downs, p. 259). All of our digital information and media, in their myriad forms, begin as simple binary but can be rendered as text, emojis, images, videos, audio, and other forms of dynamic communication.
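As a toy illustration of that last step, here is a minimal sketch in plain Python, with a made-up packet size and a deliberately scrambled delivery order, that splits a message into numbered packets and reassembles it on the receiving end:

# Toy packetization: split a message into numbered chunks, scramble their
# arrival order, then reassemble them by sequence number.
import random

message = "All our digital media begins as simple binary.".encode("utf-8")
PACKET_SIZE = 8  # bytes per packet (real network frames carry roughly 1,500 bytes)

# "Send": break the byte stream into (sequence_number, chunk) packets.
packets = [
    (seq, message[i:i + PACKET_SIZE])
    for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
]

random.shuffle(packets)  # packets may arrive out of order

# "Receive": sort by sequence number and stitch the bytes back together.
reassembled = b"".join(chunk for _, chunk in sorted(packets))
print(reassembled.decode("utf-8"))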

Neil Postman (1986), a cultural theorist and the author of Amusing Ourselves to Death, takes a page out of Marshall McLuhan’s book (“the medium is the message”) and makes a compelling argument about how the form of our communication directly shapes the content that can be conveyed. He also references smoke signals, as White & Downs did, which is what made me think of this passage:

It is an argument that fixes its attention on the forms of human conversation, and postulates that how we are obliged to conduct such conversations will have the strongest possible influence on what ideas we can conveniently express. And what ideas are convenient to express inevitably become the important content of a culture. I use the word “conversation” metaphorically to refer not only to speech but to all techniques and technologies that permit people of a particular culture to exchange messages. In this sense, all culture is a conversation or, more precisely, a corporation of conversations, conducted in a variety of symbolic modes. Our attention here is on how forms of public discourse regulate and even dictate what kind of content can issue from such forms. To take a simple example of what this means, consider the primitive technology of smoke signals. While I do not know exactly what content was once carried in the smoke signals of American Indians, I can safely guess that it did not include philosophical argument. Puffs of smoke are insufficiently complex to express ideas on the nature of existence, and even if they were not, a Cherokee philosopher would run short of either wood or blankets long before he reached his second axiom. You cannot use smoke to do philosophy. Its form excludes the content.

Why, then, with the boundless potential of digital interaction, do we still struggle with miscommunication and a lack of civil public discourse? Part of the reason may be the sheer amount of information loaded onto the web each day, hour, and even minute. It’s already almost impossible to keep up with the Niagara of data flooding across our screens, and the quantity is increasing at an exponential rate.

A 2017 blog post by Jeff Schultz of Micro Focus noted that “90% of the data on the internet has been created since 2016, according to an IBM Marketing Cloud study.” The post also references an infographic outlining some staggering statistics about internet usage and data transmission/consumption, broken down by the minute.

How can we better harness this vast and growing information system to meet our communicative needs? Does the burden of responsibility fall on us as the producers and consumers of communication data, rather than on the messenger(s)? And what can we expect in the next five years in the fields of information, data, and meaning?

 

References

Irvine, M. (2019, February 4). Introduction to the Technical Theory of Information.

Postman, N. (1986). Amusing ourselves to death: public discourse in the age of show business. London: Heinemann.

Schultz, J. (2017, October 10). How much data is created on the internet each day? Retrieved from https://blog.microfocus.com/how-much-data-is-created-on-the-internet-each-day/

White, R., & Downs, T. E. (2015). How computers work: the evolution of technology (Tenth edition). Indianapolis, IN: Que.

Is Supervised Machine Learning Standardizing Our Selfies?

Group Name: The Georgetown AI 3

Group Members: Zach Omer, Beiyuan Gu, Annaliese Blank

Alpaydin – Machine Learning Notes – Pattern Recognition and Neural Networks

Chapter 3:

  • Captcha: a corrupted image of words or numbers that the user must type to prove they are a human and not a computer (pg. 58)
  • Semi-parametric estimation: a model that maps the input to the output but is valid only locally; different types of inputs use different models (pg. 59)
  • Localizing data in order to increase model complexity where it is needed
  • It’s often best to start with a simple method: similar inputs have similar outputs

Generative Models

  • Generative models represent how our beliefs can be reflected in, or based on, the data we generate (pg. 60)
  • Character recognition: identity and appearance
  • Invariance: size does not affect identity
  • The generative model is CAUSAL and explains how the data is generated by the hidden factors that cause it (pg. 62)
  • Facial recognition
  • Affective computing: adapts to the mood of the user
  • Biometrics: recognition of people by their physiological and behavioral characteristics

Dimensionality

  • Inputs are used for decision making (pg. 73)
  • Dimensionality reduction: both the complexity of the model and of the training algorithm depend on the number of input attributes
  • Time complexity: how much calculation do we need to do?
  • Space complexity: how much memory do we need?
  • Decreasing the number of inputs always decreases time and space requirements, but how much they decrease depends on the particular model and learning algorithm
  • Smaller models can be trained with fewer data
  • Reducing dimensionality can be done in two ways: feature selection and feature extraction (see the sketch after this list)
    • Feature selection: subset selection, where we choose the smallest subset of the input attributes that leads to maximum performance
    • Feature extraction: new features that are calculated from the original features
  • Decision trees
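To make the two routes concrete, here is a minimal sketch, assuming scikit-learn and its bundled Iris dataset (a stand-in for any table of input attributes), that reduces four inputs to two, once by selecting existing features and once by extracting new ones:

# Two routes to lower dimensionality on the same data:
#   feature selection keeps a subset of the original columns,
#   feature extraction (here, PCA) computes new features from all of them.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)         # 150 samples, 4 input attributes
print("original shape:", X.shape)         # (150, 4)

X_selected = SelectKBest(f_classif, k=2).fit_transform(X, y)
print("feature selection:", X_selected.shape)    # (150, 2), original columns kept

X_extracted = PCA(n_components=2).fit_transform(X)
print("feature extraction:", X_extracted.shape)  # (150, 2), newly derived features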

Chapter 4:

  • Neural Networks and Deep Learning:
    • Perceptron model
    • Learning algorithms adjust the connection weights between neurons (pg. 88); a minimal sketch of one such update rule follows these notes
    • Hebbian learning rule: the weight between two neurons gets reinforced if the two are active at the same time – the synaptic weight effectively learns the correlation between the two neurons
    • Error function: the sum of the differences between the actual outputs the network estimates for an input and the required values specified by the supervisor
    • If we define the state of a network as the collection of the values of all its neurons at a certain time, recurrent connections allow the current state to depend not only on the current input but also on the states at previous time steps, calculated from earlier inputs
    • SIMD vs. MIMD
    • Simple cells vs. complex cells
  • Deep Learning
    • Deep neural networks: each hidden layer combines the values in its preceding layer and learns more complicated functions of the input
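As referenced in the notes above, here is a minimal sketch in plain NumPy, with a tiny made-up dataset (the logical AND function), of the kind of weight adjustment Alpaydin describes: a single neuron nudges its connection weights in proportion to the error between its output and the value the supervisor requires:

# A single artificial neuron learning the logical AND function.
# Each pass, the weights move a small step in the direction that shrinks
# the error between the neuron's output and the supervised target.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 0, 0, 1], dtype=float)                      # targets (AND)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # connection weights, initialized randomly
b = 0.0                             # bias term
lr = 0.1                            # learning rate

for epoch in range(50):
    for x_i, target in zip(X, y):
        output = 1.0 if x_i @ w + b > 0 else 0.0  # threshold activation
        error = target - output                   # the supervisor's correction
        w += lr * error * x_i                     # reinforce or weaken weights
        b += lr * error

print("learned weights:", w, "bias:", b)
print("predictions:", [1.0 if x @ w + b > 0 else 0.0 for x in X])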

In relation to the Karpathy article, this piece helped us understand how the data we produce and generate through a “selfie” can be unpacked and mechanically understood from an IT standpoint. 

 

Case Study – Karpathy – What a Deep Neural Network Thinks of Your #Selfie (notes)

Convolutional Neural Networks

  • The house numbers and street signs in the graphic remind me of those “prove you’re not a robot” activities that you have to do when logging in or creating an account on certain sites. Are those just collecting human input to enhance their ConvNet algorithms?!
  • ConvNets are:
    • Simple (one operation, repeated a lot)
    • Fast (image processing in tens of milliseconds)
    • Effective (and function similarly to the visual cortex in our own brains!)
    • A large collection of filters that are applied on top of each other (a minimal filtering sketch follows this list)
      • Initialized randomly, and trained over time
        • “Resembles showing a child many images of things, and him/her having to gradually figure out what to look for in the images to tell those things apart.”
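Here is a minimal sketch of that filtering idea in plain NumPy, using a tiny made-up image and one hand-written filter: the filter slides across the image and responds strongly wherever the pattern it encodes (a vertical edge) appears. A ConvNet stacks many such filters and learns their values from data instead of having them written by hand:

# One hand-coded convolution filter applied to a tiny grayscale "image".
import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # left half dark, right half bright

edge_filter = np.array([[-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])  # responds to vertical edges

fh, fw = edge_filter.shape
out_h = image.shape[0] - fh + 1
out_w = image.shape[1] - fw + 1
response = np.zeros((out_h, out_w))
for i in range(out_h):
    for j in range(out_w):
        patch = image[i:i + fh, j:j + fw]
        response[i, j] = np.sum(patch * edge_filter)  # one filter "firing"

print(response)  # the largest values line up with the dark-to-bright boundary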

The Selfie Experiment

  • Interesting experiment, but not all selfies?
    • Full body portraits, couples photographed by a third party, mirror selfies (are they the same as front-facing camera selfies?), soccer players (?), and King Joffrey from Game of Thrones

 

    • Was this human or computer error? Also, why was Joffrey ranked lower than the soccer players? His forehead is even cut off in the shot (which was one of the tips)!
  • Didn’t give a working definition of selfie in the article
  • Didn’t need an algorithm for most of the selfie advice (be female, put a filter/border on it, don’t take in low lighting, don’t frame your head too large, etc.)
    • Is this natural human bias showing through in the implementation of an algorithm that ranks selfies (a very human idea)?
  • Reflections on supervised learning
    • Supervised learning is an approach in which we train the machine using data that has already been classified. By that definition, the selfie experiment is an application of supervised learning: the researcher fed the machine 2 million selfies that were pre-labeled as “good” or “bad” according to his own criteria.
    • In our opinion, supervised learning is not a great fit for this experiment, because “good” and “bad” are ambiguous concepts, which makes it difficult to sort 2 million selfies into those two categories. We believe the classification of training data for supervised learning should be uncontroversial. For example, if you are planning to train a machine to recognize a toad, you feed it a great number of pictures that either contain toads or do not. In the selfie experiment, by contrast, the classification of selfies as “good” or “bad” is not convincing: it is based on individual judgment, which introduces significant uncertainty. And, as pointed out above, there are some errors in the data. This makes us reflect on the reliability of supervised learning in cases where the training data is not well classified. If it is true that “the more training data, the better,” does the quality of that data matter just as much?
    • As we see it, unsupervised learning would be a better fit for this experiment. Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labeled responses.

In this case, unsupervised learning could help find interesting patterns in selfies so that we can see the distinctions between different clusters, which might prove more illuminating. A minimal sketch of the contrast between the two approaches follows.
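The sketch below, assuming scikit-learn and using random stand-in feature vectors in place of real selfie data, contrasts the two setups: a supervised classifier that can only learn because someone supplied labels, versus a clustering algorithm that groups the very same data with no labels at all:

# Supervised vs. unsupervised learning on the same (made-up) feature vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))            # 200 "selfies", 5 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in "good"/"bad" labels

# Supervised: the model learns a decision rule from the human-provided labels.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy on its own labels:", clf.score(X, y))

# Unsupervised: no labels at all; the algorithm simply groups similar vectors.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))

If the labels themselves are noisy or subjective, as we argue the good/bad selfie labels are, the supervised model faithfully learns that noise, while the clustering approach at least lets the structure of the data speak for itself.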

t-SNE

  • “takes some number of things (e.g. images in our case) and lays them out in such way that nearby things are similar”
  • Visualized pattern recognition
  • Reminded of artwork that is made up of smaller images (mosaic art)

  • t-SNE is a kind of unsupervised learning (a minimal sketch of its use follows).
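For completeness, here is a minimal sketch of that layout step, assuming scikit-learn and using random vectors as stand-ins for image features:

# t-SNE: lay high-dimensional items out in 2-D so that similar items land nearby.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))  # stand-ins for 100 image feature vectors

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(embedding.shape)  # (100, 2): one x/y position per image, ready to plot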

Concluding Thought(s)

  • Everyone has their unique way of taking selfies. It’s a manifestation of our personality, our digital presence, our insecurities, our “brand.” While it’s fun to run algorithmic tests for pattern recognition, and even to collect information on different ways of taking selfies, if a computer starts dictating what makes a selfie ‘good’ (a subjective term to begin with), we’re taking steps toward standardizing a semi-unique form of expression in the Digital Age. If everyone’s selfies start looking the same in an effort to ‘look better’ or get more likes, the world will lose some of its charm.
  • Can facial recognition security really be trusted if there are tens, hundreds, or thousands (for some) of our selfies out there on the web being data-mined for their facial properties? Maybe so, but that seems more accessible to hackers or identity thieves than fingerprints or passwords at this point in the Digital Age. 

 

References

Alpaydin, E. (2016). Machine learning: the new AI. Cambridge, MA: MIT Press.
Karpathy, A. (2015, October 25). What a Deep Neural Network thinks about your #selfie [Personal blog]. Retrieved from http://karpathy.github.io/2015/10/25/selfie/

Veiled Veillance & Cyborgian Supervillains

I began this week’s batch of readings with the shortest in length, but I found The Society of Intelligent Veillance (Minsky et al., 2013) to be a fascinating article. Even in the first section, where the authors discuss the “society of mind,” I caught myself asking questions about the implications of their ideas. They write, “Natural intelligence arises from the interactions of numerous simple agents, each of which, taken individually, is ‘mindless,’ but, collectively, give rise to intelligence” (Minsky et al., 2013, p. 13). This makes sense: in many cases, more minds on a task can lead to better, more diverse, and sometimes even unpredictable ideas and solutions. But how do the notions of groupthink and hive mind (with their generally negative connotations) factor into this quote? Oftentimes, additional people on a task lead to mindless agreement and blind following, a way of finishing quickly along the path of least resistance. The authors apply their concept of the “society of mind” to modern computing and the rise of distributed, cloud-based computing across the internet, going so far as to quote the slogan of Sun Microsystems: “The network is the computer” (p. 13). Since computers often reflect natural human bias in their programming, are they subject to the same negative aspects of groupthink?

The section of the article on the “Cyborg Age” also caught my attention, where the authors write, “Humanistic intelligence is intelligence that arises because of a human being in the feedback loop of a computational process, where human and computer are inextricably intertwined” (Minsky et al., 2013, p. 15). Machines have acted as extensions of our bodies and senses ever since their inception, but now we’ve become reliant on them to the point of wanting to wear them as devices, and even possibly implant them in our biological makeup. This idea brought to mind the many depictions of cyborgian technology in popular media, such as (in order of realism) Will Smith’s prosthetic machine arm in I, Robot (a replaced, enhanced limb), Doc Ock’s mechanical tentacles in Spider-Man 2 (added, enhanced limbs), and Wolverine having his skeleton bonded with adamantium in X-Men. There are countless other examples, and obviously our imaginations can sometimes take us further than science. But while these types of cyborgian innovations hold tremendous potential for the human race, where does this kind of technological advancement end? Maybe it’s just the sci-fi/comic book nerd in me, but I hope it doesn’t take a destructive cyborg supervillain in 20+ years to make us realize we need to pump the brakes on these technological extensions and enhancements to our human bodies.

I also enjoyed contemplating the Society of Intelligent Veillance, and how we are now subject to “both surveillance (cameras operated by authorities) and sousveillance (cameras operated by ordinary people)” (Minsky et al, 2013, p. 14) in our everyday public activities. We are in a modern, living panopticon, enforced and perpetuated through our own insatiable internet use. So many of the videos we see on the news and social media now come from citizen journalism: people with smartphones catching an ugly encounter in a brand-name restaurant, or a racist incident on a train platform, etc. An especially chilling line from this section is, “If and when machines become truly intelligent, they will not necessarily be subservient to (under) human intelligence, and may therefore not necessarily be under the control of governments, police…or any other centralized control.” Who will these super-intelligent and capable machines answer to?

The answer, according to the level-headed Johnson and Verdicchio (2017), is computer programmers and engineers. They write, “To get from current AI to futuristic AI, a variety of human actors will have to make a myriad of decisions” (Johnson & Verdicchio, 2017, p. 587). The authors discuss how AI is often misrepresented in popular media, news coverage, and even academic writing because of (1) confusion over the term “autonomy” (machine autonomy vs. human autonomy), and (2) a “sociotechnical blindness” that neglects to include human actors “at every stage of the design and deployment of an AI system” (p. 575). This is useful reasoning to keep in mind when we grow fearful of artificially intelligent cyborgian supervillains. It’s the type of reassuring logic we need to maintain faith in the positive development and incorporation of AI in our digital age.

 

References

Johnson, D. G., & Verdicchio, M. (2017). Reframing AI Discourse. Minds and Machines, 27(4), 575–590. https://doi.org/10.1007/s11023-017-9417-6

Minsky, M., Kurzweil, R., & Mann, S. (2013). The Society of Intelligent Veillance. In 2013 IEEE International Symposium on Technology and Society (ISTAS 2013), Toronto, Ontario, Canada, 27–29 June 2013. Piscataway, NJ: IEEE.