Category Archives: Week 3

Syntax and Predictability Features Unpacked

Annaliese Blank

29 January 2019

In the Boden piece for this week, she starts by discussing the syntax of AI, and I find myself a bit confused by this section. One question I developed from it: are the syntax capabilities of AI responsible for the word-search and word-predictability functions of Google and other search databases? I am still unclear on a concrete definition of this syntax and what it really does, and I'd also like to know how it actually works internally in a system. I thought about this a lot because Google is such a trusted search database for millions of users, and this capability of word prediction and word translation has marketed Google better than its competitors. From a personal standpoint, I can honestly say Google's predictor has helped me conduct easier searches and narrow down what I needed online. It's highly functional and a fairly new feature, and I'd like to understand more about how it operates, in language I can follow. Boden explains it basically as "weighted search and data mining…data mining can find word patterns unsuspected by human users. Long used for market research on products and brands, it's now being applied to big data; huge collections of texts or images…" (Boden, p. 63). This got me thinking: how is the syntax feature able to do this prediction without human emotion or human thought? Perhaps an enormous algorithm tests related words that could follow the first word you search? I would like to know more about this, since we all do Google searches all the time.
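To make my guess about "an enormous algorithm that tests relatable words" more concrete, here is a tiny sketch of how word predictability could work in principle: count which word tends to follow which in past searches, then suggest the most frequent continuations. The query data is made up for illustration, and real search engines are vastly more sophisticated than this.

```python
from collections import Counter, defaultdict

# A tiny corpus of past searches (hypothetical data, not real search logs)
queries = [
    "machine learning basics",
    "machine learning jobs",
    "machine translation",
    "weather today",
]

# Count which word follows each word across all queries
following = defaultdict(Counter)
for q in queries:
    words = q.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def predict_next(word, k=2):
    """Suggest the k most frequent continuations of `word`."""
    return [w for w, _ in following[word].most_common(k)]

print(predict_next("machine"))  # → ['learning', 'translation']
```

Even this toy version shows why no "human thought" is needed: the suggestions fall out of counting patterns in the data, which is essentially what Boden's "data mining" refers to.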

In the Alpaydin piece for this week, similar topics are discussed. To expand on what I wrote about the Boden piece, Alpaydin breaks down the supervised-learning side of this, where models are established with inputs and outputs. He says it's similar to linear regression in statistics, which is based on prediction. So when we use AI for online searches, he says we can think of this as supervised learning. He writes, "machine learning implies getting better according to a performance criterion, and in regression, performance depends on how close the model predictions are to the observed output values in the training data" (Alpaydin, p. 38). In other words, does this mean the models themselves train, learn, and improve the more we search and input new information? This got me thinking more about linear algorithms and how they relate or change in terms of predictive searches online. I'd like to expand more on this generalization concept. Does generalization rely solely on compression?
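Alpaydin's point about regression can be shown in a few lines: fit a line to input–output pairs, then measure "performance" as how close the predictions are to the observed outputs (mean squared error on the training data). The numbers here are invented for illustration.

```python
# Minimal sketch of supervised learning as linear regression.
# Training data: inputs xs and observed outputs ys (roughly y = 2x).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares: the slope/intercept that minimize squared error
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    return intercept + slope * x

# The "performance criterion": mean squared error on the training data
mse = sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / n
print(round(slope, 2), round(mse, 3))
```

This also hints at an answer to my question above: the model "improves" in the sense that refitting on more data adjusts the slope and intercept so the error criterion stays small, and generalization is about that fitted line also working on inputs it has never seen.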

Since these questions relate to my inquiry, I tried to look for clarity in the Johnson and Verdicchio piece. They cover a lot of material in their case study. A few things they mention helped me narrow down my ideas: "If an artefact is able to acquire new patterns of behavior by means of proper training, then the system's autonomy may increase over time" (Johnson & Verdicchio, p. 583). They later say, "Autonomous computational artefacts have a certain kind of unpredictability that is related to their autonomy. Because their unpredictability derives from the limitations of human users and observers, it's important to remember that autonomous computational artefacts are still bounded by their programming – even when they learn – and by their embodiment" (Johnson & Verdicchio, p. 583). This got me thinking that all of this is possible due to embedded, instruction-based algorithms that pre-determine and alter search results and predictive functions in AI systems. In its own way, could this be considered programmed human behavior, since it is based on input and can be changed?

To further this, I found a great article by Peter Sweeney for Medium, where he writes about the confusion I noted. He provides a model in which AI is made up of machine learning and deep learning: within machine learning is where the predictive feature exists, and deep learning is where the simplifying process occurs. The two combined make up the AI bubble in his diagram. He says, "prediction is the essence of intelligence. And it will remain so until a more dominant technology teaches us otherwise" (Sweeney, p. 1).

https://medium.com/inventing-intelligent-machines/prediction-is-the-essence-of-intelligence-42c786c3e5a9

Alpaydin, E. (2016). Machine learning: the new AI. MIT Press.

Boden, M. A. (2016). AI: Its nature and future. Oxford University Press.

Johnson, D. G., & Verdicchio, M. (2017). Reframing AI discourse. Minds and Machines, 27(4), 575–590.

The Mystery Behind Instagram's Recommendation System

Machine learning is a new, data-centric approach that relies on data to enable artificial intelligence to exhibit behavior as intelligent as that shown by humans. Ranking is an application area of machine learning in which a model is trained on pairs of instances so that it outputs the two in the correct order (Alpaydin, 2016, p. 81). Instagram is now adding a recommendation system that suggests posts to users based on posts liked by other accounts they follow. The recommendation system relies on machine learning over each user's past behavior, and its aim is to create a personalized feed based on how they interact with other accounts.
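Alpaydin's idea of training a ranker on pairs of instances can be sketched very simply: given pairs where the first post should outrank the second, nudge a scoring function's weights until every pair comes out in the correct order. The feature names, vectors, and training pairs below are invented for illustration and are not Instagram's actual features.

```python
# Each post is a feature vector: [likes_by_followed, recency, past_interaction]
posts = {
    "a": [3.0, 1.0, 2.0],
    "b": [1.0, 0.5, 0.0],
    "c": [2.0, 2.0, 1.0],
}
# Training pairs: (preferred, other) — the user engaged with the first more
pairs = [("a", "b"), ("c", "b"), ("a", "c")]

w = [0.0, 0.0, 0.0]  # weights to learn

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Perceptron-style updates: whenever a pair is ordered wrongly,
# move the weights toward the preferred post's features.
for _ in range(20):
    for good, bad in pairs:
        if score(posts[good]) <= score(posts[bad]):
            w = [wi + g - b for wi, g, b in zip(w, posts[good], posts[bad])]

# Rank all posts by their learned score, best first
ranking = sorted(posts, key=lambda p: score(posts[p]), reverse=True)
print(ranking)  # → ['a', 'c', 'b']
```

The learned weights never encode "why" a user likes a post; they only make the training pairs come out in the right order, which is exactly the sense in which ranking is a machine-learning problem.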

The new section, "Recommended for You", although similar to advertisements, is clearly labeled so as not to be confused with the user's own home feed. It contains the suggested posts recommended by the algorithm. Three main factors determine what a user might see in their Instagram feed: interest, recency, and relationship. Beyond these core factors, other factors influence the ranking system to a greater or lesser degree, such as frequency, following, and usage habits.

Like Apple's Siri, the Instagram recommendation system is a rule-based personal assistant that focuses on broadening users' access to content, switching from a chronological feed to an algorithmic one based on both topical relevance and personal relevance. Such relevance is related to weighted search and data mining, two of the prominent natural language processing applications in information retrieval (Boden, 2016, p. 63). Certain posts are assessed statistically and weighted by relevance, while data mining can find patterns unsuspected by human users. After this analysis, posts are ranked according to their weight and relevance, as reflected in the statistics, and then recommended to each user according to their rank.

In the Instagram recommendation system, the algorithm collects all the data from the example observations and analyzes it to discover relationships that might not be observed by humans. The input representation includes both the attributes of each post, based on keywords and hashtags, and the attributes of each user, such as their likes, comments, shares, and other interactions with certain topics and posts. These inputs are recorded by the system as data, ranked according to relevance, and used to estimate values from the sample; the output is a numerical score that measures how much the system believes a particular user will enjoy a particular post.
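The "numerical score" idea in the paragraph above can be illustrated as a weighted sum of the three core factors (interest, recency, relationship). The weights and feature values here are entirely hypothetical; Instagram's actual formula is not public.

```python
# Sketch: combine a post's feature values into one predicted-enjoyment score.
def post_score(features, weights):
    """Weighted sum of feature values → predicted enjoyment score."""
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

# Hypothetical importance weights for the three core ranking factors
weights = {"interest": 0.5, "recency": 0.3, "relationship": 0.2}

# Hypothetical measurements for one post, scaled to [0, 1]
post = {"interest": 0.8, "recency": 0.6, "relationship": 0.9}

print(round(post_score(post, weights), 2))  # → 0.76
```

Scoring every candidate post this way and sorting by the result is one simple way a system could turn per-user interaction data into a ranked, personalized feed.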

This isn't the first or only time Instagram has offered recommended content. One of its previous forms of recommended content was opt-in: you had to head to the Explore section to see the recommended posts and videos. Since they weren't pushed to your home feed, whether you saw them depended entirely on your own choices. Like the switch from a chronological feed to an algorithmic one and the introduction of ads, these changes are all based on machine learning and aim to surface more content to users so they don't miss interesting or crucial posts customized for them.

 

References:

Alpaydin, E. (2016). Machine learning: the new AI. MIT Press.

Boden, M. A. (2016). AI: Its nature and future. Oxford University Press.

Kaplan, J. (2016). Artificial Intelligence: What everyone needs to know. Oxford University Press.

Constine, J. (2018). How Instagram's algorithm works. TechCrunch. Retrieved from https://techcrunch.com/2018/06/01/how-instagram-feed-works/


Perez, S. (2017). Instagram will now add 'Recommended' posts to your feed. TechCrunch. Retrieved from https://techcrunch.com/2017/12/27/instagram-will-now-add-recommended-posts-to-your-feed/

 

Use AI to Predict the Future

Many years ago, I read a book called Predictive Analytics by Eric Siegel, the founder of Predictive Analytics World and executive editor of The Predictive Analytics Times. The book introduces many case studies showing how to use quantitative data and computation to predict human behavior. What impressed me most was a study on the relationship between stock prices and the degree of public anxiety reflected in social media. The researchers collected netizens' posts and judged each word by machine learning to estimate the average degree of public anxiety, then used it to predict future stock prices. The conclusion, shown in a figure in the book, was that the higher the degree of public anxiety, the lower the stock price will be two days later.

Note: In the book's figure, the dotted line is the change in anxiety degree and the black line is the change in stock price.
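The word-judging step the study describes can be sketched in miniature: assign each word an anxiety score, average the scores within each post, then average across posts. The lexicon and posts below are invented; the real study learned its word judgments with machine learning rather than using a hand-written list.

```python
# Hypothetical anxiety scores per word (higher = more anxious)
anxiety_lexicon = {"worried": 0.9, "fear": 0.8, "crash": 0.7,
                   "calm": 0.1, "happy": 0.05}

# Invented example posts standing in for collected social-media text
posts = [
    "worried about a market crash",
    "feeling calm and happy today",
]

def anxiety(text):
    """Average anxiety score of the scored words in one post."""
    scores = [anxiety_lexicon[w] for w in text.split() if w in anxiety_lexicon]
    return sum(scores) / len(scores) if scores else 0.0

# Average public anxiety across all posts for the day
avg = sum(anxiety(p) for p in posts) / len(posts)
print(round(avg, 3))
```

A daily series of such averages is the kind of signal the researchers then compared against stock prices two days later.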

In the book, the author only describes the uses of machine learning at a basic level, not the underlying principles. So machine learning remained a kind of magic in my mind until I read this week's articles, which open the black box of AI and ML.

Today, utilizing machine learning to predict human behavior has become commonplace in both academic and marketing fields. It is possible because the world has regularities. Things in the world change smoothly: we are not beamed from point A to point B, but pass through a sequence of intermediate locations (Alpaydin, p. 41). So we are able to derive a general model or pattern by learning from huge amounts of data, and use it to predict the future.

However, prediction can also cause many problems. For example, police now use machine learning to build demographic patterns of criminals, so they are likely to watch people fitting those patterns more closely than others, which raises discrimination problems.

Today's AI has far more abilities than we imagine. In the past, we regarded language, creativity, and emotion as intelligence belonging only to human beings, but now AI has them too, and is sometimes stronger than us. For instance, AI technology can generate many ideas that are historically new, surprising, and valuable in designing engines, pharmaceuticals, and various types of computer art (Boden, p. 68).

Owing to AI's powerful abilities, many people are afraid of it. We need to regulate AI carefully and always remember that AI was created and designed by humans, and that human actors and human behavior are always the most important parts of AI systems.

References:

Margaret A. Boden, AI: Its Nature and Future (Oxford: Oxford University Press, 2016).

Ethem Alpaydin, Machine Learning: The New AI. Cambridge, MA: The MIT Press, 2016.

Deborah G. Johnson and Mario Verdicchio, “Reframing AI Discourse,” Minds and Machines 27, no. 4 (December 1, 2017): 575–90.