This was such an interesting topic to dive deeper into, not only because it explains what I’d describe as one of today’s most widely used yet still “black-boxed” phenomena: pattern recognition, especially as it is applied to computer vision and images. Karpathy’s article doesn’t just highlight and break down the functions and uses of Convolutional Neural Networks; through his findings he also shows how something so computerized can still be very much human in terms of the societal biases it brings into play.
“In machine learning, the aim is to fit a model to the data”, explains Alpaydin (Alpaydin, 2016, 58). Computers don’t just know what to do. Someone, a human, has to feed them directions and instructions in order for them to actually do something. The computer will follow whatever set of instructions a human writes for it and execute the commands it is given. This means that a human, with all of their opinions, biases, beliefs, experiences, etc., is encoding into a non-human thing the ability to execute commands based on human characteristics, capabilities, ways of knowing and understanding. Karpathy’s “experiment” shows exactly how there are biases in algorithms, including his own, a topic I really dug into during my undergrad (one of Dr. Sample’s most-explored topics) and through my research on uses of ML and NLP in IPAs, focusing on speech, language recognition and more.
Karpathy explains how “a ConvNet is a large collection of filters that are applied on top of each other”. In these convolutional neural networks, “an artificial neuron, which is the building block of a neural network, takes a series of inputs and multiplies each by a specified weight/number/characteristic and then sums those values all together” (CrashCourse, #35, 2017). To break it down, artificial neural networks have artificial neurons that basically take numbers in and spit more numbers out (CrashCourse, #34, 2017). You have the input layer, the hidden layers and the output layer. The hidden layers are pretty much where it all happens: the computer sums the weighted inputs, applies the biases and then the activation function, and this is computed for all the neurons in each layer (CrashCourse, #34, 2017). The deeper the neural net, the more complex the patterns it can pick up, which is part of what makes an AI “weaker” or “stronger”. The network can learn to find its own useful kernels/inputs and learn from those. In the same way, ConvNets use stored information, banks of these neurons, to process image data.
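To make that concrete, here is a minimal Python sketch of what a single artificial neuron does: multiply each input by a weight, sum those products together with a bias, and pass the result through an activation function. The numbers and the choice of ReLU activation are purely illustrative, not taken from any of the readings.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """One neuron: weighted sum of the inputs, plus a bias, through an activation."""
    weighted_sum = np.dot(inputs, weights) + bias
    return max(0.0, weighted_sum)  # ReLU activation: keep positive values, zero out negatives

# Toy example: three made-up input values, three made-up weights, one bias
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
b = 0.2
print(artificial_neuron(x, w, b))
```

A full layer just repeats this for many neurons at once, and the outputs of one layer become the inputs of the next.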
As you run through them, convolutions happen over and over again: small filters slide over the image spatially, digging through the different layers of an image to find different features. This operation is repeated over and over, “detecting more and more complex visual patterns until the last set of filters is computing the probability of entire visual classes in the image” (Karpathy, 2015). This is the part where we come in and tell the AI how to use these filters, when, where, what we want out of them, etc. We train the ConvNets to know what to keep and what to omit by telling them what is, in a way, good or bad, pretty or ugly, etc. “Practice makes perfect” is a great saying to apply here, as these neural networks learn through reinforcement and by trial and error. The more data points you have, the more information you can collect, which means the less uncertainty you have about the classification, layering and choices made. However, since not all data points are equal and can’t all be measured appropriately, the ML model can identify where the uncertainty is highest and ask the human to label that example, learning from it. Through active learning, the model is constantly synthesizing new inputs, creating layer after layer until it reaches the wanted result and outcome (Dougherty, 2013, 3).
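To illustrate what “sliding a small filter over the image spatially” means, here is a rough Python sketch of one such filter pass. The tiny 5x5 image and the vertical-edge filter values are invented for illustration; a real ConvNet learns its filter values during training rather than having them written by hand.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter over the image and record its response at every position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + kh, x:x + kw]   # the pixels currently under the filter
            out[y, x] = np.sum(patch * kernel)  # weighted sum = one filter response
    return out

# A tiny grayscale "image" with a dark-to-bright edge, and a 3x3 vertical-edge filter
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)
print(convolve2d(image, edge_filter))  # strong responses where the edge sits
```

Stacking many of these filter passes, layer after layer, is what lets the network move from simple edges to the “entire visual classes” Karpathy describes.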
For face recognition, the input layer is the captured image, which is stored as pixels, each defined by a color and stored as a combination of three additive primary colors, RGB, as we saw in our previous lessons as well (Alpaydin, 2016, 65; CrashCourse #35). With biometrics we then get the ability to recognize and authenticate people by using their characteristics, both behavioral and physiological. Of course, this also helps with training computers to recognize mood and emotions and not just one’s identity, which trains them to learn, pick up on and adapt to a human’s or their user’s mood and feelings.
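As a quick illustration of what that input layer actually holds, here is a toy Python sketch of an image stored as RGB pixel values; the 2x2 “image” and its colors are made up for the example.

```python
import numpy as np

# A toy 2x2 "image": each pixel is a (red, green, blue) triple of 0-255 values.
image = np.array([
    [[255,   0,   0], [  0, 255,   0]],   # a red pixel, a green pixel
    [[  0,   0, 255], [255, 255, 255]],   # a blue pixel, a white pixel
], dtype=np.uint8)

print(image.shape)  # (2, 2, 3): height x width x the three RGB channels
# Networks typically see these values normalized to the 0-1 range before the first layer.
print(image.astype(float) / 255.0)
```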
During the classification process, in the segmentation and labelling stage, the image is separated into regions that are used for each particular task: the foreground, which contains the objects of interest, and the background, which is everything else and will be disregarded. Labelling the objects then comes into play, which obviously makes it easier in future use to immediately categorize or extract whatever is needed from an image; but ironically, we can even say that labelling, in many cases, foreshadows the biases that can be found in algorithms. The next step, feature extraction, is when characteristic properties of the objects come into play, distinguishing them from (or grouping them with) objects they share similarities or differences with, and so forth (Dougherty, 2013, 4-13). This further shows how biases are created even in tech, exactly because tech is basically a reflection of societal biases, issues and human systems of classification.
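Here is a minimal sketch of that segmentation-and-labelling idea, assuming the simplest possible rule: a brightness threshold splits foreground from background, and SciPy’s connected-component labelling numbers the resulting objects. Real pipelines are far more involved, and the toy image values below are invented for the example.

```python
import numpy as np
from scipy import ndimage  # used here only for connected-component labelling

# Toy grayscale image: two bright "objects" on a dark background (values 0-255)
image = np.array([
    [ 10,  10, 200, 200,  10],
    [ 10,  10, 200, 200,  10],
    [ 10,  10,  10,  10,  10],
    [180, 180,  10,  10,  10],
    [180, 180,  10,  10,  10],
], dtype=float)

foreground = image > 100                       # True = object of interest, False = background
labels, n_objects = ndimage.label(foreground)  # give each connected region its own number

print(n_objects)  # 2 objects found
print(labels)     # pixel map: 0 = background, 1 and 2 = the two labelled objects
```

Feature extraction would then measure properties of each labelled region (size, shape, color and so on) so that a classifier can tell the objects apart.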
I couldn’t stop thinking about how much Karpathy’s experiment reminded me of how, a few years ago (and by few I mean even 2-3 years ago), if you Googled “beautiful girls”, for the first few scrolls the only photos would be those of generic (pun intended: “generated”) white women, because that is what the algorithms identified as beautiful (honestly, not much has changed now either). A computer doesn’t know what is “pretty”, “ugly”, “good”, “evil”. Humans have input and labelled recognizable patterns and standards of beauty, further bringing to the surface the racism and biases that are very much present in our world, but also the underrepresentation of minority groups and BIPOC in tech. Even in Karpathy’s results, one can see the obvious majority of who is in these selfies.
Based on his explanations of what was categorized as a good and bad image, I’d definitely like to ask him what those distinctions were and how they were made. Also, how is a selfie of Ellie Goulding (the famous singer) there if he supposedly threw out and separated photos with either too many or too few likes compared to others, and from people with too many or too few followers compared to others?
Based on his worst selfies, one of the criteria is “low lighting”; however, is it just the low lighting that is categorized as bad, or is dark skin also included in that? “Darker photos (which usually include much more noise as well) are ranked very low”. This also speaks to the issue of Snapchat or Instagram filters and their inability to pinpoint and find features on people with darker skin in order to apply the filter to them.
P.S. Check back in the future for an updated list of cool articles and readings about biases in algorithms! I need to do some digging and go through my saved material and notes from previous years!
References
Ethem Alpaydin, Machine Learning: The New AI (Cambridge, MA: The MIT Press, 2016).
Crash Course Computer Science, no. 34: Machine Learning & Artificial Intelligence
Crash Course Computer Science, no. 35: Computer Vision
Crash Course AI, no. 5: Training an AI to Read Your Handwriting
Geoff Dougherty, Pattern Recognition and Classification: An Introduction (New York: Springer, 2013). Excerpt: Chaps. 1-2.
Andrej Karpathy, “What a Deep Neural Network Thinks About Your #selfie,” Andrej Karpathy Blog (blog), October 25, 2015, https://karpathy.github.io/2015/10/25/selfie/.