In recent years, the application of increasingly complex AI has been growing. However, it is dangerous and unreliable to base decisions on unexplainable techniques, because people prefer to adopt techniques they can fully trust. Better interpretability helps people make more reasonable predictions, correct the discrimination that exists in a model, and produce a more explainable model overall. AI shares more concepts with philosophy than any other scientific discipline, because it touches on questions of consciousness (McCarthy, 2012). However, AI is, after all, a set of complex algorithms built by humans; it is a representation of intelligence, or perhaps a deep explanation of intelligence, but not intelligence itself, and nowhere close to self-awareness.
To summarize what I have learned in this class in one sentence: data and information are used to feed AI and further develop ML, which in turn outputs more data and information. The irony is that humans have trained machines so hard that they can replace us at the majority of our tasks, leaving us as the boss. However, with the rapid development of AI/ML, the decreasing transparency of the technology, and our growing reliance on it, it will soon become hard to provide actual, reliable evidence for its decisions. We are creating a huge black box that we think we understand, because we feed it data that we collected and we can analyze the outcomes. Humans have become even more confident because we think we fully understand the concepts and the algorithms, since they were built by us. But the question is, do we? Are the outcomes extracted from AI really more accurate and more just? Or are we simply more willing to disguise the truth with so-called absolute accuracy, just because the information was produced by a machine?
What I am concerned about is fully presented in the documentary "Coded Bias". A facial recognition system built by an African American female researcher was unable to accurately recognize darker-skinned faces, especially those of women. It is ironic that people are planning to rely entirely on this technique. As Cathy O'Neil says in the film, algorithms use historical data to make predictions about the future. This is even more true when it comes to Deep Learning, as the small sketch below illustrates. It looks as if the future is controlled by the group of people who collect the data and know the code.
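To make that point concrete, here is a minimal sketch using entirely synthetic data and hypothetical variable names (this is not the system from the film): a model trained on past hiring decisions that favored one group learns to reproduce that bias, even though true skill is distributed identically across both groups.

    # A toy illustration (synthetic data, hypothetical scenario): a model
    # trained on biased historical decisions reproduces the bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)    # 0 = majority, 1 = minority
    skill = rng.normal(0.0, 1.0, n)  # true qualification, same distribution for both groups

    # Historical labels: past decision-makers favored the majority group.
    hired = (skill + 1.0 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)
    pred = model.predict(X)

    for g in (0, 1):
        print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
              f"model hire rate {pred[group == g].mean():.2f}")

Running this, the model's predicted hire rates mirror the historical disparity: the "future" it predicts is simply the past, encoded.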
Resources:
Artificial Intelligence. (2018, July 12). Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/artificial-intelligence/#PhilAI
Lipton, Z. C., & Steinhardt, J. (2018, July 27). Troubling Trends in Machine Learning Scholarship. https://drive.google.com/file/d/1fPOFpwirqp0oLM5qu4swseQW3dt_FsoW/view
Coded Bias [Film]. (2021, March 22). Independent Lens, PBS. https://www.pbs.org/independentlens/videos/coded-bias-full-film/
McCarthy, J. (2012). The Philosophy of AI and the AI of Philosophy. Professor John McCarthy. http://jmc.stanford.edu/articles/aiphil2.html