Before this course, artificial intelligence seemed to me an untouchable, mysterious domain that ordinary people without a technical background could not enter. Even some of my friends studying data science or information systems do not know the rules behind the computer or the algorithm. We all know technology is dominant and that AI will change the world, but we do not know how it will change our world and our lives. Because we absorb information from media stories, we place AI in an overstated position. Individuals do not see how these systems work or the principles behind them; they understand AI through misleading terms, so some are afraid of emerging technologies like deep learning and machine learning. The reasons such terms circulate are complicated: big companies may intend to maintain their monopolies, media agencies may try to profit from the hype, or people may simply have many fantasies about the future and artificial intelligence. In reality, however, we are still in the weak-AI phase, far from the strong AI depicted in movies. Through this course, we learned how to de-blackbox AI and convey the concepts of the domain in an accessible way.
According to Dr. Irvine, we de-blackbox AI from four perspectives: 'the systems view,' 'the design view,' 'the semiotic systems view,' and 'the ethics and policy view.' From these four perspectives, we studied the software, the hardware, programming systems, and semiotic systems such as Unicode, and why information is presented in specific ways. Furthermore, we examined 'the dependencies for socio-technical systems' and, from the standpoint of ethics and policy, how these technologies should be regulated to fit human society.
In this week’s readings, we can see that both political and academic institutions are making efforts to avert the pessimistic predictions about AI. For example, the EU promotes the General Data Protection Regulation (GDPR) and has issued seven principles to regulate future AI. Universities, meanwhile, are studying how to obtain better predictions from AI algorithms by adjusting their parameters.
However, I believe that the effects of AI depend on the rules of human society. As the MIT Media Lab research featured in the documentary Coded Bias shows, the high error rate of facial recognition on people of color may intensify bias against minorities. In another case, the algorithm itself may be racist: health-care costs differ markedly across races, so an algorithm trained on cost data inherits that disparity. The former case contributes to inequities, and those inequalities feed back into the latter, which easily traps us in a vicious circle and leads to more severe social injustice. With AI or without AI, the predictions will be full of biases; after all, machine learning is based on human knowledge and human records. I certainly expect a better future with more intelligent AI and believe our efforts will work to some extent. Still, I also think that to improve the quality of the results our programs produce, we must improve our society first.
Brandom, R. (2018, May 25). Everything you need to know about GDPR. The Verge.
Gelman, A. (2019, April 3). From Overconfidence in Research to Over Certainty in Policy Analysis: Can We Escape the Cycle of Hype and Disappointment? The Ethical Machine.
Irvine, M. (n.d.). CCTP-607: Leading Ideas in Technology: AI to the Cloud. Retrieved April 17, 2021, from https://drive.google.com/file/d/1Hk8gLXcgY0G2DyhSRHL5fPQn2Z089akQ/view
Lipton, Z. C., & Steinhardt, J. (2018). Troubling Trends in Machine Learning Scholarship. ArXiv:1807.03341 [Cs, Stat]. http://arxiv.org/abs/1807.03341
Vincent, J. (2018, July 26). The tech industry doesn’t have a plan for dealing with bias in facial recognition. The Verge. https://www.theverge.com/2018/7/26/17616290/facial-recognition-ai-bias-benchmark-test
Vincent, J. (2019, April 8). AI systems should be accountable, explainable, and unbiased, says EU. The Verge. https://www.theverge.com/2019/4/8/18300149/eu-artificial-intelligence-ai-ethical-guidelines-recommendations