As this course nears its end, this week’s reflection looks at the bigger picture. While we’ve discussed an array of topics from ‘Big Data’ to NLP, we have consistently seen deep-rooted issues in our current developments in AI/ML. These issues include the failure to distinguish between explanation and speculation, the failure to identify the sources of empirical gains, the use of mathematics that obfuscates or impresses rather than clarifies (e.g., by confusing technical and non-technical concepts), and the misuse of language (e.g., by choosing terms of art with colloquial connotations or by overloading established technical terms) (Lipton). This course has aimed at deblackboxing, or providing clarity on technologies that seem almost purposefully explained in a confusing manner. While this week’s readings addressed these issues and possible solutions (like XAI), the topic raises the question: where do we start?
For students pursuing Computer Science or related degrees, securing an internship or full-time role at a company working on AI/ML is an accomplishment in itself. Competition is so fierce that new hires most likely would not dare challenge their bosses or companies on ethical grounds. As time passes, complacency grows, and gratitude for a paycheck keeps people from speaking up until the damage has already been done. How do we teach students at the university level not only to recognize the wrongs of current AI/ML development practices, but also to correct them at their future places of work?
If, as the readings suggest, we rely on some form of independent governance to ensure sound practice, what incentivizes companies to hire and pay these overseers? Why would companies accept more restrictions on their work when they have been getting away with more cost-effective, profit-driving techniques? Given laws like Section 230 and increasingly outdated rulings like NYT v. Sullivan, it is clear that the government provides very little accountability for the influence technology has on individuals’ lives. Without some ruling or law from the government demanding better practice, or immense social pressure (which will not happen any time soon, precisely because of the blackboxing of our technologies), it does not seem likely our technologies will ethically improve any time soon.
As we’ve seen from articles like this, the ethics boards of large companies do not seem to have much impact on how those companies actually operate. Instead, they serve as a pretty line on a reporting PowerPoint or a get-out-of-jail-free card. Clearly, hopes that AI/ML development will become more ethical on its own are not being realized. So, to circle back to my initial question: where do we go from here?