Practice Makes Perfect? AI Bias and ML Fairness

While the words “Artificial Intelligence” may conjure up alarmist imagery of a dystopian future (as seen in Hollywood movies like Blade Runner or shows like Westworld), perhaps the real concerns are two-pronged: 1) AI bias and machine learning fairness, and 2) the capacity of technology to mislead the public. With the prevalence of surveillance applications like Amazon Rekognition, it is now easier than ever for law enforcement and businesses to track and identify individuals. If Alexa and the Echo are Amazon’s ears of surveillance, Rekognition is now its eyes, but can we always trust what those eyes are seeing?

Studies have shown that facial recognition AI is less accurate at discerning and identifying people of color, especially women of color, marginalizing them and potentially putting them in harm’s way through misidentification. The ACLU’s perspective on Rekognition is that “the rights of immigrants, communities of color, protesters and others will be put at risk if Amazon provides this powerful surveillance system to government agencies.” The technology can be used to target other minority communities as well, compounding existing societal and police bias. Human bias can also find its way into the deep learning process, since much of machine learning fairness depends on the training data and paradigms, which are chosen by humans and not, as many believe, conjured by magic.
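To make the bias concern concrete, consider what a basic audit of such a system might look like. The sketch below is a minimal, hypothetical Python example (the group labels and match records are invented for illustration, not drawn from any real system) showing how comparing false-negative rates across demographic groups can surface the kind of disparity these studies describe:

```python
# Minimal fairness-audit sketch: compare false-negative rates across groups.
# All records here are hypothetical; a real audit would use a labeled
# benchmark with demographic annotations.

from collections import defaultdict

# (group, true_label, predicted_label) -- 1 means "this is a correct identity match"
records = [
    ("lighter_skin_male", 1, 1), ("lighter_skin_male", 1, 1),
    ("lighter_skin_male", 0, 0), ("lighter_skin_male", 1, 1),
    ("darker_skin_female", 1, 0), ("darker_skin_female", 1, 1),
    ("darker_skin_female", 1, 0), ("darker_skin_female", 0, 0),
]

positives = defaultdict(int)        # ground-truth matches seen per group
false_negatives = defaultdict(int)  # matches the model missed per group

for group, truth, pred in records:
    if truth == 1:
        positives[group] += 1
        if pred == 0:
            false_negatives[group] += 1

for group in positives:
    fnr = false_negatives[group] / positives[group]
    print(f"{group}: false-negative rate = {fnr:.0%}")
```

A gap like the one this toy data prints (0% vs. 67% missed matches) is the sort of disparity researchers have reported in commercial systems, and it only becomes visible when the evaluation data is labeled by group in the first place.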

With the advent of new media technology, deepfakes are also a rising ethical issue with potential political impact. An early example is the viral video “Golden Eagle Snatches Kid,” a humorous, harmless fake. The stakes escalate, however, when a fake depicts people of political significance espousing polarizing views. Much of the “fake news” that floats around on Facebook, Twitter, and other social media platforms has now evolved from Photoshopped images to video, making it more believable because the viewer has “seen it with their own eyes.” This has ethical and political implications for elections, with consequences that may affect entire nations and snowball into a global impact.

So how do we work toward preventing these ethical violations? Practice makes perfect: machine learning fairness will only improve with the faces the algorithms practice on. The more diverse the faces they train on, the better they will learn to recognize everyone, which opens up another Pandora’s box…what are the ethical implications of where they get the data?

References:

“Deepfake Videos Are Ruining Lives. Is Democracy Next?” The Wall Street Journal. https://www.wsj.com/articles/deepfake-videos-are-ruining-lives-is-democracy-next-1539595787

“Facial Recognition’s ‘Dirty Little Secret’: Millions of Online Photos Scraped Without Consent.” NBC News. https://www.nbcnews.com/tech/internet/facial-recognition-s-dirty-little-secret-millions-online-photos-scraped-n981921

“Racial Bias.” The Perpetual Line-Up, Georgetown Law Center on Privacy & Technology. https://www.perpetuallineup.org/findings/racial-bias