Synthesis: Inside The Black Box – AI

AI is a great leap forward, but in most ways it is still in its infancy. Like a tool found in our grandfather’s workshop, we may find a use for it, but we don’t fully know how to use it properly. There are many mysteries concerning AI systems and how they work, and until we truly open the black box, we won’t have control over them.

There are two things I want to touch upon. First, how these technologies work. Second, the implications of how they work.

AI, like most of my field of psychology, works on the statistical probability of a phenomenon occurring based on data from the past. This rests on three foundations. First, the understanding that if something has happened before, it will most likely happen again. Second, that the past causes the present and the future. Third, that information is interconnected, creating relationships between the phenomena being measured. This leads to the creation of models to determine how and why phenomena occur in the first place. This is where AI and scientific thought diverge in a lot of ways. AI uses the information fed into it to create models which, for the most part, cannot be understood even by those who create them. This means the models being created are focused on the end result rather than the journey to get there. It also means that as long as we believe the AI is working, it doesn’t seem to matter what biases went into creating the model that determines the answer. The data fed into a system directly shapes the results we get out, so if the data set is biased, the results are biased too. The problem with AI, then, is that there is no oversight of the biases in the data, which leads to overconfidence in the validity of the results until someone hurt by these systems speaks up. Ultimately, history shows that humans are fraught with bias, and that when systems are used this way, there may be a hundred people suffering in silence for every one who speaks up.
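The point that biased data in means biased results out can be made concrete with a deliberately tiny sketch. The scenario below is entirely hypothetical: a toy "model" that, like the statistical systems described above, simply predicts the majority outcome it has seen in the past for each group. Because the invented history is skewed against group "B", the model faithfully reproduces that skew.

```python
from collections import defaultdict

# Hypothetical historical decisions: (group, approved).
# The record itself is skewed: group "B" was approved far less often.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

# A minimal "model": tally the past, then predict the majority outcome
# previously seen for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, denials]
for group, approved in history:
    counts[group][0 if approved else 1] += 1

def predict(group):
    approvals, denials = counts[group]
    return approvals > denials

print(predict("A"))  # True  -- the past approval pattern repeats
print(predict("B"))  # False -- the historical bias is reproduced as-is
```

Nothing in the model is malicious; it is simply faithful to its data. That is exactly why, without oversight of the data itself, the bias stays invisible until someone on the wrong side of it speaks up.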

The second is the implications of how these systems work. There is a detachment from the method, an exclusion of the human element, that makes us so confident in the results of the process: a kind of no-holds-barred event where, as long as AI arrives at the answer, we don’t care how it got there. We are only at the tip of the iceberg with AI, with most of its functions still on their way. Though most of them may be helpful and benign, it’s important to understand that AI will be used as a great sorter of things. Just as resources are distributed unevenly, so will be the functionality of AI. Choice will become a luxury, and we will be faced with a facsimile of options. For most people that will work, but there will always be those left in the wake of the oncoming wave. The way AI works presupposes a sense of expertise and knowledge about you, but in truth AI has to work the way data works: it has to flatten and categorize people into imperfect containers to produce usable results. Just like the machines themselves, we will need to operate within a harder set of parameters. Life is messy, and so are people, but making determinations based on these fixed parameters will further confine those at the bottom. What we may improve on in life, we may lose in freedoms. AI isn’t all bad, and not everything it does is an existential crisis, but it’s important to have these conversations about AI and its implications before we get there. Given the choice, people may choose to live a life without it.
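The "imperfect containers" above are not a metaphor; they are how categorization actually behaves. A minimal sketch, using invented income numbers and an invented cutoff, shows how flattening works: two nearly identical people land in different containers, while two very different people land in the same one.

```python
# Hypothetical incomes for three people with very different lives.
incomes = [29_900, 30_100, 85_000]

# A fixed, imperfect container: everything about a person is reduced
# to which side of an arbitrary cutoff they fall on.
def bracket(income):
    return "low" if income < 30_000 else "high"

print([bracket(i) for i in incomes])  # ['low', 'high', 'high']
```

The first two people differ by $200 yet are sorted into different categories; the second and third differ by over $50,000 yet are treated as the same. Any system making determinations from these containers inherits exactly this flattening.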

AI is a mystery, and it is only getting more mysterious. The future, I guarantee, will at least get more interesting. The more you know, the less you may understand, but learning about AI is important if we are to make choices about our technologies in the future.