A Call for Regulation or Education? Reframing AI Discourse

The leading issues within AI discourse stem from a lack of transparency and information surrounding emerging technologies. In tackling some of the core categories within AI, it is useful to start by looking at media representation, which is where the general public goes to get its knowledge on a given topic.

From here, we can ask: what information do we actually have concerning a topic (Deep Neural Networks, Natural Language Processing, etc.)? Where is the information coming from, and who is the intended audience? Is it an article meant to advertise a product rather than to convey accurate information?

During this course, we have covered key concepts that help demystify AI systems and lead to more informed questions, which can advance both our analysis and our understanding of the current climate surrounding these topics.

Reframing AI Discourse

One of the important distinctions to be made is that although AI is meant to “do the sorts of things that minds can do,” the intelligence (or program) does not have the ability to think for itself (Boden, 2016, p. 1).

Data and Representation

  • Everything depends on computable data; the context requires that it be defined as a specific data type
  • Deep Learning requires large amounts of data, whereas a human can recognize patterns more easily and accurately thanks to context
    • Deep learning does not mean deep or accumulated knowledge, but rather the deep stack of layers used within the network
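The points above can be illustrated with a minimal sketch (all shapes, weights, and inputs here are hypothetical, not from any real model): the “depth” is nothing more than layers stacked one after another, and the input must first be reduced to numeric data before the network can touch it.

```python
import numpy as np

# "Deep" refers to stacked layers, not accumulated knowledge.
# Every input must first be encoded as computable data (here, a float vector).

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Three stacked layers: each is just a matrix multiply plus a nonlinearity.
layers = [rng.standard_normal((4, 8)),   # layer 1: 4 inputs -> 8 units
          rng.standard_normal((8, 8)),   # layer 2: 8 -> 8
          rng.standard_normal((8, 2))]   # layer 3: 8 -> 2 outputs

def forward(x):
    for w in layers:
        x = relu(x @ w)   # depth = repeating this step, nothing more
    return x

# Real-world context (an image, a sentence) must be flattened into numbers:
x = np.array([0.5, -1.2, 3.0, 0.0])
print(forward(x).shape)  # -> (2,)
```

Nothing in this stack “knows” anything; the network is only a chain of numeric transformations whose parameters are adjusted during training.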

Autonomy of Computational Artefacts vs. Autonomy in Humans 

  • AI systems are not autonomous. Machines are not like humans: they do not have their own interests or the free will to act as they choose, which debunks the myth that machines or robots will decide to take over human life
  • The more important question to focus on: who is creating AI systems (especially autonomous weapons and other tools for destruction), and who has control over these artefacts?

Limitations of Neural Networks / Machine Learning 

  • Although Deep Neural Networks can be very useful for picking up patterns in large amounts of data, one of the main issues is that they do not tell the whole story. They can only select data based on set parameters and rules, which does not translate into human problem solving or decision making.
  • A ConvNet is a large collection of filters applied on top of one another; the network learns to recognize the labels that we give it
  • Context is important when comparing what a neural network can do vs. a human brain
  • Algorithms are designed by humans, so a computer or AI system is not inherently biased on its own. This is a big theme, because much media coverage frames AI itself as evil or bad, which takes attention away from the systems we have in place and the algorithms we design. At the same time, a major obstacle to regulation is that we do not know all of the steps involved when a deep neural network makes a classification; this opacity is due to the complexity of the system and its hidden layers.
  • Machine Learning is only one method of learning within AI systems, and therefore should not be the only focus when looking at the ethical implications of AI systems
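The “collection of filters” idea can be made concrete with a small sketch. The 3×3 filter below is hand-picked to respond to vertical edges in a toy 4×4 “image”; in a trained ConvNet, thousands of such filter values would instead be learned from labeled data (the image, kernel, and helper function here are illustrative assumptions, not any library’s API).

```python
import numpy as np

# A tiny "image": dark on the left, bright on the right (a vertical edge).
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)

# One hand-written filter that responds to vertical edges.
# A ConvNet stacks many layers of such filters and learns their values.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

def convolve2d(img, k):
    """Slide the filter over the image, summing elementwise products."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

print(convolve2d(image, kernel))  # strong uniform response: the edge is detected
```

This also illustrates the limitation noted above: the filter only reports how well each patch matches a fixed numeric pattern. It has no notion of what an “edge” means, which is where human context still matters.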

The EU’s Set of Guidelines for Trustworthy AI tackles many of the ethical issues we have discussed throughout this course: privacy and data governance, transparency, human agency, bias, environmental well-being, and accountability. At the same time, these guidelines expose the difficulties of implementing AI regulation. In learning about the ways AI is designed and programmed, it becomes clear that some of these regulations are still too vague to be effective, or to be implemented without becoming censorship. They are also very general guidelines, which continues the cycle of blanket terminology and generalizations used by tech companies to “explain” their products, services, and policies.

Given this current example, we can see that there needs to be more discourse surrounding the ethics of AI, discourse that is open and inclusive to creators and users alike. The ethical implications continually point to a need to understand what we are creating and the consequences of those creations for society as a whole. Software engineers and developers are not the only ones who need to learn how emerging technology works; moreover, teaching engineers only the mechanics and technical aspects of the process is itself an ethical issue. Education, even more than regulation, will be necessary in order to create systems that are safe and ethical. The more the information remains in the hands of a few corporations, the more likely we are to have a digital divide; without the resources to teach and inform people about how AI systems work (due to IP issues, etc.) and how to develop alternative systems, we are stuck making guidelines without the means to implement them.