Challenges of (Not) Fitting into the Deep Learning Model

Gary Marcus offers a very detailed critique of the field of AI in “Deep Learning: A Critical Appraisal”. What caught my attention about this article is that it is presented as an intentionally self-introspective snapshot of the current state of deep learning. It looks not only at how much has been accomplished but also at where the field has failed, and at what that suggests about different approaches to deep learning in the future.

He says, “deep learning currently lacks a mechanism for learning abstractions through explicit, verbal definition, and works best when there are thousands, millions or even billions of training examples, as in DeepMind’s work on board games and Atari. As Brenden Lake and his colleagues have recently emphasized in a series of papers, humans are far more efficient in learning complex rules than deep learning systems are (Lake, Salakhutdinov, & Tenenbaum, 2015; Lake, Ullman, Tenenbaum, & Gershman, 2016).” (p. 7)

As we have mentioned many times before, deep learning struggles to produce outputs that accurately reflect complex human concepts, concepts that are difficult to reduce to something computational, to a set of yes-or-no answers. These range from language translation (I never tire of using this very convenient example) to more abstract notions such as justice.

Referring to my personal favorite example, open-ended natural language, Marcus writes, “In a problem like that, deep learning becomes a square peg slammed into a round hole, a crude approximation when there must be a solution elsewhere.” (p. 15)

Another observation in Marcus’ analysis concerns the assumption that the real world is a ‘set in stone’ reality: “deep learning presumes a largely stable world, in ways that may be problematic: The logic of deep learning is such that it is likely to work best in highly stable worlds, like the board game Go, which has unvarying rules, and less well in systems such as politics and economics that are constantly changing.” (p. 13)

Not only are the world and our knowledge of it constantly changing, but our representation of that reality through data is, most of the time, inaccurate at best and skewed at worst. To what extent, and in what ways, can we see the impact of such flawed outputs? Sternberg presents two aspects to take into consideration:

  • What exists in the data might be a partial representation of reality:

Even that partial representation might not be entirely accurate. For example, the famous case of Kodak’s film failing to properly capture non-white skin tones has a counterpart in facial recognition systems. Other controversial cases include systems that labeled pictures of Asian people as ‘blinking’ and identified Black people as gorillas. The social cost of a mistake in any AI system used by the police for decision-making is high, and such systems are more likely to produce less accurate results for minorities, since minorities are underrepresented and misrepresented in the data-set: “This also calls for transparency regarding representation within the data-set, especially when it is human data, and for the development of tests for accuracy across groups” (Sternberg 2018, October 8). A short sketch of what such a per-group accuracy test could look like follows after this list.

  • Even if the data does represent reality quite truthfully, our social reality is not a perfectly-balanced and desired state that calls for perpetuation: 

As an example, the gender and racial biases present in binary terminology are, after all, based on statistics from the offline world, well documented throughout history. However, here Sternberg offers an optimistic idea: “our social reality is not a perfectly-balanced and desired state that calls for perpetuation”. In other words, we place so much value on the data and on deep learning in this process, giving them a deterministic quality, when in reality these are not the ideas, concepts and human values we should be preserving or basing our technology on.
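Sternberg’s call for “tests for accuracy across groups” can be made concrete with a toy example. The sketch below is my own illustration, not code from Marcus or Sternberg; the function name, labels, and group tags are invented. It simply computes accuracy separately for each group, so that a disparity hidden by the overall score becomes visible.

```python
# Minimal sketch (illustrative only): per-group accuracy instead of a
# single aggregate score.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return overall accuracy and accuracy per group.

    y_true, y_pred, groups are lists aligned by index; names and data
    here are invented for illustration.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    overall = sum(correct.values()) / sum(total.values())
    per_group = {g: correct[g] / total[g] for g in total}
    return overall, per_group

# A model that looks fine overall (75% accurate) but fails completely
# on the underrepresented group "B".
y_true = ["cat", "cat", "dog", "dog", "cat", "dog", "cat", "dog"]
y_pred = ["cat", "cat", "dog", "dog", "dog", "cat", "cat", "dog"]
groups = ["A",   "A",   "A",   "A",   "B",   "B",   "A",   "A"]
print(accuracy_by_group(y_true, y_pred, groups))
# -> (0.75, {'A': 1.0, 'B': 0.0})
```

The point of the example is exactly Sternberg’s: an overall number can look acceptable while the system performs much worse for the group that was underrepresented in the data.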

In that regard, Sternberg criticizes the absolute faith placed in the output of these systems, which are regarded as more objective than humans: “Sexism, gender inequality and lack of fairness arise from the implementation of such biases in automation tools that will replicate them as if they were laws of nature, thus preserving unequal gender-relations, and limiting one’s capability of stepping outside their pre-defined social limits” (Sternberg 2018, October 8).

Marcus’ premise agrees with Sternberg’s, though it focuses more on the problem of treating deep learning as the only tool available for understanding and digitizing the world when, in reality, this tool might not fit every problem we want to solve: “the real problem lies in misunderstanding what deep learning is, and is not, good for. The technique excels at solving closed-end classification problems… And some problems cannot, given real world limitations, be thought of as classification problems at all.” (p. 15)

These ideas are not entirely new to me. However, what I found most thought-provoking about Marcus’ article is the proposal not to treat deep learning as a fixed box through which every human problem or thought must be filtered, but to understand that we have to develop other, hybrid ways of analyzing these problems beyond classification, instead of trying to force the square peg into the round hole.

 

References: