Computers can’t think, and they’re not conscious.

This post is just an extended link to John Searle’s talk at Google in 2015, which is an excellent response to the two most common opinions about artificial intelligence:

  1. A fancy enough computer program will rise up, make friends with all the other computer programs, and maliciously destroy humanity.
  2. A fancy enough computer program will rise up, make friends with all the other computer programs, and become a benevolent super-being.

Searle’s main point is that, though computers can replicate many feats of human thought, there is no reason to believe they are actually thinking or being creative, and both would be necessary for the above scenarios. Nor will any set of a computer program’s features, however plentiful or powerful, necessarily constitute a consciousness. Instead, creating a true artificial intelligence will depend on complete knowledge of how the brain creates conscious thought, knowledge we’re not even close to having.

So, don’t worry: self-driving cars will never spontaneously become sentient and decide to drive us all off cliffs. They will never experience you, or anything at all.

In this post, I’ll first try to give a clearer explanation of Searle’s opening point, which is a little technical; the remainder of his talk is accessible and important. Then I’ll give a few pull-quotes and link to a bunch of related articles I’ve come across.


At the beginning of his talk, Searle draws some distinctions between uses of the words “objective” and “subjective”. Why? Because, as he argues, the misuse of these words leads to confusion in discussions of AI. Consciousness, many will say, isn’t objective: so it can’t be studied by science, and logic doesn’t apply to it.

But consciousness is only subjective as a matter of being, not as a matter of knowledge. When something is subjective as a matter of being, its existence depends on its relationship to an observer. Our experiences are subjective as a matter of being, but their contents and causes are objective as a matter of knowledge.

The word for anything that is a matter of being is “ontological”, and the word for anything that is a matter of knowledge is “epistemological”. Facts can be objective or subjective in both senses. Consider your experience of color:

[Image: a flower]

The flower exists (epistemologically objective) independently of our experience (ontologically objective). Our internal vision of the flower is a fact of nature (epistemologically objective), but is part of our experience (ontologically subjective). Our opinion of the flower’s look is also part of our experience (ontologically subjective), but is not a fact of nature (epistemologically subjective).
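If a summary helps, here is the flower example spelled out as data (my own gloss, not Searle’s). The point it makes is that the ontological and epistemological distinctions are independent axes, so each fact gets a value on both:

```python
# The flower example as data (my gloss, not Searle's).
# "ontology" asks how something exists: independently of any observer
# ("objective") or only as experienced by one ("subjective").
# "epistemology" asks how claims about it are settled: by fact
# ("objective") or by taste and opinion ("subjective").
facts = {
    "the flower exists":            {"ontology": "objective",  "epistemology": "objective"},
    "I see an image of the flower": {"ontology": "subjective", "epistemology": "objective"},
    "the flower looks beautiful":   {"ontology": "subjective", "epistemology": "subjective"},
}
```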

Searle’s first main point is that the study of consciousness should not be dismissed as futile or wrong-headed simply because it deals in ontologically subjective material. Neuroscientists are already beginning to understand how to reconstruct our internal experiences.


Some gems from the talk:

“‘So, could a machine think?’ Well, human beings are machines. ‘Yes, but could you make an artificial machine that could think?’ Why not? It’s like an artificial heart. The question ‘can you build an artificial brain that can think?’ is like the question ‘can you build an artificial heart that pumps blood?’. We know how the heart does it, so we know how to do it artificially . . . we have no idea how to create a thinking machine because we don’t know how the brain does it . . . so we have two questions: ‘could a machine think?’ and ‘could an artificially made machine think?’ The answer to question one is obviously yes. The answer to question two is, we don’t know yet but there’s no obstacle in principle.”

“The sense in which *I* carried out the computation is absolutely intrinsic and observer-independent . . . when my pocket calculator does the same operation, the operation is entirely observer-relative. Intrinsically, all that goes on is a set of electronic state-transitions that we have designed so that we can interpret [the result] computationally. And again . . . for most purposes it doesn’t matter. When it matters is when people say ‘well, we’ve created this race of mechanical intelligences, and they might rise up and overthrow us’, or attribute some other equally implausible psychological interpretation to the machinery.”
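To see what “observer-relative” means here, consider a toy sketch of my own (it is not from the talk). The machine below does nothing but map bit patterns to bit patterns according to a fixed rule; it only counts as adding numbers once we supply the encoding and interpretation conventions ourselves:

```python
# Toy sketch (mine, not Searle's): the machine's "physics" is just state
# transitions over tuples of 0s and 1s; the *interpretation* we bring to
# those states is what makes the transitions count as addition.

def transition(a_bits, b_bits):
    """Intrinsic level: a fixed rule mapping two bit-tuples to a bit-tuple.
    Physically, think voltage patterns in, voltage pattern out."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):  # least-significant bit first
        total = a + b + carry
        out.append(total % 2)
        carry = total // 2
    out.append(carry)
    return tuple(out)

def encode(n, width=8):
    """Our convention: integers to states (little-endian bits)."""
    return tuple((n >> i) & 1 for i in range(width))

def interpret(bits):
    """Observer-relative level: WE decide these states denote integers."""
    return sum(bit << i for i, bit in enumerate(bits))

# Under our conventions, the state transitions "compute" 23 + 42:
result_state = transition(encode(23), encode(42))
print(interpret(result_state))  # 65 -- but only relative to encode/interpret
```

Nothing about the tuples or the transition rule is intrinsically arithmetic; delete `encode` and `interpret` and all that remains is a regularity in state changes. That, as I read him, is Searle’s point about the pocket calculator.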

“Turing machines are not to be found in nature. They are to be found in our interpretations of nature.”


Related articles/references, in order of relevance: