Computers can’t think, and they’re not conscious.

This post is just an extended link to John Searle’s talk at Google in 2015, which is an excellent response to the two most common opinions about artificial intelligence:

  1. A fancy enough computer program will rise up, make friends with all the other computer programs, and maliciously destroy humanity.
  2. A fancy enough computer program will rise up, make friends with all the other computer programs, and become a benevolent super-being.

Searle’s main point is that, though computers can replicate many feats of human thought, there is no reason to believe they are actually thinking or being creative, both of which would be necessary for the above scenarios. Nor will a computer program’s features, however plentiful or powerful, necessarily constitute a consciousness. Instead, creating a true artificial intelligence will depend on a complete understanding of how the brain creates conscious thought, which we’re not even close to having.

So, don’t worry: self-driving cars will never spontaneously become sentient and decide to drive us all off cliffs. They will never experience you, or anything at all.

In this post, I’ll first try to give a clearer explanation of Searle’s opening point, which is a little technical; the rest of the talk is accessible and important. Then I’ll give a few pull-quotes, and link to a bunch of related articles I’ve come across.


At the beginning of his talk, Searle makes some distinctions between usages of the words “objective” and “subjective”. Why? Because, as he argues, the misuse of these words leads to confusion in discussions of AI. Consciousness, many will say, isn’t objective, so it can’t be studied by science and logic doesn’t apply to it.

But consciousness is only subjective as a matter of being, not as a matter of knowledge. When something is subjective as a matter of being, it is defined by its relationship to an observer. Our experiences are subjective as a matter of their being, but their contents and causes are objective as a matter of knowledge.

The word to describe anything that is a matter of being is “ontological”, and the word to describe anything that is a matter of knowledge is “epistemological”. Facts can be objective or subjective, in both senses. Consider your experience of color:

[Image: a flower]

The flower exists (epistemologically objective) independently of our experience (ontologically objective). Our internal vision of the flower is a fact of nature (epistemologically objective), but it is part of our experience (ontologically subjective). Our opinion of the flower’s look is also part of our experience (ontologically subjective), but it is not a fact of nature (epistemologically subjective).

Searle’s first main point is that the study of consciousness should not be dismissed as futile or wrong-headed simply because it deals in ontologically subjective material. Neuroscientists are already beginning to understand how to reconstruct our internal experiences.


Some gems from the talk:

“‘So, could a machine think?’ Well, human beings are machines. ‘Yes, but could you make an artificial machine that could think?’ Why not? It’s like an artificial heart. The question ‘can you build an artificial brain that can think?’ is like the question ‘can you build an artificial heart that pumps blood?’. We know how the heart does it, so we know how to do it artificially . . . we have no idea how to create a thinking machine because we don’t know how the brain does it . . . so we have two questions: ‘could a machine think?’ and ‘could an artificially made machine think?’ The answer to question one is obviously yes. The answer to question two is, we don’t know yet but there’s no obstacle in principle.”

“The sense in which *I* carried out the computation is absolutely intrinsic and observer-independent . . . when my pocket calculator does the same operation, the operation is entirely observer-relative. Intrinsically, all that goes on is a set of electronic state-transitions that we have designed so that we can interpret [the result] computationally. And again . . . for most purposes it doesn’t matter. When it matters is when people say ‘well, we’ve created this race of mechanical intelligences, and they might rise up and overthrow us’, or attribute some other equally implausible psychological interpretation to the machinery.”

“Turing machines are not to be found in nature. They are to be found in our interpretations of nature.”
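
Searle’s calculator example can be made concrete with a little code: the same physical bit pattern has no intrinsic computational meaning until an observer decides how to read it. Here’s a minimal Python sketch of that point (the example is mine, not Searle’s):

```python
import struct

# The same four bytes of physical state, with no intrinsic meaning of their own.
raw = bytes([0x42, 0x28, 0x00, 0x00])

# Three different observers, three different "computations" read off identical hardware:
as_int   = struct.unpack(">I", raw)[0]  # read as a big-endian unsigned integer: 1109917696
as_float = struct.unpack(">f", raw)[0]  # read as an IEEE-754 float: 42.0
as_text  = raw.decode("latin-1")        # read as characters: 'B(\x00\x00'

print(as_int, as_float, repr(as_text))
```

Nothing in the electronics picks out one of these readings as the “real” one; the interpretation lives entirely in us, which is exactly the sense in which Turing machines are found in our interpretations of nature.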


Related articles/references, in order of relevance:


2 thoughts on “Computers can’t think, and they’re not conscious.”

  1. A variation on #1 is actually quite possible. Remove the “maliciously” part, and it’s not too hard to imagine an AI that algorithmically determines humans are bad and tries to destroy us. This is actually much scarier than a computer that has consciousness: if its programming allows it to run without checks, and it has the capability to actually end humanity (access to nuclear launch codes, to name an extreme example), then we would be pretty doomed.

    • Agreed, but this issue is definitely more on the engineering side of things. It’d be interesting to hear whether Searle thinks *non-conscious* AI is an existential threat. The “malicious AIs are nowhere near” point was basically just motivation to talk about the computation vs. consciousness issue, which touches the more philosophical/ethical side of things. I don’t think he came down one way or the other on your point here.

      That being said, I’m a little skeptical of the view that non-conscious AI will pose a serious existential threat merely by being given more computing power or complexity. One overly simplistic but approximately valid way to think about this is that modern AIs are given a move set, an objective, some learning algorithms, and little else (there’s a toy sketch of this picture at the end of this reply). That lets them do cool stuff like this https://www.youtube.com/watch?v=rbsqaJwpu6A. So, to use your example, how could an AI launch nukes? In one of two ways:

      1. We add it to the move set

      2. We let the AI add moves to its move set (a move in itself)

      #1 is easy to avoid, so it’s #2 that people are worried about. But I say they shouldn’t be, at least not right now. Think about how non-specific the move “add a new move to your move set” is. First, there’s no way we could tell the AI *all possible* moves it could add (putting aside for the moment that we’d still have to teach it how to code them). So the AI would have to actually suss out new potential moves and *then* make the decision to add them. In my view, this requires creativity, intention, rationality, understanding: things that current deep learning approaches do not involve at all. So we’re back to cracking consciousness for #2 to become a worry.
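
      To make the “move set” picture concrete, here’s a toy sketch of the kind of learner I have in mind (the move names, rewards, and numbers are all made up, and it’s a bare-bones bandit-style learner rather than a real deep-learning system):

      ```python
      import random

      # The agent can only ever choose among moves we explicitly enumerated.
      MOVE_SET = ["left", "right", "jump", "duck"]   # hypothetical, fixed by us

      q_values = {move: 0.0 for move in MOVE_SET}    # one value estimate per move

      def choose_move(epsilon=0.1):
          """Epsilon-greedy choice from MOVE_SET; nothing else exists for the agent."""
          if random.random() < epsilon:
              return random.choice(MOVE_SET)
          return max(q_values, key=q_values.get)

      def learn(move, reward, alpha=0.5):
          """Nudge the estimate for a move toward the reward we observed."""
          q_values[move] += alpha * (reward - q_values[move])

      # A fake training loop against a made-up reward signal.
      for _ in range(1000):
          move = choose_move()
          reward = 1.0 if move == "jump" else 0.0    # stand-in for the game's feedback
          learn(move, reward)

      print(choose_move(epsilon=0.0))  # almost certainly "jump": learned, but still a move we supplied
      ```

      No matter how long it trains, the agent never does anything outside MOVE_SET; “add a new move to your move set” would itself have to be machinery we wrote, which is the point above.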
