
Chapter 12: Evolution of the Machines

Is there something special about human beings that enables us to think, see, hear, feel, and fall in love? Something that gives us a desire to be good, a love of beauty, and a longing for something beyond? Or are all these capacities just the products of a complicated mechanism inside our bodies? In other words, am I just a machine? And could the machines we create one day do all those things and more? In other words, could there be machine consciousness (MC) or artificial consciousness (AC)? If there could be, we may have some kind of moral responsibility for our creations. We may also find that their existence changes our views of our own consciousness.

Are minds like machines?

The idea that we are machines has existed ever since the Ancient Greeks. In the seventeenth century, Descartes argued that the human body was a mechanism but that no mechanism alone was capable of speech and rational thought – for that, res cogitans or thinking-stuff was needed.

Among those who rejected his dualism was Gottfried von Leibniz, who came up with his famous allegory of the mill. Imagine a machine whose construction enabled it to think, feel, and perceive. Imagine, then, that the machine were enlarged while retaining the same proportions, so that we could go inside it, like entering a windmill. Inside we would find only pieces working upon one another and never anything to explain the perception. From this he concluded that to explain perception we must look to a simple substance rather than to the workings of a machine, which can never have the unity that consciousness does. As with the mill, if we look inside our own brains today we find only parts working upon one another – the biological counterparts of the mill’s cogs and beams – but nothing that seems to give us consciousness.

Nowadays, with all we know of our anatomy and psychology, the question is not so much ‘Am I a machine?’ but ‘What kind of machine am I?’, and, for our purposes here, ‘Where do “I” fit in?’ and ‘Where does consciousness fit in?’ We can seek answers both in biological research and by trying to mimic the mind using artificial intelligence.

In biology, science has successively explained more and more of the mechanisms of perception, learning, memory, and thinking, and in so doing has only amplified the ancient open question about consciousness. That is, when all these abilities have been fully explained, will consciousness be accounted for too, or still be left out?

From the artificial direction, better machines have been developed, leading to the obvious question of whether they are conscious already, or whether they could be one day. If machines could do all the things we do, just as well as we do them, would they be conscious? How could we tell? Would they really be conscious, or just zombies simulating consciousness? Would they really understand what they said and read and did, or would they just be acting as if they understood? Is there a difference? We arrive at the same question that echoes through the whole book – is there something extra that is left out?

Are there mind-like machines?

From the fourth century BC the Greeks made elaborate marionettes, and later complete automatic theatres, with moving birds, insects, and people, all worked by strings and falling-weight motors. These machines mimicked living things in the sense that they moved like them, but it was not until much later that the idea of thinking machines became possible.

During the eighteenth century, automata became immensely popular, with the most famous including a flute-playing boy, a duck with a digestive system, and the earliest chess-playing machine, the ‘Turk’. Automata continued to fascinate and frighten, and in 1818, Mary Shelley captured this fear in her novel about Frankenstein’s monster. But soon the technology began to be used for more scientific purposes.

There then followed a lot of development of small machines, most of which you will already know from previous courses, so I’m not going to go over all of them (if you want to read about them, see pages 305-307). I’ll skip straight to the development of AI.

Although computers rapidly became faster, smaller, and more flexible, initial attempts to create AI depended on a human programmer writing programs that told the machine what to do, using algorithms that processed information according to explicitly encoded rules. This is now referred to – usually by its critics – as GOFAI (pronounced ‘goofy’), or ‘Good Old-Fashioned AI’. Realistically, these machines were far from conscious. One problem for GOFAI is that human users treat the processed information as symbolising things in the world, but these symbols are not grounded in the real world for the computer itself. So, for example, a computer might calculate the stresses and strains on a bridge, but it would not know or care anything about bridges; it might just as well be computing stock-market fluctuations or the spread of a deadly virus. Similarly, it might print out plausible replies to typed questions without having a clue about what it was doing. Because such machines merely manipulate symbols according to formal rules, this traditional approach is also called rule-and-symbol AI. It’s like a three-year-old shouting ‘f*ck the monarchy’ every time they see a picture of the royal family – not because they understand what it means but because they have heard their father say it all the time.
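To make the ‘plausible replies without a clue’ point concrete, here is a minimal, purely hypothetical sketch of a rule-and-symbol responder: hand-written keyword rules map input strings to canned output strings, and the symbols mean something only to the humans reading them. The rules and replies are invented for illustration, not taken from any real system.

```python
# A minimal, invented sketch of 'rule-and-symbol' AI (not any particular
# historical system): typed input is matched against hand-written rules and a
# canned reply is sent back. The program only shuffles character strings; any
# meaning exists solely for the human reading the output.

RULES = [
    ("bridge", "The load on each span must stay below its rated capacity."),
    ("mother", "Tell me more about your mother."),
    ("hello", "Hello. How are you feeling today?"),
]

def reply(text: str) -> str:
    """Return the first canned reply whose keyword appears in the input."""
    lowered = text.lower()
    for keyword, response in RULES:
        if keyword in lowered:
            return response
    return "Please go on."  # default when no rule matches

print(reply("Hello there"))          # a plausible-looking reply
print(reply("Is the bridge safe?"))  # equally plausible, with zero understanding of bridges
```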

Searle distinguished two claims about such machines: ‘Strong AI’ and ‘Weak AI’. According to Strong AI, a computer running the right program would be intelligent and have a mind just as we do; there is nothing more to having a mind than running the right program. According to Weak AI, computers can simulate the mind and simulate thinking, deciding, and so on, but they can never create real mind, real intentionality, real intelligence, or real consciousness – only as-if consciousness. This is like a meteorologist’s computer that may simulate storms and blizzards but will never start blowing out heaps of snow.

What are recent developments in computing?

Connectionism

The 1980s saw the rise of ‘connectionism’, a new approach based on artificial neural networks (ANNs) and parallel distributed processing. Part of the motivation was to model the human brain more closely, although even twenty-first-century ANNs are extremely simple compared with real brain cells. The many types of network include recurrent, associative, multilayered, and self-organising. The big difference from GOFAI is that ANNs are not programmed: they are trained. To take a simple example, imagine looking at photographs of people and deciding whether they are male or female. Humans can do this easily (although not with 100% accuracy) but cannot explain how they do it, so we cannot use introspection to teach a machine what to do. With an ANN, we don’t need to. In supervised learning, the network is shown a series of photographs and for each one produces an output: male or female. If the output is wrong, the synaptic weights are adjusted, the network is shown the next photograph, and so on. Although it begins by making random responses, a trained network can correctly discriminate new faces as well as ones it has seen before.
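As a rough illustration of the ‘adjust the weights whenever the answer is wrong’ idea, here is a minimal sketch of supervised training for a single artificial neuron (a perceptron). The feature vectors and labels are invented stand-ins for measurements taken from photographs; a real face classifier would use a far larger, multi-layered network.

```python
import random

def predict(weights, bias, features):
    """Fire (output 1) if the weighted sum of the inputs exceeds zero, else output 0."""
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if activation > 0 else 0

def train(examples, epochs=50, lr=0.1):
    """Nudge the weights a little every time the network answers an example wrongly."""
    n_inputs = len(examples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            error = label - predict(weights, bias, features)
            if error:  # wrong answer: adjust each weight towards the correct output
                weights = [w + lr * error * x for w, x in zip(weights, features)]
                bias += lr * error
    return weights, bias

# Invented training data: each pair of numbers is a crude stand-in for
# measurements taken from a photograph; label 1 = 'female', 0 = 'male'.
examples = [([0.9, 0.2], 1), ([0.8, 0.3], 1), ([0.2, 0.9], 0), ([0.1, 0.8], 0)]
weights, bias = train(examples)
print(predict(weights, bias, [0.85, 0.25]))  # an unseen 'face', very likely classed with the first two
```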

ANNs are useful for many purposes, including recognising handwriting, controlling robots, mining data, forecasting market fluctuations, and filtering spam, and may soon be used in many more applications like self-driving cars. The connectionist–computational debates continue, but so does the gradual movement from understanding cognition as manipulation of static symbols towards treating it as a continuous dynamical system that cannot be easily broken down into discrete states.

Cognition

The machines described so far are all disembodied, confined inside boxes and interacting with the world only through humans. When first put to work controlling robots, they could at most carry out a few simple, well-specified tasks in highly controlled environments, such as special block worlds in which they had to avoid or move the blocks. This approach seemed sensible at the time because it was based on an implicit model of mind that was similarly disembodied: it assumed the need for accurate representations of the world, manipulated by rules, without the messiness of arms, legs, and real physical problems. We might contrast this with a child learning to walk. She is not taught the rules of walking; she just gets up, falls over, tries again, bumps into the coffee table, and eventually walks. By the same token, a child learning to talk is not taught the rules either; in the early days she pieces together fragments of sounds she hears and gestures she sees, gets words wrong, and eventually makes herself understood.

The connectionist approach is far more realistic than GOFAI, but it still leaves out something important. Perhaps it matters that the child has wobbly legs, that the ground is not flat, and that there are real obstacles in the way – that she is learning in the real world, not in a specialised lab environment.

Andy Clark tried to put brain, body, and world together again. ‘Fortunately for us’, he said, ‘human minds are not old-fashioned CPUs trapped in immutable and increasingly feeble corporeal shells. Instead, they are the surprisingly plastic minds of profoundly embodied agents’. What he means by ‘profoundly embodied’ is that every aspect of our mental functioning depends on our intimate connection with the world we live in. Our ‘supersized’ minds and our powers of perception, learning, imagination, thinking, and language are all created by brains interacting with bodies and their environments, both physical and social.

On this view, the messiness of the real world is not a problem to be abstracted away: it provides the very constraints and feedback that make perception, intelligence, and consciousness possible. Human intelligence is not just ‘recognition intelligence’: it is about using understanding to make autonomous real-time decisions. Creating machines this way means constructing real, physical, autonomous agents that move about in the real world, working from the bottom up rather than the top down. There is no point in a driverless car recognising a white van slowing down quickly unless it can assess the current situation and take evasive action. This approach is sometimes called situated robotics, or behaviour-based (as opposed to knowledge-based) robotics.

Intelligence without representation

Traditional AI assumed that intelligence is all about manipulating representations, yet experiments have shown this doesn’t have to be the case. For instance, Rodney Brooks and his colleagues at MIT spent many years building robots with no internal representations. Brooks’s ‘creatures’ can wander about in complex environments such as offices or labs full of people and carry out tasks such as collecting rubbish. They have several control layers, each carrying out a simple task in response to the environment. These are built on top of each other as needed and have limited connections enabling one layer to suppress or inhibit another. This is referred to as ‘subsumption architecture’ because one layer can subsume the activity of another. Brooks’s robot Allen, for example, had three layers:

  1. the lowest prevented him from touching other objects by making him run away from obstructions but otherwise sit still
  2. the second let him wander around without crashing into things
  3. the third made him explore by looking for distant places and trying to reach them.

Correction signals operated between all three layers. Such a creature’s overall behaviour looks intelligent to an observer but, says Brooks, ‘It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviours’. Many similar behaviour-based architectures have been developed; a much-simplified sketch of this kind of layered control is given below.
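The sketch below is an invented, much-simplified illustration of layered, behaviour-based control, loosely modelled on Allen’s three layers; it is not Brooks’s actual architecture, and the sensor names and the simple priority scheme are assumptions made for the example.

```python
import random

def avoid_layer(sensors):
    """Lowest layer: run away from anything that gets too close, otherwise stay quiet."""
    if sensors["obstacle_distance"] < 1.0:
        return "move away from obstacle"
    return None  # no opinion; let another layer act

def wander_layer(sensors):
    """Second layer: occasionally head off in a random direction."""
    if random.random() < 0.3:
        return "wander in a random direction"
    return None

def explore_layer(sensors):
    """Third layer: make for a distant visible place."""
    if sensors["distant_place_visible"]:
        return "head towards distant place"
    return None

def control(sensors):
    """Crude priority scheme standing in for the real suppression wiring:
    avoidance overrides exploring, which overrides wandering."""
    for layer in (avoid_layer, explore_layer, wander_layer):
        action = layer(sensors)
        if action:
            return action
    return "sit still"

print(control({"obstacle_distance": 0.4, "distant_place_visible": True}))
# -> 'move away from obstacle': the avoidance layer wins whenever an obstacle is near
```

Note that there is no central planner anywhere in this sketch: the apparent ‘purposefulness’ is just whichever simple layer happens to fire.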

All this is highly relevant to understanding consciousness. Along with GOFAI goes the idea that conscious experiences are mental models or inner representations of the world. Although intuitively plausible, this idea is problematic. For example, it is not clear how a mental model can be an experience, nor why some mental models are conscious while most are not.

Doing away with representations may solve some problems, but it raises others. In particular, the nonrepresentational approach has difficulties dealing with experiences that are not driven by continuous interaction with the outside world, such as reasoning, imagining, and dreaming. On representational theories, it is easy to think that when you dream of drowning in huge waves, your brain is constructing representations of sea, water, and waves, and simulating death; but if there are no representations, what could it be doing?

What is the Turing test?

Turing asked himself the question, ‘Can machines think?’ Rather than trying to define thinking, he proposed a practical test: could a computer hold a conversation with a human well enough to pass as one?

First he described ‘the imitation game’, which was already a popular parlour game. The object of this game is for an interrogator or judge (C) to decide which of two people is a woman. The man (A) and the woman (B) are in another room so that C cannot see them or hear their voices and can communicate only by asking questions and receiving typed replies. A and B both try to reply as a woman would, so C’s skill lies in asking the right questions. Turing goes on: ‘We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”’

Turing provides a critique of his own test. He points out that it neatly separates the intellectual and physical capacities of a person and prevents beauty or strength from winning the day. On the other hand, it may be too heavily weighted against the machine. A human pretending to be a machine would always fail if tested on arithmetic, and he wonders whether this test is unfair because machines might be able to do something that ought to be described as thinking but that is very different from what a human does. He concludes, though, that if any machine could play the game satisfactorily, we need not be troubled by this objection.

Could a machine be conscious?

In other words, is there (or could there ever be) ‘something it is like to be’ a machine? Could there be a world of experience for the machine? What we really mean to ask is whether an artificial machine could be conscious; whether we could make a conscious machine. This question is much more difficult than the already difficult question posed by Turing. When he asked ‘Can a machine think?’, he could cut through arguments about definitions by setting an objective test for thinking. This just doesn’t work for consciousness. First, many people have a strong intuition that there is nothing arbitrary about whether a machine is conscious: either the machine really does feel, really does have experiences, and really does suffer joy and pain, or it does not. This intuition may, of course, be quite wrong, but it stands in the way of dismissing the question ‘Can machines be conscious?’ as merely a matter of definition. Second, there is no obvious equivalent of the Turing test for consciousness. If we agree that consciousness is subjective, then the only one who can know whether a given machine is conscious is the machine itself, and so there is no sense in looking for an objective test.

The problem becomes clearer if you try to invent a test. An enthusiastic robot-builder might, for example, suggest that her machine would count as conscious if it cried when pricked, replied ‘yes’ when asked whether it was conscious, or pleaded with people not to turn it off. But the sceptic would say, ‘It’s only got to have an audio recording and a few simple sensors inside it. It’s only pretending to be conscious. It’s a zombie behaving as if it’s conscious’, with which we are back to the zombie/zimbo discussion.

Given these difficulties, it might seem impossible to make any progress with the question of machine consciousness, but we should not give up so easily. We may be sure that better and cleverer machines will continue to be built, and that people will keep arguing about whether they are conscious.

Are conscious machines impossible?

There are several plausible – and not so plausible – ways to argue that machines could never be conscious. Here are the main objections:

  • The soul. Consciousness is the unique capacity of the human soul which is given by God to us alone. God would not give a soul to a human-made machine, so machines can never be conscious.
  • Biology. Only living, biological creatures can be conscious, therefore a machine, which is manufactured and non-biological, cannot be.
  • Machines will never… There are some things that no machine can possibly do, because those things require the power of consciousness, which machines lack. Machines will never fall in love, want to look pretty, or put in extra effort because of social context.

What is the Chinese Room?

Searle proposed the Chinese Room as a refutation of Strong AI – that is, the claim that implementing the right program is all that is needed for understanding. It is most often used to discuss intentionality and meaning with respect to AI.

‘Suppose that you are locked in a room and given a large batch of Chinese writing. Suppose furthermore that you know absolutely no Chinese, either written or spoken. Inside this room, you have a lot of Chinese ‘squiggles’ and ‘squoggles’, together with a rule book in English. People outside the room pass in two batches of Chinese writing which are, unbeknownst to you, a story, in Chinese of course, and some questions about the story. The rule book tells you which squiggles and which squoggles to send back in response to which ‘questions’. After a while you get so good at following the instructions that, from the point of view of someone outside the room, your ‘answers’ are as good as those of a native Chinese speaker. Then the outsiders give you a story and questions in English, which you answer just as a native English speaker would – because you are a native English speaker. So your answers in both cases are indistinguishable. But there is a crucial difference. In the case of the English stories, you really understand them. In the case of the Chinese stories, you understand nothing.’

So here we have you, locked in your room, acting just like a computer running its program. You have inputs and outputs and a rule book for manipulating the symbols, but you do not understand the Chinese stories. The moral of the tale is this: you have everything a computer running a program has, and you do not understand Chinese; therefore a computer running a program about Chinese stories understands nothing of those stories either.
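For readers who like to see the mechanism laid bare, here is a tiny, invented analogue of the rule book (the ‘squiggle’ strings are arbitrary placeholders, not real Chinese): a lookup table maps incoming symbol strings to outgoing ones, and the procedure works identically whether or not whoever operates it understands the symbols.

```python
# A tiny, invented analogue of the Chinese Room rule book: a lookup table
# from incoming symbol strings to outgoing symbol strings. The strings are
# arbitrary placeholders; nothing in the procedure depends on anyone
# understanding what, if anything, they mean.

RULE_BOOK = {
    "squiggle-squoggle": "squoggle-squiggle-squiggle",
    "squoggle-squiggle": "squiggle-squiggle",
}

def answer(question_symbols: str) -> str:
    """Follow the rule book: look up the input shape, hand back the listed output."""
    return RULE_BOOK.get(question_symbols, "squiggle")  # default symbol if no rule matches

print(answer("squiggle-squoggle"))  # produces an 'answer' with no understanding involved
```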

I’m not going to go over the responses to this argument because, just as with the Mary and zombie thought experiments, there is no agreed solution. Just think it over, I suppose.

What is the magical X?

 Suppose that humans have some magic ingredient ‘X’, by virtue of which they are really conscious. If we wanted to make a conscious machine, we might then proceed by finding out what X is and putting it into a machine, or we might build a machine in such a way that X would naturally emerge. The machine would then, theoretically at least, be conscious.

AI researcher Igor Aleksander tackles phenomenology ‘as the sense of self in a perceptual world’ and starts from his own introspection to break this down into five key components or axioms. He then uses these as criteria for a conscious machine. The five axioms are:

  1. Perception of oneself in an ‘out there’ world.
  2. Imagination of past events and fiction.
  3. Inner and outer attention.  
  4. Volition and planning.
  5. Emotion.

Does it like me?

When Tamagotchis hit the playgrounds in the mid-1990s, children all over the world started caring for mindless little virtual animals, portrayed on tiny, low-resolution screens in little hand-held plastic boxes. These young carers took time to ‘wash’ and ‘feed’ their virtual pets, and cried when they ‘died’. But as quickly as the hype caught on, it vanished again.

The Tamagotchi had thrived on children’s caring natures, but then largely fizzled out, perhaps because the target hosts quickly became immune to such a simple trick. More recently, people have got just as hooked on using their phones to find and fight battles with 3D animals lurking in real environments, with stories of players falling off cliffs and wandering into former concentration camps in search of Pokémon GO creatures. We humans seem to adopt the intentional stance towards other people, animals, toys, machines, and digital entities on the flimsiest of pretexts. This tactic of attributing mental states to other systems can be valuable for understanding or interacting appropriately with them, but it is not an accurate guide to how those systems really work. For example, consider the wall-following robots whose useful behaviour emerged from a couple of sensors and some inherent bias. Or consider the equally simple robots that can gather pucks into heaps. They roam around with a shovel-like collector on the front, scooping up any pucks they bump into and dropping them when they have collected too many. In consequence, after some time, the pucks are all collected into piles. Observers readily assume that the robots are ‘trying’ to gather up the pucks. In reality, the robots have no goals, no plans, no knowledge of when they have succeeded, and no internal representations of anything at all.
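To see how heaps can emerge from such mindless rules, here is a small, invented simulation that is only loosely inspired by those robots: each simulated ‘robot’ random-walks on a grid, takes a puck from a small pile, and drops it onto whatever pile it bumps into next. The grid size, thresholds, and pick-up/drop rules are assumptions made purely for illustration, not the real robots’ mechanism, but the initially scattered pucks still end up concentrated in a few heaps.

```python
import random
from collections import Counter

SIZE, PUCKS, ROBOTS, STEPS, HEAP = 20, 60, 5, 20000, 3

# Scatter the pucks over a small toroidal grid.
cells = Counter()
for _ in range(PUCKS):
    cells[(random.randrange(SIZE), random.randrange(SIZE))] += 1

robots = [{"pos": (random.randrange(SIZE), random.randrange(SIZE)), "carrying": False}
          for _ in range(ROBOTS)]

def step(pos):
    """Take one random step, wrapping around the edges of the grid."""
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return ((pos[0] + dx) % SIZE, (pos[1] + dy) % SIZE)

for _ in range(STEPS):
    for robot in robots:
        robot["pos"] = step(robot["pos"])
        here = cells[robot["pos"]]
        if not robot["carrying"] and 0 < here < HEAP:
            cells[robot["pos"]] -= 1      # scoop a puck from a small pile
            robot["carrying"] = True
        elif robot["carrying"] and here >= 1:
            cells[robot["pos"]] += 1      # drop it onto any pile bumped into
            robot["carrying"] = False

# Piles of HEAP or more never shrink, so pucks gradually condense into heaps.
piles = sorted((n for n in cells.values() if n > 0), reverse=True)
print(f"{len(piles)} piles remain; largest piles: {piles[:3]}")
```

No simulated robot has a goal or any notion of a ‘heap’; the heaps are visible only to us, the observers.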

So does it like you? The harsh truth is: probably not. It probably doesn’t even realise you exist.

 
