Inspirations: Artificial Intelligence

[Image: Global Panorama (CC / Flickr)]

  • AI has reached the point where it has an impact on society.
  • It may be impossible for a machine to emulate human intelligence, but machines can do things that humans can’t.
  • A poorly designed AI that can redesign or replicate itself may be dangerous.
  • Properly designed AIs offer enormous benefits.

In January this year, the Future of Life Institute published an open letter on the development of artificial intelligence and attached a document with specific recommendations. Their concern is that AI is beginning to have an impact on society, and that its impact will increase as AI gets more sophisticated. They call for research to focus not only on making AI more capable, but also on ‘maximizing the societal benefit of AI’.

So far, so sensible.

Hawking’s warning

However, Professor Stephen Hawking, perhaps the letter’s most famous signatory, had recently given us a dire warning. He is concerned that ‘development of full artificial intelligence could spell the end of the human race’ because ‘it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded’. That point comes up just past the four-minute mark of a wider interview, which also includes his enthusiasm for the limited AI that we have so far.

It’s an alarming sentiment from the nearest thing we have to a celebrity cyborg, though, contrary to some of the more sensational reports in the media, Hawking did not say the rise of the Terminator is inevitable. More recently, BBC Radio 4’s Analysis offered a characteristically balanced half-hour exploration of the issue.

Science fiction has offered many visions of AI over the decades, from the sagacious Mike in Robert A Heinlein’s The Moon is a Harsh Mistress to the homicidal HAL 9000 in Arthur C Clarke’s 2001: A Space Odyssey. As AI emerges from fiction into reality, the Future of Life Institute has chosen an opportune time to consider where it might take us.

Can machines think?

[Image: David Smith (CC / Flickr)]

The word ‘intelligence’ is often bandied around alongside words like ‘sentient’ and ‘self-aware’ in discussions of computers that make decisions, animals we share the earth with and hypothetical aliens we may never encounter. The starting point is that if humans are intelligent, artificial intelligence must be whatever appears human. That’s the principle behind the oft-quoted gold standard of machine intelligence, the Turing test: if you can’t tell a machine from a human in a conversation, the machine is intelligent.

Yet Alan Turing conceived the test that bears his name because he found it impossible to pose the question of ‘can machines think?’ in a meaningful way, let alone to answer it. The problem, he asserted, ‘should begin with definitions of the meaning of the terms “machine” and “think”’, which he was unable to provide. He proposed the ‘imitation game’, a name that implies it was intended to test a machine’s ability to imitate a human rather than its equivalence to a human.

Arguments about what human intelligence is and how to measure it have been raging for decades, so it’s no surprise that there is no consensus on what true artificial intelligence might look like.

If we don’t know what human intelligence is, let alone how it works, is the creation of a human-like AI even possible?

In the Analysis programme, Nick Bostrom, director of Oxford University’s Future of Humanity Institute, suggests reverse engineering human thought processes without fully understanding them. If a human and a computer respond to the same input with the same output, does it matter whether they got there by the same process?

[Image: Emilie Ogez (CC / Flickr)]

That raises a further question: is there any input that all humans will respond to in the same way? The only one I can think of is that most people will say ‘ouch’ if you kick them. No AI could replicate that unless it was equipped with pain receptors, or some simulacrum of them.

If there is a wide range of human responses to a possible input, how is it possible to conclude that an AI’s response is fundamentally different to a human’s? Brian Christian was voted the most human-like human in a Turing test that pitted humans against computers, but he had to study before the test to achieve that. The fact that a human had to game the test in order to be recognised as human by other humans shows how difficult it is to come up with a working definition of what human intelligence looks like.

Why make a machine that can think?

Perhaps more importantly, what use is an AI whose abilities are limited to pretending to be a human in a conversation? Margaret Boden, Professor of cognitive science at the University of Sussex, says ‘if you want a human intelligence, there’s a very easy way of getting it, namely get more human beings’. We don’t need artificial intelligence to do what seven billion biological intelligences can already do. We need AI to do what we can’t do.

[Image: gordontour (CC / Flickr)]

Computers are capable of performing massive calculations quickly, without making mistakes. They can follow a set of rules to the letter without being distracted or biased. That makes them much better at dealing with large amounts of numerical information than a human can be, and also much better at precise and repetitive tasks.

We already have AIs that can do such things, albeit with two important constraints. The first is that they are amoral. The systems that run Stephen Hawking’s voice synthesiser, a hedge fund’s trading algorithm or autonomous weapons like the Russian Wolf-2 or the British BAe Mantis have no concept of what they are actually doing, so they can’t decide whether or not to do it.

The second constraint is that they can’t replicate themselves.

Those constraints are fundamental when we think about the risks of AI that Hawking described in his interview.

It knows not what it does

Like any science fiction reader, I grew up with Asimov’s laws of robotics, the first of which is that no robot may harm a human being. The problem is that to obey the law, a robot must be able to recognise a human being when it sees one. Asimov’s robots had positronic brains, which functioned like the human brain but with better computational skills. Asimov’s computer scientists didn’t even have to learn to code. They just told their computers what they wanted, and as long as their phrasing was sufficiently precise, they got it.

[Image: Steve Jurvetson (CC / Flickr)]

None of the projections for AI in the foreseeable future involve it developing such conceptual skills, so it’s going to be restricted to following rules laid out by its programmers. An AI doesn’t know whether it’s guiding a missile into a wedding party or performing life-saving surgery. For that reason, the cautionary tale of HAL 9000 is more informative than the aspirational character of Asimov’s Daneel Olivaw. HAL has no quarrel with its human crewmates per se. It is simply following its programmed priorities to the letter when it decides they’re more of a hindrance than a help.

If HAL sounds far-fetched, remember the day in 2012 when Knight Capital switched on their shiny new trading algorithm and it lost them $440 million in the 45 minutes it took anyone to notice and switch it off. The difference between HAL and the Knight Capital algorithm lies in the fact that the latter was never put in charge of a system that could kill anyone, not in the nature of the problem, which Clarke predicted five decades earlier.

The evolutionary conundrum

Knight Capital’s trading algorithm had one saving grace: it could be switched off. The AI that Hawking warns of would be left to its own devices. Once again, science fiction got there first: in Greg Bear’s The Forge of God, Earth is destroyed by robots programmed to replicate themselves using every scrap of matter they can get their pincers on, without regard for what they may be dismantling in the process. We may be nowhere near being able to build such a device, but it’s a cautionary tale for any system programmed to follow an instruction without limit.

[Image: Simon Liu (CC / Flickr)]

Hawking’s concern is that while a human has to spend two decades growing to maturity and another reaching the forefront of any area of human knowledge, an AI could spring into existence fully formed. Bostrom describes such an AI as the ‘last invention humans will ever make’ and says it’s ‘critical to get it right on the first try’, because the consequence of getting it wrong would be a very powerful loose cannon.

The danger is that an AI could develop itself faster than any human operator can keep up with, so its constraints will have to be hardwired into it. Science fiction does, however, warn of a further problem as soon as a system is able to replicate itself. In Greg Benford’s Across the Sea of Suns, self-replicating machines have run into the problem that replication cannot be flawless. There are always minor mistakes. Most mistakes are deleterious and lead to the machine functioning poorly. A few will make it function better.

If that sounds familiar, it’s because it’s the heart of Darwin’s theory of evolution by natural selection, and it leads to the same conclusion Darwin drew: anything that makes a system better at replicating itself will, in time, come to dominate the population. Failing to replicate a constraint programmed in for safety’s sake may be exactly the mistake that enables a lineage to dominate. Benford’s machines were never programmed to attack any biological intelligence they encountered. They were the products of a history of conflict with biological species: machines that stifled potential threats in the cradle of their own planet left more descendants than machines that waited to be discovered.

For slow-replicating mammals like us, evolution is a process that may take hundreds of thousands of years. For an AI, the generation time could be measured in seconds.
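
The selection pressure is easy to see in a toy simulation. The Python sketch below is my own illustration, not a model of any real system, and every number in it is an arbitrary assumption: each replicator carries a safety flag that is occasionally lost in copying, and replicators without the flag copy themselves slightly faster.

    import random

    # A toy illustration of the selection argument above, not a model of
    # any real system. Each replicator carries a 'constrained' flag (the
    # programmed safety limit). Copying is imperfect, so the flag is
    # occasionally lost, and replicators without it copy slightly faster.

    MUTATION_RATE = 0.001        # chance that a copy loses the safety flag
    CONSTRAINED_COPIES = 1       # copies per generation with the limit
    UNCONSTRAINED_COPIES = 2     # copies per generation without it
    POPULATION_CAP = 10_000      # finite resources: cull back to this size

    population = [True] * 1_000  # start with every replicator constrained

    for generation in range(30):
        offspring = []
        for constrained in population:
            copies = CONSTRAINED_COPIES if constrained else UNCONSTRAINED_COPIES
            for _ in range(copies):
                # imperfect replication: the constraint can fail to copy
                offspring.append(constrained and random.random() > MUTATION_RATE)
        population += offspring
        random.shuffle(population)       # resources are allocated blindly
        population = population[:POPULATION_CAP]
        share = sum(population) / len(population)
        print(f"generation {generation:2d}: {share:.1%} still constrained")

Run it and the constrained share drifts down slowly at first, then collapses once the faster-replicating lineage gets a foothold. No malice is involved, only differential replication.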

Artificial angels or artificial demons?

If all that sounds alarming, it’s worth remembering that the process is entirely under human control. We don’t know whether the more alarming speculations are technically possible, and even if they are, they can only happen if they’re allowed to.

[Image: Army Medicine (CC / Flickr)]

They certainly shouldn’t detract from the potential of AI if it’s used in the ways proposed by the Future of Life Institute. The mechanical applications, such as prosthetics and microsurgery, are enticing enough. Perhaps more fundamental is that AI may elevate scientific knowledge to a level that the human mind cannot attain on its own. Many areas of research are currently better at collecting data than analysing it. It’s extremely difficult for a human to conceive of a multidimensional dataset, let alone analyse it in any meaningful way. For example, the National Health Service here in the UK has a central repository of every test and consultation a patient ever has. What if an AI could trawl that dataset and identify who would benefit from which interventions to prevent disease in twenty years’ time?
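
To make that concrete, here is a minimal sketch of the kind of screening model the example imagines, written in Python with scikit-learn. Everything in it is invented for illustration: the features, the outcome and the numbers bear no relation to real NHS data, and a real health-records model would raise questions of consent, bias and validation far beyond this sketch.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic 'records': the four columns might stand for age, blood
    # pressure, cholesterol and smoking (all invented placeholders).
    X_history = rng.normal(size=(5000, 4))

    # Synthetic outcome: disease within twenty years, loosely tied to
    # the features so the model has something to learn.
    y_history = (X_history @ np.array([0.8, 0.5, 0.4, 1.2])
                 + rng.normal(size=5000)) > 1.5

    model = LogisticRegression().fit(X_history, y_history)

    # Rank today's patients by predicted long-term risk; the top of the
    # list are the candidates for early preventive intervention.
    X_today = rng.normal(size=(1000, 4))
    risk = model.predict_proba(X_today)[:, 1]
    print("ten highest-risk patients:", np.argsort(risk)[::-1][:10])

The real promise is in the trawling step: a system that can hold thousands of variables in view at once, where a human analyst can manage a handful.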

Neil deGrasse Tyson frequently makes the point that just as a chimpanzee lacks the cognitive equipment to perform long division, there may be physical properties of the universe that are equally opaque to the human mind. There is no reason to assume that even a Newton or an Einstein could grasp everything there is to grasp. The only way that our species could ever access such properties is through the indirect medium of an AI that is able to bootstrap its own intelligence beyond anything its human designers could conceive, let alone achieve. We have no idea what may be possible if we access the fundamentals at that sort of level. We can’t, by definition.

But if anyone ever tries to build such a machine, let’s hope they remember to give it an off switch.
