"In the near future, as artificial intelligence (AI) systems become
more capable, we will begin to see more automated and increasingly
sophisticated social engineering attacks. The rise of AI-enabled
cyberattacks is expected to cause an explosion of network penetrations,
personal data thefts, and an epidemic-level spread of intelligent
computer viruses. Ironically, our best hope to defend against AI-enabled
hacking is by using AI. But this is very likely to lead to an AI arms
race, the consequences of which may be very troubling in the long term,
especially as big government actors join the cyber wars."
WHAT EXACTLY IS ARTIFICIAL INTELLIGENCE?
Very
simply, it’s machines doing things that are considered to require
intelligence when humans do them: understanding natural language,
recognising faces in photos, driving a car, or guessing what other books
we might like based on what we have previously enjoyed reading.
It’s
the difference between a mechanical arm on a factory production line
programmed to repeat the same basic task over and over again, and an arm
that learns through trial and error how to handle different tasks by
itself.
HOW IS AI HELPING US?
The leading
approach to AI right now is machine learning, in which programs are
trained to pick out and respond to patterns in large amounts of data,
such as identifying a face in an image or choosing a winning move in the
board game Go. This technique can be applied to all sorts of problems,
such as getting computers to spot patterns in medical images, for
example. Google’s artificial intelligence company DeepMind are
collaborating with the UK’s National Health Service in a handful of
projects, including ones in which their software is being taught to diagnose cancer
and eye disease from patient scans. Others are using machine learning
to catch early signs of conditions such as heart disease and Alzheimers.
Artificial intelligence is also being used to analyse vast amounts of
molecular information looking for potential new drug candidates – a
process that would take humans too long to be worth doing. Indeed,
machine learning could soon be indispensable to healthcare.
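The "picking out patterns" idea above can be sketched in a few lines of code. The following is a minimal toy illustration of one of the simplest learning methods, a nearest-centroid classifier; the data points and labels are invented for the example and bear no relation to real medical or image data:

```python
def train(samples, labels):
    """Learn one average point (centroid) per label from the training data."""
    grouped = {}
    for point, label in zip(samples, labels):
        grouped.setdefault(label, []).append(point)
    return {
        label: tuple(sum(coords) / len(points) for coords in zip(*points))
        for label, points in grouped.items()
    }

def predict(centroids, point):
    """Assign the label whose centroid lies closest to the new point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], point))

# Toy training set: two-dimensional points in two classes.
samples = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0), (4.8, 5.2)]
labels = ["A", "A", "B", "B"]
model = train(samples, labels)
print(predict(model, (1.1, 0.9)))  # close to the "A" cluster
```

Real systems such as the diagnostic tools mentioned above use far more elaborate models (deep neural networks trained on millions of examples), but the principle is the same: learn regularities from labelled data, then apply them to new cases.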
Artificial
intelligence can also help us manage highly complex systems such as
global shipping networks. For example, the system at the heart of the
Port Botany container terminal in Sydney manages the movement of
thousands of shipping containers in and out of the port, controlling a
fleet of automated, driverless straddle-carriers in a completely
human-free zone. Similarly, in the mining industry, optimisation engines
are increasingly being used to plan and coordinate the movement of a
resource, such as iron ore, from initial transport on huge driverless
mine trucks, to the freight trains that take the ore to port.
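The optimisation engines behind such logistics systems are proprietary, but the core idea of balancing work across a fleet can be sketched with a simple greedy heuristic. Everything below (the function name, tonnages, and truck counts) is an invented illustration, not the actual port or mining software:

```python
import heapq

def assign_loads(load_tonnes, n_trucks):
    """Longest-processing-time heuristic: repeatedly give the biggest
    remaining load to whichever truck has the least tonnage so far."""
    # Heap entries: (assigned tonnage, truck id, list of loads).
    trucks = [(0, i, []) for i in range(n_trucks)]
    heapq.heapify(trucks)
    for load in sorted(load_tonnes, reverse=True):
        total, i, loads = heapq.heappop(trucks)
        loads.append(load)
        heapq.heappush(trucks, (total + load, i, loads))
    return sorted(trucks, key=lambda t: t[1])

plan = assign_loads([30, 20, 20, 10, 10, 10], 2)
for total, truck, loads in plan:
    print(f"truck {truck} carries {loads} ({total} tonnes)")
```

Production schedulers layer constraints on top of this (truck availability, routes, train timetables), but the principle of continuously assigning work to the least-loaded resource is the same.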
AIs
are at work wherever you look, in industries from finance to
transportation, monitoring the share market for suspicious trading
activity or assisting with ground and air traffic control. They even
help to keep spam out of your inbox. And this is just the beginning for
artificial intelligence. As the technology advances, so too does the
number of applications.
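Spam filtering is one of the oldest machine-learning applications, and a toy version can be sketched with a naive Bayes classifier. The six training messages below are invented examples; real filters learn from millions of messages:

```python
import math
from collections import Counter

# Invented toy corpus, not real mail.
spam = ["win money now", "free money offer", "win free prize"]
ham = ["project meeting today", "lunch with the team", "meeting notes attached"]

def word_counts(messages):
    return Counter(word for msg in messages for word in msg.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_score(counts, total, message):
    # Laplace smoothing so unseen words don't zero out the probability.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in message.split())

def classify(message):
    s = log_score(spam_counts, sum(spam_counts.values()), message)
    h = log_score(ham_counts, sum(ham_counts.values()), message)
    return "spam" if s > h else "ham"

print(classify("free money"))  # resembles the spam examples
```

The classifier simply asks which pile of training messages a new message's words look more like, which is why filters improve as users mark more mail as spam.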
HOW DANGEROUS IS AI REALLY?
Look at any newsfeed today, and you'll undoubtedly see some mention
of AI. Deep learning is becoming the norm, and combined with continuing
gains in computing power, advanced AI seems closer than ever. But how
dangerous is AI really? When it comes down to it, how could a connected
network, driven by the same survival pressures that govern other
organisms, actually be stopped?
The pursuit of AI is, on its face, a utilitarian quest: our natural
tendency is to improve on what came before through technology, and AI
will clearly accelerate that progress. But could it also spell the end
of humanity? Will our species' hubris in crafting AI systems ultimately
be to blame for its downfall, if that downfall occurs?
If all of this sounds like a doom-and-gloom scenario, it likely is.
What's to stop AI once it's unleashed? Even if AI is confined to a set
of rules, true autonomy can be likened to free will: man or machine
gets to determine what is right or wrong. And what's to stop AI
that lands in the hands of bad actors or secretive government regimes
hell-bent on harming their enemies, or the world?
Once such an AI is unleashed, there may be nothing that can stop it. No
amount of human wrangling could rein in a fully activated, far-reaching
network composed of millions of computers acting with a level of
consciousness akin to our own. An emotional, reactive machine
aware of its own existence could lash out if it were threatened. And if
it were truly autonomous, it could improve upon its own design,
engineer stealthy weapons, infiltrate seemingly impenetrable systems,
and act in accordance with its own survival.
Throughout the ages, we've seen survival of the fittest. It's
Mother Nature's tool, her chisel if you will, sharpening and crafting
after each failure, honing what is necessary and discarding what is not,
all towards the end of increasing the efficiency of the organic machine.
Today, humans are the only species on the planet capable of
consciously bending the will of nature, and of driving the demise
of plants, animals, environments, and even other people. But what
happens when that changes? When a super-intelligent machine's existence
is threatened, how will it actually react? Setting aside the spiritual
questions that surround the "self", how can we confidently march
forward, knowing all too well that we might be opening Pandora's box?
Source: http://www.bbc.com