The AI Threat

 

At the World Economic Forum in Switzerland last week, industry leaders were getting excited about the impact artificial intelligence will have in the near future on just about every aspect of life.

 

They see a fourth industrial revolution in which machines take over work currently done by people and do it better, faster and cheaper. They talk of self-driving trucks, of machines that learn to assess your loan application and your insurance claim, and to ask you if you want fries with your burger.

 

They describe the havoc this will cause as "disruption". They see it as inevitable and, ultimately, beneficial. They may be right.

 

The question is "beneficial to whom?"

 

Not to the workers whose jobs and dignity are taken away. Not to the communities that break down as people become isolated and are deprived of the day-to-day interactions that give them a common sense of identity. Not to the governments that struggle to keep up, barely able to monitor, never mind control, tax or protect. Not to the poor, who lack access to the cyber-world. Not even to established big business, which is destabilised and distracted and made increasingly dependent on technology it is poorly equipped to manage.

 

Revolutions always cause pain to the many for the benefit of the few. In this case the few are the venture capitalists, based mostly in the US, who fund the technology that scientists and engineers create. These are very rich men and they will be the ones most likely to get richer from this "revolution" in the short term.

 

For the existing global companies, the ones you and I work for, buy products and services from and have our pensions invested in, artificial intelligence and robotic process automation are more like a game of Russian roulette: the lucky ones will survive it.

 

Global companies are now locked in the equivalent of an "arms race" over artificial intelligence, one that drives all participants to divert resources into a technology that promises mass destruction in order to prevent their competitors from being the first to "change the game".

 

If the leaders gathered at the World Economic Forum in Davos are right and these technologies are unstoppable, it seems to me that the only way to make them truly beneficial is for our governments to put in place ethical frameworks to control their development and use.

 

Artificial intelligence systems will be writing rules about who gets access to health care, to insurance, and to banking. They'll be writing rules that determine, in the event of an unavoidable car crash, which vehicles to keep safe and which to sacrifice. They'll be using algorithms to prioritise the response of emergency services. They'll be capturing and analysing every purchase you make, every website you visit, every tweet you send, every movie you watch or book you read, to find a pattern that lets them predict and try to influence what you do next. They'll be controlling power generation, manufacturing and advanced weaponry.

 

Robots, physical or virtual, will perform most of the work we currently get paid for doing. Increasingly, the robots themselves will shape how that work is done, in the interest of efficiency and cost-reduction, until the work becomes something we can no longer do.

We should not allow these changes to be driven by the question "What will make already rich men even richer?" but by the question "What will benefit the greatest number of people and help us live the kind of lives we wish for?"

 

The European Union recently approved a draft report on civil law rules on robotics and artificial intelligence. It made headlines for recommending the introduction of kill-switches for robots, for proposing the implementation of something similar to Asimov's three laws of robotics, and for raising the possibility of "electronic personhood." For me, the most interesting part of the report was where it considered

"a set of tensions or risks relating to human safety, privacy, integrity, dignity, autonomy and data ownership."

and recommended a guiding ethical framework

"in the form of a charter consisting of a code of conduct for robotics engineers, of a code for research ethics committees when reviewing robotics protocols and of model licences for designers and users"

based on:

"human dignity and human rights, equality, justice and equity, non-discrimination and non-stigmatisation, autonomy and individual responsibility, informed consent,privacy and social responsibility"

These are political and social issues that commerce, even when well-intentioned, is not equipped to decide upon.

 

When asked to consider the likely impact of this change in technology, Stephen Hawking said:

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

I agree that the Fourth Industrial Revolution is coming.

 

I believe that now is the time for us to insist that it is not driven solely by venture capitalists, but is guided by our elected representatives using ethical frameworks that will protect us from the havoc greed would cause and share any benefits in line with the values of our society.