Machine learning and artificial morality

Machine Learning

After the appearance in the 1940s of the first computers capable of complex calculations, Alan Turing and other computer scientists asked whether machines would someday be able to think like humans. It was the birth of artificial intelligence and the start of a dizzying development of software that would reach milestones such as the 1997 victory of the Deep Blue computer against world chess champion Garry Kasparov.

But the intelligence of those extremely powerful computers was nothing like human intelligence. Deep Blue’s success rested on very precise programming and enormous computing power, which allowed it to evaluate a vast number of possible positions before each move and choose the one most likely to succeed. This is a very useful strategy for solving some types of problems, but not for situations in which the rules are not as well defined as in chess. That kind of artificial intelligence lacked versatility, creativity, intuition…
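To give a feel for that brute-force strategy, here is a minimal, purely illustrative sketch of the game-tree search (minimax) behind chess engines of Deep Blue’s kind. The toy tree and its scores are invented for the example; real engines search billions of actual positions and add pruning and expert-tuned evaluation rules.

```python
# Minimal, illustrative minimax: exhaustively explore a game tree and
# pick the move whose worst-case outcome is best. The tiny hand-made
# tree below stands in for chess.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):      # leaf: a position's score
        return node
    children = [minimax(child, not maximizing) for child in node.values()]
    return max(children) if maximizing else min(children)

def best_move(tree):
    # Choose the move leading to the subtree with the highest minimax score.
    return max(tree, key=lambda move: minimax(tree[move], maximizing=False))

# A toy two-ply game: our move, then the opponent's best reply.
toy_tree = {
    "move_a": {"reply_1": 3, "reply_2": -2},   # opponent will answer with -2
    "move_b": {"reply_1": 1, "reply_2": 2},    # opponent will answer with 1
}
print(best_move(toy_tree))  # -> "move_b": its worst case (1) beats move_a's (-2)
```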

«It is impossible to consider all the scenarios an autonomous car can encounter»

The situation began to change around 2012, when computer algorithms using a different strategy, machine learning, achieved their breakthrough results; together with big data, machine learning is responsible for the so-called «new wave of artificial intelligence». It is based on a different approach: programmers design an algorithm to, for example, recognise cats in photographs, and then feed it millions of photographs with and without cats so that it can check whether it is right and, when it makes a mistake, adjust its own parameters, in effect rewriting its own code, to become more and more accurate.
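As a minimal sketch of what «learning from examples» means in practice, the toy classifier below nudges its weights every time it mislabels a training example. The two numeric «features» per image and the tiny dataset are hypothetical placeholders; a real cat recogniser learns millions of parameters from raw pixels with deep neural networks rather than this simple perceptron-style rule.

```python
import random

# Toy supervised learning: the programmer writes the learning rule,
# not the recognition rule. Accuracy improves as mistakes are corrected.

def train(examples, n_features, lr=0.1, epochs=20):
    weights = [0.0] * (n_features + 1)          # last weight acts as a bias
    for _ in range(epochs):
        random.shuffle(examples)
        for features, is_cat in examples:
            inputs = features + [1.0]           # append the bias input
            score = sum(w * x for w, x in zip(weights, inputs))
            prediction = 1 if score > 0 else 0
            error = is_cat - prediction         # 0 if right, +/-1 if wrong
            # On a mistake, shift the weights toward the correct answer.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return weights

def predict(weights, features):
    inputs = features + [1.0]
    return sum(w * x for w, x in zip(weights, inputs)) > 0

# Hypothetical data: [whisker_score, ear_shape] per image, label 1 = cat.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
w = train(data, n_features=2)
print(predict(w, [0.85, 0.75]))   # -> True for a cat-like example
```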

Machine learning is making its way forcefully into any area where there is a massive amount of data available for this training, such as genomic analysis, economic analysis, transport management, or the analysis of human behaviour based on our digital footprint. Within the medical field, a paradigmatic example is the analysis of radiographs, where artificial intelligence algorithms based on machine learning are predicted to soon make fewer mistakes than the most expert radiologists. One cannot help but wonder what impact artificial intelligence will have on jobs, to what extent machines will be able to surpass cognitive abilities we considered exclusively human, and what decisions in our daily lives we will end up delegating to them, whether out of convenience or because they will be smarter than us.

Experts explain that machine-learning algorithms will be excellent at the specific functions they are programmed for, and that they will undoubtedly surpass us at particular tasks, but that they will hardly acquire a «general human intelligence» like that of our brain, built to perform a countless number of tasks at the same time. Even so, it is somewhat unsettling, especially given that, once the algorithms are trained, we actually lose control over them: they change and improve without us knowing what is happening within their lines of code. It is a black box with which, for example, a poker program has learned to bluff without anyone having taught it to, and some fear the emergence of unintended or undesired properties, intelligences, or behaviours.

Will a program searching LinkedIn profiles for a specific job turn out to be racist? Will an algorithm advising you on your finances recommend something illegal in order to maximise your profits? It sounds like science fiction, but these are in fact very plausible scenarios, and so several voices are starting to propose measures to contain artificial intelligence and to build ethical considerations into artificial decision-making.

Let us take as an example the autonomous car that Iyad Rahwan, of the MIT Media Lab, uses to ask us as a society what moral norms a machine should follow. The idea is as follows: within a few years there will be autonomous cars equipped with much better peripheral vision than ours and a much faster capacity to react when a child suddenly crosses the street. But if avoiding running over that child involves a swerve that makes the car run over an elderly man waiting on the sidewalk, what should the car do? What if swerving to avoid the child means hitting a wall and endangering the physical integrity of the passenger? What if four children are crossing and there is only one occupant in the car? But, more fundamental still: who should decide?

In our current driving we make split-second decisions without time to reflect, and we call them «accidents». But in the future these millisecond decisions will be made by the autonomous car following a series of instructions. Again, who establishes them? If the driver does, he will clearly choose to protect himself, and if the car companies do, the result will be the same, because buyers will acquire the vehicle of the brand that best protects them. The most logical thing would be to agree on these moral decisions as a society, bearing in mind that one day you may be a pedestrian and the next a passenger, and to make them common to all autonomous vehicles.

«Machine learning is making its way forcefully into any area where there is a massive amount of data available for this training»

I recommend visiting the website of Rahwan’s Moral Machine and taking the test. The general scenario is always the same: an autonomous car carrying passengers suffers a brake failure and must decide instantly between two situations. Here are some examples: 1) run over and kill two athletic boys and two athletic girls on a crosswalk in your lane, or swerve slightly and run over two overweight men and two overweight women crossing just ahead of them; 2) run over two young girls and two elderly women who are crossing on a red light, or hit a fence and sacrifice the two men and two elderly people inside the car; 3) what if four elderly people were crossing completely legally and there were four children inside the brakeless autonomous car?; 4) what if two criminals are crossing and the car is carrying four cats? And so on through up to thirteen random situations designed to check how much relative importance we attach to saving more or fewer lives, to protecting passengers or pedestrians, women or men, the young or the old, people of higher or lower social status, or even humans or pets.
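To make the idea of «relative importance» concrete, here is a purely hypothetical sketch of how such preferences might be encoded as weights. Every factor and number below is invented for illustration; this is not Rahwan’s model, nor any real vehicle’s policy.

```python
# Invented preference weights of the kind the Moral Machine tries to measure.
WEIGHTS = {
    "life": 1.0,            # base value of sparing one life
    "child_bonus": 0.5,     # extra weight for sparing a child
    "passenger_bias": 0.2,  # extra weight for protecting occupants
    "pet_discount": 0.9,    # fraction removed for non-human lives
}

def group_value(group):
    """Total value of sparing everyone in `group` under these weights."""
    total = 0.0
    for being in group:  # e.g. {"age": "adult", "role": "pedestrian", "species": "human"}
        value = WEIGHTS["life"]
        if being["age"] == "child":
            value += WEIGHTS["child_bonus"]
        if being["role"] == "passenger":
            value += WEIGHTS["passenger_bias"]
        if being["species"] != "human":
            value *= 1 - WEIGHTS["pet_discount"]
        total += value
    return total

def decide(spared_if_straight, spared_if_swerve):
    # Pick the manoeuvre that spares the more «valuable» group.
    if group_value(spared_if_straight) >= group_value(spared_if_swerve):
        return "straight"
    return "swerve"

# Scenario 3 above: four legal elderly pedestrians vs. four child passengers.
elderly = [{"age": "elderly", "role": "pedestrian", "species": "human"}] * 4
children = [{"age": "child", "role": "passenger", "species": "human"}] * 4
print(decide(spared_if_straight=children, spared_if_swerve=elderly))
# -> "straight": under these invented weights, the child passengers are spared
```

Even this caricature makes the problem visible: someone has to choose the numbers, and every choice encodes a moral stance.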

A summary of your preferences is shown at the end of the test, allowing you to compare them with the average of everyone who has taken it. Truly interesting. You will also be asked about your political and religious views, income, and educational level; the data will be used in studies by Rahwan’s team that are due to be published soon. But, again, the conceptually powerful point is that it is impossible to consider all the scenarios an autonomous car can encounter. We can give it a series of basic instructions, but the final decision to run over one person or another will be made autonomously by an artificial intelligence that we ask to learn and evolve on its own, without our knowing what it is thinking, or how.


Writer and science communicator, Madrid. He is the host of El cazador de cerebros (La 2).