Responsibility, planning and collective growth for digital transformation

A CULTURE OF RESPONSIBILITY IS REQUIRED FOR DIGITAL TRANSFORMATION ON A HUMAN SCALE

[This article was published in Engineering Group’s Ingenium Magazine on February 27th, 2018. The original Italian text is available here.]

“The whole question thus comes down to this:
can the human mind dominate what it has created?”
 (Paul Valéry)

The exponential speed at which digital transformation is advancing, and its presence in every human activity, have intensified the debate on the coexistence between man and technology and on the possible limits to the latter’s development, a debate that is slowly moving beyond the closed circles of insiders.

A concrete case is offered by the renewed impetus behind the development of Artificial Intelligence.

Apart from a narrow group of scientists and technologists – highly authoritative in their fields – who hypothesize a dark future in which machines take control of decisions and, consequently, of human actions, awareness is growing that today the main risk lies not in the machine’s control over man – a hypothesis that always fascinates lovers of science fiction – but rather in man’s control over man through technology. Manipulation is the real risk we run in the immediate future.

In various fields (finance, trade and information), the activities in which algorithms make decisions in place of humans are increasing, and these algorithms sometimes operate on data sets and learning models that reflect the stereotypes and development patterns of a dominant part of the population (as in the case of algorithmic bias – human prejudices introduced into the operation of algorithms, which then reproduce discriminatory behavior – which I have already mentioned in previous articles). An evolution of this risk of manipulation lies in the possibility that even the underlying information itself is manipulated, as Marco Caressa wittily highlighted in a recent article.
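To make the mechanism of algorithmic bias concrete, here is a minimal sketch in Python; the data, group names and decision rule are invented purely for illustration and do not describe any real system.

```python
# Illustrative only: invented toy data showing how a naive rule "learned" from
# historically biased decisions reproduces that bias for equally qualified people.

# Each record: (group, qualification_score, historically_hired)
historical_data = [
    ("group_a", 0.9, True), ("group_a", 0.7, True),  ("group_a", 0.6, True),
    ("group_b", 0.9, True), ("group_b", 0.7, False), ("group_b", 0.6, False),
]

def learn_threshold(data, group):
    """'Learn' the lowest score that was ever hired within a group -- a stand-in
    for a model that absorbs historical patterns instead of merit alone."""
    hired_scores = [score for g, score, hired in data if g == group and hired]
    return min(hired_scores)

thresholds = {g: learn_threshold(historical_data, g) for g in ("group_a", "group_b")}

# Two equally qualified candidates receive different outcomes,
# because the learned rule mirrors past discrimination.
for group in ("group_a", "group_b"):
    candidate_score = 0.7
    decision = candidate_score >= thresholds[group]
    print(group, "score", candidate_score, "->", "hire" if decision else "reject")
```

The point is not the (deliberately trivial) rule but the mechanism: any model fitted to records of discriminatory decisions will tend to echo them unless the data and the objectives are deliberately corrected.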

Not only manipulation, but also a slow assimilation to machines

When we surf the web we are sometimes forced to prove that we are not bots. In many of our actions we respond mechanically to the instructions a machine gives us: wearing wristbands and monitoring sensors, we increasingly form a single whole with software applications that teach us how to behave; habitats and workspaces themselves are adapting to the needs of robots. This is a problem of today that demands attention and requires us to make decisions about the governance of this phenomenon. But in the future we will have to live ever more closely with Artificial Intelligence, which will become the extended space in which we move: step by step we are approaching a human being equipped with artificial extensions, to the point that machines will know us and condition us, even in our emotions.

We are facing this revolution in an often unconscious way. For many centuries human beings were at the center of the world, of history. Today this is no longer the case: it is technique that stands at the center of the universe, of our history. Paraphrasing the words of Umberto Galimberti: “technique is no longer a means to reach an end, but the end itself. And to achieve this end, technique demands efficiency and consequently removes what is redundant, what is unusual … in other words, it removes humanity.”

A problem of identity arises

We can also see this from another point of view, through a concrete example. If an algorithm, interacting with another algorithm on our behalf, offers us benefits by also freeing us from emotional involvement, it may be natural to want to adopt it. In this way we will be freer, but from what and for what? Man is a doer; he is used to doing, and it is precisely this doing that indicates his value. What will happen when this value is lost because it is machines that do the doing?

A new problem of responsibility also arises

According to what technique wants, what matters is what is done, not how it is done. Who is responsible for the results of technique? Faced with an operational difficulty, many of us have felt the need to answer, at least once in our lives: “The computer’s to blame”. Today we are beginning to hear a new answer: “The algorithm’s to blame”. And further: who is responsible for the unexpected effects of what could not be predicted? Today we are entering the era of unpredictable effects!

We must begin to understand the new role of man in the era of Digital Transformation and, at the same time, look for solutions to the new ethical problems posed by technology, knowing that this is neither easy nor immediate.

There are those who propose a solution that places limits on certain developments through laws and regulations. But does this make sense? If we delegate the answer to Kevin Kelly, in his book “What Technology Wants” (from which I quote below), he tells us that technology has become a force independent of human will, and that it has an inescapable future: “if the tape of time were to be rewound”, many of the technologies we have known would emerge again. Technique is never disconnected from the history of nature; rather, it becomes nature’s natural development: it is the continuation of nature by other means. In this sense, the relationship that binds man and technology is not one of mere instrumental use, but a real convergent co-evolution tending toward incessant and inevitable progress.

Kelly does not hesitate to answer the question “What does technology want?”: it wants what we want, and above all the increase of human possibilities and therefore of our opportunities; in fact, it wants to expand itself, like any other living system. By expanding life and mind, technology extends the benefits of progress to the entire planet, allowing it to continue playing the game begun four billion years ago, a game that involves matter, mind and nature.

What, then, is the role of man when technique is at the center of history?

“The evolution of new technologies is inevitable; we cannot stop it. But the character of each of them depends on us.” There are therefore spaces of freedom within progress: man remains responsible for ensuring that technology, evolving autonomously, proceeds towards good instead of evil. For example, if we are increasingly connected, the quality of this connection is not a foregone conclusion: it may or may not respect privacy, and it may serve to increase democracy or totalitarianism. The solution is to realize that the progress of technology is unavoidable and, at the same time, to become aware of what technology wants. In this way we can better prepare ourselves and our children for the future that will come. The aim is to direct technological tendencies so that they express themselves at their best and we can take full advantage of them. Responsibility for the future therefore remains in the hands of man.

We must make the best use of the space of freedom we have

Here it is worth referring to the thinking of Luciano Floridi, who reminds us that in this new era we have many ingredients for development at our disposal: “The digital age is the age of planning, not of creation”.

The Internet, born of a research initiative, developed into what we know today thanks to various businesses growing without rules, and we now face the resulting problems every day. We are still experiencing the consequences of that choice. If we had really learned the lesson, we would understand that today we have to face Digital Transformation on a planning basis. Like humans, digital systems must also be able to walk on two legs that move together: we must place the political/social leg alongside the business leg that is already developing Artificial Intelligence, as it did in the past with the Internet. This does not mean setting limits on the development of technology, which would in any case be useless, but placing ethical governance of this transformation alongside its development.

I would like to conclude with some quotes from Kelly, who sees the role of man in man-technology co-evolution as that of someone playing an infinite game.

In my professional career I have always tried to replace the overused war metaphor (battle, army, enemy, etc.) with that of the game, which I consider richer and more meaningful. Taking a step further, the traditional finite game – which has precise rules, one or more opponents, and ends when it is won – can be contrasted with the infinite game, which is played simply to keep it alive. In this way it never ends, because it continues to be played by the same rules and has no winners.

“Finite games are more successful because they produce action – just think of wars or sport – while infinite games can also be boring. We can imagine hundreds of much more exciting stories about two guys fighting than about two guys who are at peace with each other. The problem with all the hundreds of exciting stories about the two guys fighting each other is that they all end the same way, that is, with the death of one (or both) of them, unless at some point they change their minds and start collaborating. On the other hand, the story centered on peace, even if boring, has no end: it can lead to thousands of unexpected stories; maybe the two guys become partners and found a new city, or discover a new element, or write a fantastic work. They create something that will become the basis for future stories: in short, they are playing an infinite game.”

“The goal is to keep playing: to explore every possible way to do it, to include all the games, all the possible players, to widen the notion of games, to use everything, seize nothing, sow improbable games throughout the universe and, if possible, go beyond everything that came before.”

Along the way we generate more choices, more opportunities, more connections, more diversity, more unity, more thought, more beauty and more problems: this all adds up to more benefits, in an infinite game that is worth playing. This is what technology wants.

To start playing, it is necessary to increase awareness through free and open information, through training and dissemination, and through an idea of innovation that truly looks to the future rather than the past and undertakes concrete actions. A great creative effort is needed to find new tools that make it possible to develop a collective intelligence, a concept that today is very close to the very way technology operates. If there are conscious, free and responsible people to teach machines to learn, then we may see collective growth within a full man-machine co-evolution.

Technology offers us enormous potential to build our future, provided we use it well. We have to build a culture of responsibility towards technology so that the men of the future are not “all muscle and little brain.”
