
On Roboethics and the Robotic Human

Instead of trying to come up with an algorithm that captures the essence of independent thought, wisdom or ethical judgment in robots, let’s preserve and encourage such thinking in ourselves.

(Image: Robot ethics via Shutterstock)

In the preface to a piece in The Tyee, “‘Roboethics’ Not Science Fiction Anymore,” Emi Sasagawa asks three interesting questions: Can a robot love? Can it think? How about kill? The author relates that “in 2010 Michael Anderson, a computer science professor at the University of Hartford, and Susan Anderson, a philosophy professor at the University of Connecticut, programmed what they called the first ethical robot.” The Andersons determined what an acceptable behavior would be in a given situation based on the sum of all decisions, and then created an algorithm that yielded a “general ethical principle.” My impression after reading this piece was that whatever philosophical wisdom or scientific acumen the Andersons possess, their account of what goes into ethical reflection and judgment leaves much to be desired. However, there are two interesting trajectories of thinking that can be explored here – both very relevant to our present world.
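To make concrete the kind of algorithm being described, here is a minimal, hypothetical sketch in Python – not the Andersons’ actual system, whose details the piece does not give. It assumes an invented representation in which each candidate action is scored against a few prima facie “duties,” weights are tuned from human-labeled example cases, and the resulting weighted sum stands in for a “general ethical principle”:

```python
# Hypothetical sketch only: the duties, cases and numbers are all invented.
DUTIES = ["benefit", "avoid_harm", "respect_autonomy"]

# Each training case: duty scores for action A, duty scores for action B,
# and which action a human judge deemed acceptable.
CASES = [
    ({"benefit": 2, "avoid_harm": -2, "respect_autonomy": 1},
     {"benefit": 1, "avoid_harm": 2, "respect_autonomy": 1}, "B"),
    ({"benefit": 2, "avoid_harm": 1, "respect_autonomy": 2},
     {"benefit": 0, "avoid_harm": 1, "respect_autonomy": -1}, "A"),
]

def score(action, weights):
    """The machine's entire 'ethics': a weighted sum of duty scores."""
    return sum(weights[d] * action[d] for d in DUTIES)

def learn_principle(cases, epochs=100, lr=0.1):
    """Perceptron-style updates until the weighted sum ranks every
    training case the way the human judges did."""
    weights = {d: 0.0 for d in DUTIES}
    for _ in range(epochs):
        for a, b, label in cases:
            preferred, other = (a, b) if label == "A" else (b, a)
            if score(preferred, weights) <= score(other, weights):
                for d in DUTIES:
                    weights[d] += lr * (preferred[d] - other[d])
    return weights

print(learn_principle(CASES))  # the learned "general ethical principle"
```

Notice that everything morally interesting happens before such code runs: someone must decide which duties count, how a concrete situation scores against them, and whose judgments supply the labels – exactly the sort of work this essay argues cannot be automated.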

The first has to do with the limits of artificial intelligence. The question here is whether we can program a machine to be ethical, or to acquire the sort of philosophical and practical wisdom needed to make sound moral decisions. I will claim in what follows that ethical thinking and decision-making are not calculative, mathematical or reducible to an algorithm. Nor can wisdom – the basis of ethical reflection – be reduced to a series of 0s and 1s. Ethical reflection, discernment, insight and the accumulation of wisdom over time through experience cannot be merely procedural or strictly rule-oriented affairs. To be an ethical being presupposes the capacity to reflect upon one’s experiences, to imagine different possibilities, to grasp the past and the future within the present, and to think from the perspective of another. From a philosophical perspective, there are good reasons to doubt whether human intelligence, emotion or ethical judgment can be quantified, measured or reduced to Boolean logic, as if it were like any other “thing” or event. I will say more about these doubts in a moment.

Meanwhile, here is a second, much more disturbing trajectory of thinking, also in the form of a question. What if, in virtue of our deification of technology, we are actually beginning to see ourselves and the world in more limiting terms? In other words, what if the plausibility of reducing human intelligence, emotion or ethical thinking to a sequence of 1s and 0s rests on the fact that we ourselves are gradually becoming more intellectually compromised, one-dimensional beings – more like programmed robots reducible to an algorithm than creative, reflectively independent and ethically oriented beings? If this is a credible thesis, then it may go some way toward explaining why it now seems acceptable for philosophers and scientists to combine the word “ethical” with the word “robot,” shamelessly unaware that this pairing would be considered an oxymoron in any age where human moral reflection and action were seen as presupposing authentic existential depth.

When we consider how utterly dependent we are on technologies we don’t really understand, how our language skills, vocabulary, attention span and critical thinking ability have noticeably deteriorated, and how everything around us appears potentially capable of being “digitized” (reduced to binary code), it seems that we now share more with the programmed robot than we do with the being whose reach, as Robert Browning said, “should exceed its grasp.”

On Roboethics: The Morally Wise Robot?

Let me begin with whether robots can kill, since whether we should or should not kill another person is ultimately a moral question. Unmanned, remotely operated Predator drones (sometimes referred to as telerobots) have, in the last five years, killed more than 2,400 people. However, since Predator drones are programmed and remotely controlled by human soldiers, it would be more accurate to say they are the proximate, not the ultimate, cause of death. Given this, moral accountability and the bestowal of praise or blame remain with the human soldier-pilot. Recently, however, the UN hosted a debate between two robotics experts on the efficacy and necessity of “killer robots.” In a report on the debate, the BBC described these as “fully autonomous weapons that can select and engage targets without any human intervention.” Although such robots do not presently exist, the report assures us that “advances in technology are bringing them closer to reality.”

After the debate, a poll revealed that only five of 115 states would ban killer robots. Given these numbers, it is difficult not to conclude that such a disturbing possibility will soon become a depressing reality. If so, it is probable that strategic battle decisions will become increasingly automated. No muss, no fuss, no mass destruction; just the clean, cold logic of the autonomous machine “judging” who should live or die. Any fan of the original Star Trek series will recall the episode “A Taste of Armageddon,” in which two planets, Eminiar VII and Vendikar, wage a perpetual war conducted by unemotional, strategically thinking computers. The destructive potential, the untidiness of war, is swept away so long as citizens of both planets are “robotically” willing to be “vaporized” when a theoretical strike occurs. War is thereby entirely normalized, unwinnable and, in effect, perpetual. War Is Peace.

But this is science fiction . . . right? Well, the doublethinking mindset of Star Trek’s fictional characters is really not that far from the Bush and Obama administrations’ willingness to engage in endless wars and to use propagandistic terms such as “surgical precision” to describe a drone program that has killed 178 children. Low-risk, low-cost drone wars are not “wars” with a beginning, middle and end. They are the product of a reductive, irrational militaristic mindset that holds two contradictory theses: 1. “Terrorism can be defeated by drone attacks”; and 2. “There will always be terrorists who hate us for our freedoms, so we need to institute a continuous drone program.” Warmongering types can hold these contradictory perspectives simultaneously because they know that the drone program will actually create the very terrorists that both can and cannot be defeated. Moreover, even if the notion of “surgical warfare” is an absurdity on par with “clean coal,” you can bet your next paycheck that the Department of Defense, the weapons industry and the CIA are working full-time on the next generation of “precision” killer drones.

Of course, the relation between war, defense money and artificial intelligence (AI) research is nothing new. We know the internet and computer networking began with President Eisenhower’s establishment of the Advanced Research Projects Agency (ARPA) in 1958, in response to the USSR’s Sputnik launch. That agency – later renamed the Defense Advanced Research Projects Agency (DARPA) – has funded AI research for decades, including CALO, an artificial intelligence project that brought together different AI technologies for the purpose of creating a “cognitive soldier’s assistant” that could “learn and organize.” AI researchers have continued to elaborate alternative paradigms of computational intelligence and to create new evolutionary algorithmic models that better capture the intuitive or unconscious elements of human thinking and understanding.

To be fair, AI research has not been entirely driven by militaristic concerns. Many of us are familiar with the idea of drones and robots that do dangerous work, or that accomplish repetitive assembly-line tasks and manufacturing jobs much more quickly and efficiently than humans. Of late, we have seen truly incredible advances in robotic systems that can be programmed for technological research, or to serve as domestic helpers or health care providers for the elderly or disabled. ASIMO (Advanced Step in Innovative Mobility) and FRIEND (Functional Robot arm with user-frIENdly interface for Disabled people) are two such examples. Robots can walk, talk, hear, see, remember, recognize faces, solve simple problems, sense environmental changes, respond to touch and even interact at a basic social level (e.g., MIT’s Kismet), simulating facial expressions and emotion. For many in the field of robotics and AI research, these advances point the way toward the inevitable reality of independently thinking, emotional and even ethically discerning machines.

However, the problem is that we are still very far from knowing all there is to know about the complex integrated circuitry of our own human brain, with its billions of neurons. Moreover, even if it were possible to completely map the brain and nervous system, it does not follow that we could then simulate the reflective, inter-subjective, linguistic and creative complexities of the ordinary human being as it moves through space and time, with and among others of its own kind. To achieve this end, programmers would not just have to build a thinking robot, but an entire world where such robots could interact, become conscious of themselves and of their differences from and similarities to other robots, and be capable of acquiring insight and wisdom. Perhaps there are computer geeks out there who are already working on a super-algorithm that would symbolically recreate the world, human insight and wisdom – but if so, they have their work cut out for them.

Of course, it is true that a lot of what we humans accomplish does not really require a high level of philosophical reflection, self-reflection or critical consciousness. At the same time, even the most sophisticated robot cannot do many of the things we would consider routine. For example, with the proper programming in place, a robot might be able to knock on a door, walk into a room, say “Hello,” place a box on a table, and then wave goodbye. But the same robot could not walk into an adjoining room of people, listen to an ongoing conversation and make perceptive judgments about how to appropriately interact with or respond to this or that person. Nor could it, like the fictional RoboCop or “killer robot,” make wise determinations or judgments about what sort of response would be proportionate in the specific context of a battle, or in some other situation where the difference between friend and foe, aggressor and victim, is ambiguous or not entirely apparent. Why not?

From an admittedly philosophical perspective, I would claim that it is because computers and robots do not have the capacity to imagine or reflect upon their own experience, or, at a second level, to reflect upon themselves as beings that have undergone particular experiences. Self-understanding, self-reflection, empathy and the capacity to think from the perspective of another are just not reducible to numbers. A computer or robot may be knowledgeable – it may know myriad facts and statistics – but insight, empathy and wise ethical judgment presuppose much more than mere knowledge of the facts. The latter assume that we have the capacity to experience our own experience at a reflective level, and then imagine different possible ways of construing and evaluating such experiences according to a range of principles or perceived goods that are contingent upon a particular history, culture or upbringing.

We experience the world, and we experience our self as a self that is distinct from this world and from others, yet intimately related to them. Additionally, we are able to think about our lived experiences – and even think about what it means to have such experiences. It is precisely these sorts of reflections and meta-reflections that give meaning and depth to moral experience and help us acquire philosophical and practical wisdom over time. Philosophical and everyday practical wisdom are the product of human experience, reason and insight, and of the capacity to stand back from our experiences and continuously order and evaluate them according to a complex web of interrelated customs, conventions, precepts and principles. This “wisdom capacity” is what ethical thinking and decision-making require. Such wisdom is possible not only because we undergo a whole lifetime of different experiences, but because we have unique and distinctive reasoning and intuitive powers that enable us to think about those experiences. Over a lifetime, we are able to embody and develop the wisdom and the imaginative capacity to think from the perspective of another – capabilities that are at the heart of morality, and that no presently existing computer could ever hope to embody.

Aside from straightforwardly moral thinking capabilities, the examples of human experiential, linguistic, reflective and creative capacities are many and varied: our power to use language in innovative and interpretive ways through creative metaphor or storytelling; our ability to formulate unusual, unorthodox or original ideas; our capacity to critically engage and thoughtfully interact with others; our sense of our body as that which enables us to identify and relate to ourselves and others; our growing understanding of duty and moral responsibility towards family, friends or communities of interest; our experience of theoretical and practical insight when contemplating what has been, what is or what might be; our sense of wonderment when confronted with the vastness of the universe or the beauty, power and diversity of nature; our experience of sadness, shame, loss, exhilaration, joy, titillation, belonging, regret, loneliness, alienation; our ability to grasp a moral paradox, or the contours of an ethically ambiguous situation; our experience of existential anxiety about what we should do with our life in the future; our capacity to laugh at ourselves, appreciate satire, irony, a clever witticism or a good joke; our ability to be moved to tears by witnessing a simple gesture of kindness. Our capacity to experience and then reflect upon our experiences, capabilities and affective comportments is what enables us to acquire wisdom – both philosophical and practical.

It is crucial to recognize that wisdom and ethical insight are not “additive” or logical, step-by-step “processes” or procedures. They do not advance in a straight, continuous line as a consequence of strict adherence to self-evident laws. Nor can they be learned programmatically as a set of rules, facts, algorithms, theorems or mere data. Indeed, it is the uniqueness, unpredictability, coarseness, ambiguity and often opaque quality of experience that makes the acquisition of wisdom and ethical acumen difficult for us, and impossible to collapse into any kind of numerical arrangement. Wisdom, ethical understanding and the exercise of good judgment are acquired through experience and practice – through a constant back-and-forth of attentiveness to, and reflection upon, specific and particular contexts and their relation to more general or universal ideas and principles.

Additionally, wisdom is possible because from birth to death we are always responding, mediating, questioning, evaluating and developing our understanding within a complex and intricate network of evolving background meanings that are gathered over time, and expanded and deepened through diverse applications in our relations with others. It is only because we are always in the midst of sustaining and co-creating a “world” with others that our individual human experience has qualitative and temporal depth. It is highly unlikely that we could reduce these relational, temporal, creative and unpredictable aspects of human experience to a binary code of 0s and 1s. We would literally have to create a whole “world” using as our only tools mathematical formulae and complex algorithms.

At this point, some will claim that a new generation of quantum computers may be able to achieve all of this. We should not, of course, rule any possibility out. However, if the standard of intelligence were not a question of whether computers can independently “think,” but of whether they will ever be able to think ethically, experience insight or acquire practical wisdom, then these goals might be rather more difficult to achieve. At the same time, if wisdom were the benchmark, and if a robot were able to gather and deepen its self-understanding through practice and constant experience with others, understand itself as a temporal, embodied being who remembers and anticipates, and grasp what it is to meaningfully experience a world, then it would – like the androids in Philip K. Dick’s novel Do Androids Dream of Electric Sheep? (popularized in the movie Blade Runner) – be almost indistinguishable from a human being. As things stand now, this state of affairs seems closer to fiction than reality.

On the Robotic Human

Perhaps, however, the more worrisome issue is not whether computers and robots will eventually be capable of independent thought, morality or wisdom – of becoming more like human beings – but rather whether we human beings are gradually patterning ourselves after the programmed robot! Is there not something rather disturbing about the way human relations, experience and meaning can be so easily reduced to mathematical models of decision-making or “game theory” and replicated in “virtual” digitized worlds? Is it possible that these deeper human orientations are perceived as easier to replicate because our “really existing world” has become a place where banality abounds, and where it is commonplace to see more and more people spending their limited and valuable time playing video games and mindlessly texting others about the trivial details of their last hour of existence? In the last 20 or so years, it has become normal and acceptable to reduce deep moral issues and complex, multifaceted geopolitical situations to simplistic formulas and “either-or” bifurcations: You’re either with us or with the terrorists. Informed, thought-provoking debate has been largely displaced by shrill, loudmouthed boors who are paid enormous sums of money to advocate intolerant, sexist, xenophobic and warmongering attitudes on television news and especially talk radio. A zombie culture that mindlessly reduces human differences and complexities to banal maxims, or increasingly tolerates empty, superficial forms of communicative interaction, will probably be much easier to replicate with 1s and 0s.

However, it is not just that, on an individual level, we have become thoughtless techno-junkies, or tolerant of that which we simply should not tolerate. At a moral and institutional level, we are doing everything possible to diminish self-reflection, critical and creative thinking, and the acquisition of wisdom. How so? First, our critical and moral thinking life is fundamentally compromised every time we reduce what we most value about the world, and what we most need for survival, to statistical relations or business-oriented cost-benefit analyses. Every time workers are reduced to mere “resources,” citizens to mere consumers, and economic and other forms of well-being to abstract indicia, we make it that much easier to rewrite human thinking, action and inter-subjective experience in the language of mathematics – a language that necessarily excises ambiguity, self-reflection and empathy.

At an institutional level, it is a depressing fact that departments of history, arts, philosophy, classics and humanities are being defunded, corporatized or entirely eliminated. Funding for higher education is increasingly directed to the “core” or “career-oriented” disciplines that prepare students to live in a competitive, technological culture where ubiquitous consumption and disposability are the order of the day. Deemed less important are the study of history, which allows us to avoid the mistakes of the past and come to terms with “who we are” based on “who we have been”; the study of philosophy, which helps us become critical thinkers and instructs us in the virtues of communicative and persuasive competence, justice and critical self-understanding; the study of literature and the classics, which provide us with an unfathomable wealth of wisdom about the world and the human condition bequeathed by great thinkers and writers of the present and past; and the study of music and art, which helps us tap into our creative capacities, realize our unique talents, and articulate new visions and imaginative possibilities. Humanities subjects such as literature, art, music, history and philosophy are not superficial extravagances. They help students become critical thinkers and make democratic political culture possible. They also impart practical and philosophical meaning and significance to our daily lives and relations with others. A life where only the so-called “core disciplines” of math and science mattered would be, in a profound sense, intellectually and practically impoverished.

It is also evident that research in schools of engineering, commerce, agriculture, criminology, sociology, psychology and computing science is less focused on the public good, and is increasingly directed and programmed by narrow, private corporate interests and funded by for-profit corporations. Think of how technology, math and the sciences have been promoted and consecrated above the humanities, traditional forms of knowing, great literature, art and philosophy. Think of how teachers are more and more encouraged to measure success or failure by narrow standards grounded not in the sort of spontaneous reflection and insight that honors creativity, civic-mindedness, solidarity and community, but in the business-efficiency model of instrumental reasoning. When teachers are forced to “teach to the test,” they are no longer looked upon as responsible for creating the conditions of possibility for students to become informed citizens – critical, multi-dimensional beings capable of acting in ethical and socially responsible ways. Instead, they are pressed to manufacture student-consumers who see themselves as simple “survivalists” in a competitive, technologically oriented world that discourages plurality, spontaneity, imagination, creativity, independent thinking and wisdom. In such a world, it is much easier to imagine how teachers might one day be replaced by robots – robots teaching children how to become better robots.

What are the political consequences here? From a totalitarian perspective, it would certainly be less messy if human beings could just devolve into programmable objects reducible to a simple algorithm. Autocratic governments, military hawks, religious extremists and profit-seeking corporations would love nothing better than to do away with plurality, complexity, independent thinking, wisdom and creativity, and have us faithfully follow their limited, narrow or bigoted “programs.” What is often under-appreciated is that the totalitarian reduction of human beings to isolated “things” or apathetic masses is exactly the sort of technical, instrumentalizing perspective that makes human gulags and factories of death possible. Indeed, the very same instrumentalizing approach is evident in the push to reduce citizens to consumers who must obey the inexorable laws of the market, and in the proliferation of propaganda and public relations manipulation, which Edward Bernays turned into an industry.

However, even in areas we do not typically associate with autocratic thinking and propaganda, we can notice how easy it has become to reduce the human condition to something quantifiable, measurable and “thing-like.” In their theoretical and fundamentalist zeal, biologists, neurophysiologists, psychologists and sociologists have provided much of the groundwork here: reducing the human person to a set of genes; human thinking and emotional experience to neuronal, electrochemical signaling in the brain; or human action to formulaic indices that promise a better understanding of human physiology or psychology, and more accurate ways of assessing risk or predicting behavioral outcomes.

At one level, the scientific “method” and the reductive scientific approach to human phenomena are understandable and necessary. Scientific thinking and understanding – as Archimedes, Pascal, Kepler, Galileo, Newton, Faraday, Maxwell, Darwin, Max Planck, Alexander Fleming, Marie Curie, Einstein, Heisenberg, Niels Bohr, Carl Sagan, Stephen Hawking and Neil deGrasse Tyson all eloquently demonstrate – can be an essential antidote to ignorance, superstition, fundamentalism, and even the irrational prejudice embedded in many current perspectives on evolution and climate change. Moreover, the best scientific theories are powerful precisely because they methodically explain a great deal with simplicity and economy.

The problem is not that science is reductive, or that it can be easily perverted by charlatans, industry apologists and corporate shills – though it often is. The problem is that the reductive method of science is considered by many to be the only reliable means of grasping meaning or truth about the world or the human condition. When this occurs, scientific thinking becomes dogmatism – and devolves into another form of fundamentalism. The reality is that there are truths about human experience that transcend the domain of scientific method. These truths – discovered through historical thinking and awareness, traditional and ethical forms of knowing, art, philosophy, poetry and literature – go beyond the domain and control of scientific method, yet they speak to us in profound and truthful ways about who we are, what we should do, and what we can hope for.

To be sure, it is not only contemporary scientists and social scientists, but also many philosophers, who claim that human beings and even moral thinking can be approached in a quantitative and reductive way. The “moral” philosophy of utilitarianism, inaugurated by Jeremy Bentham (1748-1832), reduces human beings to objects that experience pleasure and pain. Bentham elaborated a moral philosophy based on measuring happiness by numbers: the greatest happiness of the greatest number becomes the measure of right and wrong. He thereby reduced morality to a hedonistically oriented method of calculating and ranking the value of pleasures and pains. This metaphysical-moral perspective leads to ugly and unconscionable consequences. Any sort of human ignominy – for example, capital punishment – can disguise a repulsive desire for vengeance behind the abstraction of numbers: Capital punishment is “morally” acceptable because it results in the greatest pleasure and happiness for the greatest number. A counterfeit philosophical theory (the reduction of humans to beings wholly oriented by a measurable pleasure-pain index) is thereby used to ground a dehumanizing “moral” theory. To this day, utilitarianism is upheld as a powerful and persuasive moral philosophy! Indeed, many Europeans and North Americans believe utilitarianism is the only practical “moral” basis upon which to found a political society. However, if this were really true, then there would be no need to write a constitution or bill of rights that enshrines rights and goods based not on numbers but on principles – intrinsic goods such as life, liberty and the pursuit of happiness!
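To see how the abstraction of numbers can launder a repulsive conclusion, consider a deliberately crude sketch of Bentham-style moral arithmetic (every figure here is invented, which is precisely the objection – the calculus demands quantities that do not exist):

```python
# Deliberately crude sketch of the felicific calculus; all numbers invented.
def aggregate_utility(effects):
    """Bentham's measure: sum pleasures (+) and pains (-) across all affected."""
    return sum(effects.values())

# Hypothetical "utilities" of an execution.
capital_punishment = {
    "condemned_person": -100.0,        # suffering and death
    "vengeful_public": 10_000 * 0.02,  # a sliver of satisfaction, times many
}

if aggregate_utility(capital_punishment) > 0:
    print("'Morally' acceptable: the greatest happiness of the greatest number.")
```

A large enough crowd, each deriving the tiniest satisfaction, will always outweigh one person’s suffering on this arithmetic – which is how vengeance acquires the gloss of moral justification.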

There are also many politicians and economists who argue that human health and happiness are reducible to economics and cost-benefit analyses. They would have us believe that “who we are as thinking and experiencing beings” is finally something that can and must be measured, quantified or mathematized. However, in reducing the human condition to a thing that can be circumscribed by an algorithm, by cost-benefit analyses or by utilitarian calculation, we inevitably rob it of its unique capacities for ethical judgment and insight, for experiencing and reflecting upon its own experience, and for acquiring practical wisdom – the very capabilities that differentiate us from mere “things” or calculating machines.

The long and short of the above is that instead of trying to come up with an algorithm that captures the essence of independent thought, wisdom or ethical judgment in the next generation of robots, maybe we should worry more about whether we are doing everything possible to preserve and encourage such thinking in ourselves so that we will never have to rely on so-called “intelligent” machines to tell us what we ought to do.
