Barbara A. Sasso
In the two hundred years since Mary Shelley first published Frankenstein, racism and bias against people of different religions and nationalities have radically evolved. Yet with the recent rise of neo-Nazism, it seems the fuse of tribalism and hatred is an easy one to reignite. Hatred isn’t only racial; it often divides members of the same community. What causes bias? Is tribalism so inborn that it cannot be erased?
We are not born to hate others who are different; we are taught hatred, much as the Creature in Frankenstein learns loathing for himself and violence towards others from the human community that abandons him and seeks to harm him. Economic or national fears can trigger racism on larger scales, but it is not inborn.18
A simple rat experiment illustrates this point, since rats, like humans, tend to socialize in tribal groups. If a rat is raised only with white rats, it will not go to the assistance of a black rat in trouble, although it will, rather altruistically, attempt to help another white rat. However, if a rat is raised among a polychromatic assortment of rats, it will go to the aid of any rat in trouble. To that rat, its tribe is a rainbow.19 As a human example, multicultural Hawaii exhibits less racism.20
Seeing each other as one, colorful human tribe is critical as we sprint forward in developing artificially intelligent technologies. For example, San Francisco recently banned city use of facial recognition technology because it has been shown to recognize female faces and brown-skinned faces with much less accuracy.21 The technology is not biased, but the people who programmed it did not prevent their own biases from flowing into the algorithms. Another example is the plethora of female voices for virtual assistants such as Siri, Cortana, and Alexa. They were created female because marketers discovered consumers prefer female voices.22 Is this stereotype of compliant secretaries something we want to amplify in our society as virtual assistants become more prevalent? Do the female voices convey power?
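The mechanism by which human bias "flows into" an algorithm can be made concrete with a deliberately simplified, hypothetical sketch (this is not how any real facial recognition system works): a toy "face detector" that learns a single brightness threshold from its training photos. If the people assembling the training set under-represent darker-skinned faces, the learned threshold fails on exactly that group, even though the math itself is neutral.

```python
# Hypothetical toy illustration of dataset bias (not a real system):
# a "detector" that learns one brightness threshold from training data.

def train_threshold(face_vals, nonface_vals):
    """Learn the midpoint between mean face and mean non-face brightness."""
    mean_face = sum(face_vals) / len(face_vals)
    mean_nonface = sum(nonface_vals) / len(nonface_vals)
    return (mean_face + mean_nonface) / 2

def is_face(brightness, threshold):
    return brightness >= threshold

# Skewed training set: 95 light-skinned faces (brightness ~200) but only
# 5 darker-skinned faces (~100), plus 100 non-face patches (~40).
skewed_faces = [200] * 95 + [100] * 5
nonfaces = [40] * 100
t_skewed = train_threshold(skewed_faces, nonfaces)   # 117.5

print(is_face(200, t_skewed))  # True  -- over-represented group detected
print(is_face(100, t_skewed))  # False -- under-represented group rejected

# With a balanced training set, the same algorithm serves both groups.
balanced_faces = [200] * 50 + [100] * 50
t_balanced = train_threshold(balanced_faces, nonfaces)  # 95.0
print(is_face(100, t_balanced))  # True
```

The point of the sketch is that nothing in the code "hates" anyone; the failure is inherited entirely from the humans who chose the data.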
In class, students will read articles on bias and racism, both in our computer programs and in ourselves, and will then discuss the dangers of augmenting racial and gender biases, rather than working to erase them, as we create our robotic assistants.
Empathy and compassion, acceptance and patience, altruism and charity – these are all traits that humans can also learn, and the more prevalent they are in a society, the more they are reflected in its individual members.23 There is something to learn from rats – and perhaps from our humanoid computers, should we choose to program them in a way that might be beneficial to society. In a beautiful TED talk, Chris Milk suggests using virtual reality to help us see all humans as members of the same tribe. Alan Turing also believed that robots might “have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream…learn from experience…be the subject of its own thought…do something really new.”24 Can robots save us from self-destruction? Might they be more humane, if not more human, than we are?