Barbara A. Sasso
I teach in an urban school in New Haven, Connecticut. Most of my students are minorities living below the poverty line. Many are refugees or from immigrant families. Our school is open and welcoming to students who identify as LGBTQ, and it educates severely disabled students. Because of our proximity to Yale University, we also have a small population of affluent students from academically enriched backgrounds. This unique environment often reveals a generous human capacity for empathy, although the greater world is not so inclusive.
Many dystopian visions of the future follow themes of violent conflict with “the other” in the form of robots or genetically superior humans. In my school, there are many students who, by race or immigration status, are outsiders in American society, and the treatment of the Creature in Frankenstein – indeed, the mistreatment of robots in many science fiction stories – is relevant to the often brutal treatment of “others.” What makes us human, and what will become of us when we create new forms of intelligent life? Even if the robots don’t destroy us, will our own lives become irrelevant? Many of our virtual assistants, programmed by men, have female voices. Does this amplify gender bias? If we treat our virtual assistants as slaves, will this increase our hatred toward other humans?