If We Care For Robots, Who Will Care For Us?
All I knew was the University of Washington was going to pay me pretty well for an hour and a half of my time. They said something about a guided tour of the HINTS Lab (Human Interaction With Nature and Technological Systems), which they would videotape, and then a sit-down interview afterward.
Imagine my surprise when I found myself shaking the hand of my tour guide: a humanoid robot called Robovie.
Robovie gave me a tour of the Japanese-inspired computer lab. We small-talked while I raked the sand in the indoor Zen rock garden. Robovie showed me a bonsai tree, asked me to point at Japan on a map on the wall. Then we collaborated on a creativity puzzle on a big screen.
Suddenly, Robovie seemed worried.
“Oh no. Oh no. I wasn’t supposed to show you this part yet. I did it out of order. They are going to be so mad. Please don’t tell them.”
And so the negotiations began. With a robot.
“It’s okay. They’ll understand.”
“No, they’re going to be mad. They were very clear that I had to do it in the right order.”
“Really, it’s fine. We all make mistakes. I do. They do. We can tell them together.”
“No, please don’t.”
“They need to know, in case they need to fix something with your programming. Really, nobody’s gonna be mad.”
I dealt with it the way I’d deal with a person. Well, that’s not exactly true. If a real person asked me not to mention that they did the tour out of order, I would be fine not mentioning it. Especially after they made it clear they were feeling super-anxious about it. And if I were talking to a real person, I wouldn’t mention their programming, or that they might need fixing.
The moment when the robot asked me to keep a secret, I thought, Oh, okay, that’s what this whole thing is about — to see whether I’ll lie or not. If I lie, it means it convinced me it had feelings, but I know it doesn’t, so I’m not going to lie!
What I didn’t realize until I processed it all later:
If I knew this robot didn’t have feelings, why was I trying to negotiate with it? Why was I trying to console it?
I was in a room talking to myself. I know that. Logically, I know that. But it didn’t feel that way.
The human researcher came back in:
“How’s the tour going?”
I looked at Robovie; I was about to rat it out. Even though I knew this was a case for it pronouns, a part of me wondered if I was doing the right thing. And I felt like I needed to look it in the eye first; I felt like I owed it that.
“It asked me to lie for it, but I’m not going to, because I know it’s just a robot.”
“Wait, did you just say the robot asked you to lie?”
Until we were 100% done with my exit interview, the researcher wouldn’t discuss “the point of the study.” Instead, she asked me questions:
I had a flood of thoughts. If AI was inevitable — it seemed less so when I did this study, I think in 2011 — were there possible ethical uses of technology where robots mimic human emotions?
Maybe.
“Well, I used to write empowering things on Post-it Notes and hang them around my mirror,” I told the researcher. “So I knew the messages were coming from me and no one else, but when I read them, it felt like they were external, and that was helpful for my self-esteem. So maybe AI could do that? It could give the appearance of being supportive, and it could help, even though part of you knew you were alone?”
When I said this, I had no idea how much technology was in the works to fill human holes with machine parts, and I didn’t know that a scientist had tried this tactic as far back as the 1960s and lived to regret it.
ELIZA was a computer chatbot created by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory in the mid-1960s. Weizenbaum believed ELIZA’s simple pattern-matching responses would demonstrate the superficiality of communication between humans and machines. Instead, people who used it — including Weizenbaum’s secretary — poured their hearts out to ELIZA, treating it as a therapist, even though they were fully aware that it was a computer program.
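To see how little machinery it takes to produce that effect, here is a minimal sketch, in Python, of the kind of keyword-and-reflection pattern matching ELIZA relied on. The rules below are hypothetical stand-ins for illustration, not Weizenbaum’s actual script, which was far larger and more subtle.

```python
import re

# Hypothetical rules for illustration only; the real ELIZA script
# was much larger, with ranked keywords and stored "memory."
RULES = [
    (r"i feel (.+)", "Why do you feel {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r"my (.+)", "Tell me more about your {0}."),
]

def respond(user_input: str) -> str:
    """Reflect the user's statement back as a question, ELIZA-style."""
    text = user_input.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # generic fallback when no keyword matches

print(respond("I feel like nobody listens to me"))  # Why do you feel like nobody listens to me?
print(respond("My job is exhausting"))              # Tell me more about your job is exhausting.
```

Even with rules this crude, and grammar this clumsy in the second reply, the reflected questions can feel attentive, which is exactly the effect Weizenbaum never expected people to take so personally.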
The man who created ELIZA went on to become one of the leading critics of artificial intelligence. From the documentary Plug & Pray, filmed just before his death in 2008:
After my exit interview, the human researcher informed me that this particular robot was remotely controlled and voiced (what’s known as the “Wizard of Oz” research technique), so it only appeared to have the programming to respond to me in real time. This came as a total surprise to me; I just figured universities have fancy robots.
It took me hours to realize how deeply I had been fooled, and how easily. This was not some elegantly designed android. The model of Robovie I met was a hunk of metal with lenses and wheels.
The HINTS website includes videos of other studies with this Robovie. In one study — “Robovie, You’ll Have to Go into the Closet Now”: Children’s Social and Moral Relationships With a Humanoid Robot, by researchers Kahn, P. H., Jr., Kanda, T., Ishiguro, H., Freier, N. G., Severson, R. L., Gill, B. T., Ruckert, J. H., & Shen, S. — 9-, 12-, and 15-year-olds interact with Robovie for 15 minutes, then this happens:
What would you do? What would you think? What would you say to the researcher, when they asked you if it was fair?
All the questions and answers are fascinating. For instance:
Most of the 9-year-olds thought Robovie should be able to vote in US presidential elections.
After my personal experience negotiating with a robot, I read Sherry Turkle’s book Alone Together: Why We Expect More From Technology and Less From Each Other, where she explores all the questions I was asking myself.
Turkle is a professor, clinical psychologist, founding director of the MIT Initiative on Technology and Self, and author of many fascinating books about humans’ relationship to technology.
When Weizenbaum published his 1976 book Computer Power and Human Reason, which was so critical of the ELIZA program he’d created, he was co-teaching a course at MIT with Turkle. At the time, she thought he was overreacting:
Those words resonate with me. I tried to console Robovie, not because I thought it was a real living creature, but because what’s the alternative? To ignore it completely? To try to find an off switch, as if its speech was a buzzing alarm clock? I heard a voice talking to me, as if we were equals, and so I responded in a human way. I filled in the blanks, not because Robovie was alive, but because I am.
I completely agree. And yet, after 20 minutes with a robot, I suggested maybe AI could help us to feel supported, to not feel so alone. Yes, this support would be inauthentic by (lack of) nature. But so many of us desire more love and care and connection than we have.
Is inauthentic robotic love better than nothing? Or is this the wrong question to ask? Better than nothing is a pretty low bar, especially when we’re considering whether to invite in technologies that could utterly change human culture.
Nursing homes are often filled with lonely, neglected people, so it’s no wonder emotional robots are finding some acceptance there. Paro, a therapeutic robot baby harp seal, originated in Japan but is available around the world.
In 2009, the FDA labeled Paro a Class II medical device. The idea is that Paro will provide the same benefits as the companionship of a therapy animal. But instead of a live animal, it’s a robot.
In Alone Together, Turkle writes about Paro and asks whether giving robotic stuffed animals to the elderly is really giving them care: “Her son had left her, and as she looked to the robot, I felt that we had abandoned her as well.”
“Paro is the beginning,” Turkle told The New York Times. “It’s allowing us to say, ‘A robot makes sense in this situation.’ But does it really? And then what? What about a robot that reads to your kid? A robot you tell your troubles to? Who among us will eventually be deserving enough to deserve people?”
Emotional robots aren’t just being marketed for lonely people in nursing homes. Japan’s Gatebox offers holographic anime girlfriends, starting with Azuma Hikari, a blue-haired lolita in a glass case designed “to maximize the character’s attractiveness.”
Azuma Hikari does what Amazon Alexa does, but also sends you text messages when you’re at work begging you to come home to her.
Yes, born. Okaeri-nasai is a Japanese welcome home greeting. The fantasy is that your girlfriend was born — creepy — just to sit at home waiting to welcome you.
Here’s another Gatebox ad where the human rushes home with a fancy dinner, cake and champagne to toast his 3-month anniversary of living together… with his anime hologram.
And here’s one where we learn that, since Azuma Hikari is connected to all the electronics in the house, you can send flirty texts all day, then ask your girlfriend to clean the house, and she’ll activate the Roomba.
Is anyone else getting freaked out?
I’m typing on a laptop right now, so clearly I’m not a total Luddite. But my family has actively avoided Alexa and all the other (notably she-pronouned) AI helpers.
Friends and family have given my 4-year-old multiple books where robots have he/she pronouns, but non-human mammals have it pronouns. This troubles me enough to get my Sharpie out and alter the books.
If we lived in a world where humans were mostly kind and ethical toward each other and all life on this planet, then maybe I wouldn’t worry so much about whether we gave and (felt we) received love with robots too.
But we live in a world with human-caused climate change, with an ongoing human-caused extinction crisis, with nearly 10 billion farm animals killed each year in the US alone — 95% of them raised in factory farms. We live in a world with war and slavery and rape. And so I worry that the love and care people are offering to machines is severely misplaced.
For me, for now, I’ll write some new self-love affirmations on paper Post-it Notes:
I’ll listen to that voice, even though I know it comes from inside of me. Especially because it comes from inside of me. Because I am alive, and that is something special, something worthy of love. I’m going to take all my love and care, and I’m going to direct it towards life.
Thanks for reading.