May 19, 2008
Reproachful Car Voices Make Drivers Angry

A Stanford professor provides evidence that automated computer car voices shouldn't nag or criticize.

That computer masquerading as a person, seemingly residing somewhere in your car, might be interested in more than mere facts. As it gets to know your voice, your facial expressions (from an onboard camera) and your style, it could adapt its conversation to your mood, just as a human passenger would. If the computer behind the synthetic voice sensed that you were tense, as the car's sensors were silently warning the computer that your driving was becoming erratic, the voice might attempt to calm you down. It would project just the right tone and employ the perfect turn of phrase.
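The loop described here — estimate the driver's state from voice and face, cross-check against driving sensors, then pick a tone rather than a reprimand — can be sketched in a few lines. This is purely illustrative; all names and thresholds are hypothetical and not taken from the study:

```python
def choose_voice_tone(stress_level: float, steering_variance: float) -> str:
    """Pick a speaking style for the in-car voice.

    stress_level: 0.0 (calm) to 1.0 (tense), inferred from voice/facial cues.
    steering_variance: how erratic the driving is, from the car's sensors.
    """
    driving_erratic = steering_variance > 0.5
    if stress_level > 0.7 and driving_erratic:
        return "calm-soothing"       # de-escalate; never criticize
    if stress_level > 0.7:
        return "quiet-neutral"       # tense but driving fine: stay low-key
    if driving_erratic:
        return "gentle-informative"  # note hazards without assigning blame
    return "normal"

# A tense driver whose steering has become erratic:
print(choose_voice_tone(0.9, 0.8))  # prints: calm-soothing
```

The one rule the research suggests is baked into every branch: no tone ever scolds, since reproach measurably worsens the driving it is meant to correct.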

In tests of volunteers driving automobile simulators in the lab, researchers put their subjects into stressful situations and tested out potential responses from the voice. For example, some drivers received a reproachful warning: "You're not driving very well and you need to pay more attention."

"Well, you won't be shocked to learn that people got angry and actually drove worse," laughed Stanford professor Clifford Nass as he told the story. As the voice ratcheted up its rhetoric ("You really need to be more careful!"), the driving deteriorated further. Finally, when the voice began insisting that the drivers pull over to the side of the road, they responded by getting into accidents.

In a 2007 study, Nass, doctoral student Helen Harris, and undergraduates Kyle Davis, David Diaz and Brooke Sullivan searched for ways to help people control their emotions in the car in a study called "Car-tharsis." In a frustrating situation, a soothing voice from the car might sympathize with your predicament: "Don't worry. There will be a chance to pass the truck." The unspoken message? You don't need to get upset. Or if you got cut off in traffic, the car might simply do the yelling for you: "Learn to drive!" or "You idiot!"

Maybe the car should answer questions by playing excerpts of songs that encapsulate what the computer wants to say. In that case I expect people stuck in a traffic jam, worried they will be late for an appointment, should hear "Don't Worry, Be Happy."

Depressed people want depressed cars. I can believe this from personal experience. When I was depressed as an adolescent I used to like to listen to Neil Young's album On The Beach. It was more depressing than me and it made me feel better by comparison.

Depressed drivers drive better when their car speaks as if it, too, were feeling down. "If you're in a really bad mood, do you want a bouncy person around?"

Drivers can trust a local AI built into their car. But they don't trust a centralized Borg AI talking to them through their car.

Drivers feel more engaged with the computer voice if they believe the computer is installed in their car, as opposed to connecting wirelessly to a distant computer. As a result, they disclose more information to the in-car computer and drive faster.

As we design computer systems to elicit from us whatever behaviors their designers have decided are best, we are effectively designing computer systems to manipulate us. I suspect the first AIs deployed into widespread use will therefore possess enormous skill at manipulating humans. The ability to automate efforts to manipulate us will leave us more manipulated and controlled by computer systems.

Randall Parker, 2008 May 19 02:51 PM  Robotics Cars

Karl said at May 20, 2008 12:34 AM:

You get enough nagging at home, why would you want it in your car?

Brad said at May 20, 2008 7:48 AM:

I would suggest that manipulating human behavior is an evolutionary process. The programmers have been religious, political, military, and economic persons or philosophies, and the method of imprinting has kept pace with technology. The latest programmers write code; the next will work at the level of programming DNA or genetic manipulation. The beauty, if it can be called that, of this evolution is that there will be no way to be aware of it, as it will be hardwired into humans. Sounds wonderful, doesn't it?

martin said at May 20, 2008 8:26 AM:

This is how I feel about Microsoft. They make it hard for the average user to customize Windows for function, but they shove in your face how easy it is to customize it for appearance. Then the programs second-guess you.

Sometimes I would like to line up the entire microsoft development staff and take a giant paper-clip with googly eyes and shove it in their ears, one by one, while yelling "Yes, I am sure!!"

celebrim said at May 20, 2008 9:13 AM:

I would suggest that intelligence is a large collection of practical skills. It involves interfacing with the world to achieve some result. Humans will evaluate something else as intelligent if they perceive that it is taking actions which seem appropriate to its goals.

One of the key skills in interfacing with the world is the ability to 'manipulate' people. In computers, we might call this the 'C-3PO' skill set. It's the ability to get people to do what you need them to do to achieve your goal by whatever means. Since people are evolved nomadic hunter-gatherers, some of these means can be esoteric but are essential to intelligent interaction with people. We should not be surprised that we need to teach machines about emotions and emotion-changing behavior in order to get them to interact intelligently with us. The old trope about first-generation AIs being really smart but lacking emotions is plain nonsense. Not only must an intelligent AI have an understanding of our emotional state in order to pass the Turing Test, but a truly intelligent AI must have its own emotional states (although they need look nothing like ours, since an AI isn't an evolved nomadic hunter-gatherer) if it is to produce intelligent goal-driven behavior.

comatus said at May 20, 2008 11:09 AM:

People who need on-board nagamatics have simply failed to learn to speak Car. My vehicle notifies me when we're on the edge of its longitudinal traction envelope, with a disconcerting and unmistakable "snick" tone. The limit of lateral acceleration is communicated by a thump-and-shudder "bump stop" sensation--which can, in an emergency, be driven through. It even lets me know when it's low on fuel: a small orange needle moves inexorably toward "E". Hotels and restaurants are clearly marked in the directory "AAA" in the glovebox (just push one button!), and alternate routes easily found on an annotated, reusable folded-paper display at my left elbow, in a file paradoxically named the "map pocket."

And all this subliminal, extra-verbal visceral interface was programmed in, in 1963, in South Bend, Indiana. God, the Americans were smart. How I miss them.

You might expect cars to be soulless automatons. What's distressing is how many soulless automatons are driving.

OC Domer said at May 20, 2008 11:53 AM:

I just wish my car would be honest with me. If I plug an address into my navigation system, it will compute my route and estimated drive time. While I don't blame it for not being able to accurately account for traffic delays, I do get annoyed when the car won't acknowledge that my average freeway speed is above the speed limit. But my biggest pet peeve is the "low fuel" light. My "low fuel" indicator comes on when, according to my trip computer, I have at least 50 miles of range left. Then I can drive down until my "range" reads zero and I still have at least 20 more miles before my gas runs out. What I don't know is how many miles I really have left, because I haven't pushed it beyond that 20-miles-past-zero mark. Why can't my car be honest with me about how much fuel I really have left? I'm a big boy - I can handle the truth. And it makes a difference. I drive a lot of miles every month, and the constant nagging and uncertainty caused by the low fuel indicator probably means that I'm stopping at the gas station one or two times a month more often than I really need to, which is just a hassle.

bristlecone said at May 20, 2008 2:30 PM:

"I suspect that the first AIs deployed into widespread use will therefore possess enormous skills for manipulating humans. The ability to automate efforts to manipulate us will make us more manipulated and controlled by computer systems."

That's a frightening comment.

Now if you'll excuse me, I'm off to my cabin in the woods to live deliberately.

Kent Gatewood said at May 20, 2008 6:41 PM:

I've been rejected for assimilation by Borg cubes three times. The new high AI world will just have to get along without me.
