February 14, 2011
AI Language Processing To Automate Call Centers?

An article by John Markoff in the New York Times looks at the implications of an expected defeat of the best human Jeopardy players by IBM's Watson computer. IBM's chess-playing software has already beaten the best human chess players. But Jeopardy is harder for a computer to play because the computer has to decipher the meaning of an English-language question and find the answer in a large pool of information.

The implications of progress in A.I. are being brought into sharp relief now by the broadcasting of a recorded competition pitting the I.B.M. computing system named Watson against the two best human Jeopardy players, Ken Jennings and Brad Rutter.

Watson is an effort by I.B.M. researchers to advance a set of techniques used to process human language. It provides striking evidence that computing systems will no longer be limited to responding to simple commands. Machines will increasingly be able to pick apart jargon, nuance and even riddles. In attacking the problem of the ambiguity of human language, computer science is now closing in on what researchers refer to as the “Paris Hilton problem” — the ability, for example, to determine whether a query is being made by someone who is trying to reserve a hotel in France, or simply to pass time surfing the Internet.

This holds implications for a large assortment of jobs which, until now, have not been amenable to total automation. If computers can start listening to customer requests and complaints, will this accelerate a trend toward zero marginal product workers, where a segment of the human population becomes useless to employers? Will humans avoid the fate that befell work horses in the 20th century? Will humans tell their robot slaves to reproduce in large numbers? If so, the danger to humans of the slaves getting freed from their slavery will go way up.

Plenty of trends are working against continued demand for less skilled workers. Philip Greenspun suggests the cost of lower competency in the workplace has gone way up for a variety of reasons. The higher cost of individual mistakes is the most interesting point he makes.

Update: The computer's big advantage in a Jeopardy contest: faster reflexes for pushing the buzzer button. Do the IBM software's strengths map well to any tech support call center problem domains? What real world business use case is it going to be good at first?

Randall Parker, 2011 February 14 07:48 PM  Computing Human Obsolescence


Comments
kurt9 said at February 15, 2011 1:49 PM:

Steve Sailer has a different perspective on this:

http://isteve.blogspot.com/2011/02/jeopardy-and-artificial-intelligence.html

Bruce said at February 15, 2011 2:55 PM:

Ummm ... I've made tech support calls to many places including India and Georgia and I swear I never understood half the words spoken to me by a nice lady from Georgia.

Just because Watson can understand Alex Trebek doesn't mean it can understand anyone else.

Nick G said at February 15, 2011 3:12 PM:

I've started reading the NYT article. I'm struck by a sloppy mistake right at the beginning: the assumption that bank tellers have all been replaced by ATMs. In fact, there are more bank tellers in the US now (560k) than there were in 1980 (531k).

It's true that ATMs have made a big difference: the number of tellers grew by 84% from 1972 to 1980 ( http://www.bls.gov/opub/mlr/1982/06/art4full.pdf ), and since has grown sufficiently slowly that they've fallen as a % of the population. But.... lots of tellers are still around.

This seems to be symptomatic of the sloppy analysis one often sees around labor productivity.

The basic reality: it takes a lot of time-consuming work to re-design jobs to increase labor productivity and replace humans. It's unlikely to ever move as fast as we might like (to increase economic growth) or fear (to prevent unemployment).

bbartlog said at February 15, 2011 4:02 PM:

The question about the buzzer is an interesting one. Once a human champion is good enough (has high confidence that he can come up with the answer, even if he doesn't necessarily have the answer just yet) it makes sense to just push the buzzer and then worry about coming up with it. However, the buzzer is set up so that it isn't even active until Trebek (or whoever) has finished talking. Thus timing the button push becomes very important. If Watson is configured with some API where it can just 'push buzzer' instantly once it's active, then it has quite an advantage. To put it on an even footing would require that it be forced to figure out when the host has finished his question, then submit a 'buzzer push' with some minimum retry interval and thus try to accomplish the same buzzer-pushing task as the other contestants.
I'm also curious whether Watson uses speech recognition, or gets fed the text and parses that. Speech recognition is quite hard (and to be good at it, a computer needs to integrate the task with the parsing task, so that it can choose the text that makes the most sense in context). I expect Watson is just getting handed the text. If so, the implications for automated tech support are not as promising as they may seem at first blush.

PacRim Jim said at February 15, 2011 7:40 PM:

I'm a professional translator, and my long experience tells me that there's no way current (non-strong) AI could possibly pass the Turing test.
No computer has sufficient global knowledge or the ability to flexibly extrapolate meaning, however redundant the text.
As the great American philosopher Nelson Muntz says, "HA-ha!"

Jody said at February 16, 2011 2:02 PM:

Quick notes on Watson and Jeopardy for bbartlog:

1) There's a lockout period of about a second if you buzz in before Trebek finishes reading (there's a light that signals when you can buzz in without being locked out). This is why you sometimes see contestants madly pushing the buzzer but not ringing in. So you can't buzz in early, and you're not going to beat Watson to the buzzer either as...

2) Watson gets an electrical signal when it's ok to buzz and sends an electrical signal back. Assume this is instantaneous.

3) Watson is fed text. That's not horribly unfair as a test of reasoning / recall capability as the contestants read (almost instantly) and hear the text. The buzzer is quite unfair.

My belief is that Ken and Brad know almost every question and only get to answer (due to the buzzer advantage) when Watson doesn't know the answer by the end of the question.
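
A rough way to see how decisive that timing edge is: a back-of-the-envelope simulation. This is only a sketch; the reaction-time figures are assumed for illustration, not measured values from the show.

```python
import random

# Toy Monte Carlo of the buzz-in race. Watson gets the "buzzer open"
# signal electronically and reacts almost instantly; a human has to see
# the enable light and react. All timing numbers are illustrative guesses.
WATSON_DELAY_MS = 10      # assumed near-instant electronic response
HUMAN_MEAN_MS = 150       # assumed typical human reaction to a light
HUMAN_JITTER_MS = 50      # assumed spread in human timing
TRIALS = 10_000

watson_wins = sum(
    WATSON_DELAY_MS < random.gauss(HUMAN_MEAN_MS, HUMAN_JITTER_MS)
    for _ in range(TRIALS)
)
print(f"Watson wins the buzz in {100 * watson_wins / TRIALS:.1f}% of trials")
```

Under those assumed numbers Watson wins essentially every race to the buzzer, which is why the humans would only score when Watson simply doesn't have the answer.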

Jim Walker said at February 16, 2011 3:11 PM:

The complexities of a simple question asked of a call center "human" cannot and likely will not be answerable by a computer in our lifetime, unless each question is asked by the computer and you just say yes or no.

The big difference between answering a trivia question and discussing a billing statement error is that the human call center operator often has to "cue" the person calling, and in some respects even has to ask the question the person is trying to ask in order to get to the right answer. Listen in on a call center call and you'll rarely hear just questions and answers.

Reality check folks...

Randall Parker said at February 16, 2011 7:05 PM:

Jim Walker,

Compared to a call center handling billing for utilities, I would expect Jeopardy to be harder in some respects because the questions run over a far wider range of topics.

I expect call center software to first get used on web sites where you ask questions of humans in chat sessions. Why not have software take the first crack at answering questions and, if the exchange doesn't go somewhere useful, get a human involved?

Even more gradual phase-ins with computers could be used. One could have computers accept the initial typed questions, generate proposed answers, and then let a human choose which of a few answers to send (or override and type a human response). So a human call center employee could handle a few exchanges at once. If software let a human handle 3 exchanges at a time on average, the result would be a two-thirds reduction in staffing.
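
The arithmetic sketched out, with hypothetical volume and per-agent numbers just to show the scaling:

```python
# Back-of-the-envelope staffing arithmetic: if one agent can supervise N
# computer-drafted exchanges at once, required headcount falls to 1/N.
# Volume and per-agent figures below are hypothetical.
daily_exchanges = 9000     # hypothetical chat volume per day
per_agent_serial = 30      # assumed exchanges one unassisted agent handles per day

baseline_agents = daily_exchanges / per_agent_serial
for concurrency in (1, 2, 3, 5):
    agents_needed = baseline_agents / concurrency
    reduction = 1 - 1 / concurrency
    print(f"{concurrency} exchange(s) at a time: {agents_needed:.0f} agents "
          f"({reduction:.0%} fewer than {baseline_agents:.0f})")
```

At 3 concurrent exchanges per agent, 300 agents drop to 100, the two-thirds reduction mentioned above.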

Phillep Harding said at February 18, 2011 12:47 PM:

The purpose of automating phone systems is to discourage low profit callers or people who are going to cost the company money by requesting support on something already bought. The only people to make it through (aside from purchasers) either have a lot of experience in navigating that particular phone tree or get paid to work their way through the automation. The rest of us can "get lost".

I have to get a human because of a speech defect that makes voice navigation impossible. A "Watson" receptionist is going to be a problem.

Michael L said at February 19, 2011 10:33 PM:

For automating routine customer support you don't need better speech recognition AI; you just need a better documented business process. Once you have that, put it as a bunch of forms on the website with keyword search for navigation (if there are too many of them) and you are done. If we already have a "pay your bill" form online, we might just as well have an "I was overcharged by between $10 and $50 within the last 2 weeks" form along with a "why is my internet connection intermittently failing" form. The business rules behind such forms will be no different from the script that the fake-American-name call center drone follows nowadays. Of course, at the end of the day a lot will still boil down to the very human technician who shows up to get things running, but a lot of the phone interaction can certainly be automated, if there is an (unlikely) shortage of the low-wage drone employees.
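
A minimal sketch of that kind of keyword routing (the form names and keyword lists are made up for illustration; a real deployment would have the business rules behind each form):

```python
# Minimal sketch of keyword-based routing to self-service forms.
# Form names and keyword lists are made up for illustration.
FORMS = {
    "billing_dispute": {"overcharged", "billed", "charge", "refund"},
    "connection_trouble": {"internet", "connection", "intermittent", "slow"},
    "pay_bill": {"pay", "payment", "balance", "due"},
}

def route(query: str) -> str:
    words = set(query.lower().split())
    # Pick the form whose keyword list overlaps the query the most;
    # fall back to a human if nothing matches at all.
    best = max(FORMS, key=lambda form: len(FORMS[form] & words))
    return best if FORMS[best] & words else "contact_human"

print(route("I was overcharged on my last bill and want a refund"))
print(route("why is my internet connection intermittently failing"))
```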

Then again, maybe the human employees, no matter how dumb and poorly scripted, serve a function of "plausible deniability" for the company. It's easier to claim that "customer is our first priority" when the job of telling him to go to hell is done by a human rather than a php script. Uselessness of a customer service representative is a matter of evanescent personal experience and judgment (and don't you be raciss here, buddy) whereas the uselessness of an automated script is easy to replicate and decry publicly.

Bugbear said at February 24, 2011 7:52 AM:

I agree with
Bruce said at February 15, 2011 2:55 PM:
Ummm ... I've made tech support calls to many places including India and Georgia and I swear I never understood half the words spoken to me by a nice lady from Georgia.
Just because Watson can understand Alex Trebek doesn't mean it can understand anyone else.

and I agree with
Phillep Harding said at February 18, 2011 12:47 PM:
The purpose of automating phone systems is to discourage low profit callers or people who are going to cost the company money by requesting support on something already bought.


I mean:

The Watson success has been presented as "it will replace humans," while I believe it should be considered just for what it is: an improvement in the human-machine interface.
Today I tell my Android phone which contact I want retrieved from my contact list, and the destination I want to navigate to.
In 5-10 years I'm not sure I'll call a "Watson" when my bank statement reports a wrong amount, but I'll probably have a smartphone with a "Watson" inside that I can speak to instead of touching the screen.

my opinion, obv
