What’s With Watson’s Weird Bets? And Other Questions About IBM’s Jeopardy-Playing Machine



Members of IBM's Watson team spoke before, after and during breaks in Tuesday's Jeopardy broadcast as seen on RPI's 56-foot-wide screen. Image: Kathy Ceceri

Tuesday night at Rensselaer Polytechnic Institute, the computer guys from IBM had to explain to an auditorium full of Jeopardy and computer geeks how their supercomputer Watson responded “What is Toronto????” in the category of “U.S. Cities.” And what was with all the odd dollar amounts on the Daily Double wagers?

It all has to do with the fact that Watson is, well, he’s Not Human.

For three nights this week the IBM computer named after the company’s founder is a featured player on the popular trivia game show. At RPI’s EMPAC facility in Troy, NY, on Tuesday, a panel including Chris Welty and Adam Lally of IBM’s Watson team told a packed crowd, gathered to watch the broadcast on a 56-foot-wide HD video screen, that the methods Watson uses to come up with its Jeopardy responses are very different from those used by champions like Ken Jennings and Brad Rutter.

Watson’s brain has been packed full of facts and definitions from sources like Wikipedia and WordNet. The machine then weights each source according to its proven accuracy during practice games. Of course, sometimes the computer needs a little help weeding out inappropriate responses.
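To give a rough sense of what weighting sources by practice-game accuracy can look like, here is a minimal Python sketch. The source names, the accuracy numbers, and the simple weighted average are illustrative assumptions, not IBM’s actual scoring model.

```python
# Hypothetical sketch of source weighting; not Watson's real code.
# Assumption: each source's weight is its accuracy measured on practice clues.

practice_accuracy = {
    "wikipedia": 0.62,  # invented numbers
    "wordnet": 0.48,
}

def combined_confidence(votes):
    """Combine per-source confidence scores, weighted by practice accuracy.

    votes maps source name -> that source's confidence (0..1) in a candidate answer.
    """
    total_weight = sum(practice_accuracy[s] for s in votes)
    return sum(practice_accuracy[s] * c for s, c in votes.items()) / total_weight

# A candidate backed strongly by the historically more accurate source wins out:
print(combined_confidence({"wikipedia": 0.9, "wordnet": 0.3}))  # ~0.64
print(combined_confidence({"wikipedia": 0.3, "wordnet": 0.9}))  # ~0.56
```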

“We had to put in a profanity filter after one bad incident,” Lally said with a laugh.

When looking for possible responses, Watson ranks candidates according to how many connections they have to the words in the clue. So for the Final Jeopardy answer involving airports named for World War II heroes and battles, the computer looked for cities with airports whose names had a link to the conflict. Categories have been found to be a less reliable way to narrow down responses; that’s why “Toronto” ranked higher than “Chicago” (the correct response) for a category that should have excluded cities in Canada.
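A toy version of that connection counting might look like the sketch below. The clue wording, the evidence snippets, and the simple word-overlap score are all invented for illustration; Watson’s real evidence scoring is far more elaborate.

```python
# Illustrative only: rank candidate cities by how many clue words appear
# in the (made-up) evidence attached to each candidate.

clue = "its largest airport is named for a world war ii hero its second for a world war ii battle"

candidates = {
    "Chicago": "o'hare airport named for world war ii hero butch o'hare and midway airport named for the battle of midway",
    "Toronto": "pearson airport named for lester pearson downsview airport second world war aviation history",
}

def connection_score(clue_text, evidence_text):
    """Count distinct clue words that also appear in the candidate's evidence."""
    return len(set(clue_text.split()) & set(evidence_text.split()))

ranked = sorted(candidates, key=lambda city: connection_score(clue, candidates[city]), reverse=True)
print(ranked)  # ['Chicago', 'Toronto'] -- more shared clue words, higher rank
```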

As for the weird wagers (Watson bet $1,246 on one Daily Double and $6,435 on the other), Lally said the Watson team did include game theorists, who programmed winning strategies (like searching the board’s higher-priced clues early for hidden Daily Doubles) into Watson’s game plan. But though human players usually pick round numbers when naming their Daily Double wager, there’s no reason why they have to, so the team decided to let Watson choose its own amount based on its own algorithms.
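There’s no public record of the exact formula Watson used, but a small sketch shows how an algorithmic wager naturally lands on a non-round number. The weights, the betting floor, and the confidence value below are assumptions made up for illustration.

```python
# Hypothetical Daily Double wager picker; the formula and weights are invented.

def daily_double_wager(my_score, leader_score, confidence, min_allowance=1000):
    """Choose a bet from game state instead of a round-number habit.

    confidence is the estimated probability (0..1) of answering correctly.
    """
    max_bet = max(my_score, min_allowance)  # simplified stand-in for the show's betting rules
    gap = max(leader_score - my_score, 0)
    # Bet more when confident and when there is ground to make up; weights are arbitrary.
    bet = confidence * (0.3 * max_bet + 0.5 * gap)
    return int(round(bet))                  # any whole-dollar amount is legal, so no rounding to 100s

print(daily_double_wager(my_score=5200, leader_score=8600, confidence=0.87))  # 2836, an "odd" bet
```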

Although on Tuesday night it looked like Watson had an advantage over his human opponents when it came to “buzzing in,” Welty said that wasn’t the case. Watson is programmed to wait until the light next to host Alex Trebek is lit, alerting players that they are allowed to press the buttons that signal their desire to respond (buzzing in early gets a player locked out momentarily). Human players get a split-second head start, he said, because they are listening for Trebek to finish reading the clue rather than waiting for the light.

Despite Watson’s big international gaffe, Welty pointed out that the machine still won with about $25,000 to spare. And though winning is nice, the ultimate goal of the Watson project isn’t to develop computers that can beat humans at their own game. Rather, the game is a means for developing a computer that can communicate better with its flesh-and-blood counterparts. And by that measure, Watson is a success.

As Welty noted, “Watson is better than any machine before at processing natural language.”

RPI is posting streaming video of its Jeopardy panels (minus the show itself) on its website. The two-game match wraps up tonight, Wednesday; check the Jeopardy website for details.
