November 04, 2010

Auto-Argumentation

Oh. This. This is beautiful.

The result is the Twitter chatbot @AI_AGW. Its operation is fairly simple: Every five minutes, it searches Twitter for several hundred set phrases that tend to correspond to the usual tired arguments about how global warming isn't happening or humans aren't responsible for it.

It then spits back at the twitterer who made that argument a canned response culled from a database of hundreds. The responses are matched to the argument in question -- tweets claiming that Neptune is warming just like the Earth, for example, are met with links to scientific sources explaining why that hardly constitutes evidence that a warming sun is the source of global warming on Earth.
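Mechanically, that's about as simple as a bot gets. Nothing of Leck's actual code is described here beyond the above, so the trigger phrases, the canned replies, and the search_twitter/post_reply stubs in this sketch are all assumptions, but the loop might look something like this:

    import time

    # A guess at the mechanics described above. Trigger phrases and
    # canned replies are invented; search_twitter() and post_reply()
    # stand in for whatever Twitter client the real bot uses.

    RESPONSES = {
        "neptune is warming": "Other planets warming says nothing about Earth: <link>",
        "warming stopped in 1998": "1998 was an outlier El Nino year; the trend continued: <link>",
        "it's the sun": "Solar output has been flat while temperatures rose: <link>",
    }

    def search_twitter(phrase):
        """Placeholder: return (user, tweet_id) pairs for tweets containing phrase."""
        return []

    def post_reply(user, tweet_id, text):
        """Placeholder: post text as a reply to @user (tweet_id would be in_reply_to)."""
        print("@%s %s" % (user, text))

    def run_once(seen):
        for phrase, reply in RESPONSES.items():
            for user, tweet_id in search_twitter(phrase):
                if tweet_id not in seen:  # don't answer the same tweet twice
                    seen.add(tweet_id)
                    post_reply(user, tweet_id, reply)

    if __name__ == "__main__":
        seen = set()
        while True:
            run_once(seen)
            time.sleep(300)  # the bot reportedly polls every five minutes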

I need about six of these. I need the "race realist" bot, the "men are just smarter" bot, the gun nut bot, the anti-vax bot, the libertarian bot, and the evangelical/creationist bot. Then, once I don't have to keep repeating myself on those topics, I'll probably need about six more, since I'll have time again to spend on evaluating and formulating new sets of arguments.

Would it do any good? By any external metric, it's hard to say. I don't know that these kinds of arguments ever convince anyone. I do know, however, that they rarely change and that their proponents are generally dogged. That means chatbots like this can waste plenty of time these people would otherwise spend trying to spread misinformation and information-free jingoism. That alone would be worthwhile, as would freeing up more of the time of the people who currently go out of their way to combat this kind of nonsense.

So, who's in?

5 comments:

Ben Zvan said...

You're part way there. From the Gizmodo article:

"As can be the case with these sorts of things, many of the deniers don't know they've been targeted by a robot and engage AI_AGW in debate. The bot will continue to fire back canned responses that best fit the interlocutor's line of debate—Leck says this goes on for days, in some cases—and the bot's been outfitted with a number of responses on the topic of religion, where the arguments unsurprisingly often end up."

D. C. said...

Makes sense. A lot of the "argumentation" on the Net resembles an Eliza session, so I'd expect a 'bot to do quite well in dealing with the predictable regressions to the subject of fixation.

Now, y'all do realize that at some point there will be a bot war with two (or more!) of these going at each other, right?

Heather M. Rosa said...

I'm skeptical that reason works when one's opponent is working from a platform of emotion. Jingoism touches deep feelings, not logic. Good luck. And don't waste too much time on it. Life has better things to offer.

Stephanie Zvan said...

My second favorite part of this is that it uses up more of their time than mine. My first favorite is that it shows that denialists are not competent to administer a Turing test. I mean, come on, "AI" is right there in the name.

D. C., I'm waiting for that to happen, but I'm not sure it will unless the denialist bot is programmed for a certain number of random responses. These arguments tend to go:

(1) Strong denialist statement
(2) Rebuttal of misinformation
(3) Trained response
(4) Rebuttal of further misinformation
(5) Uh...unrelated strong denialist statement
(6) See steps 2-4
(7) Uh...name calling or other unrelated deflection, such as religion
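If anyone wants to see why this pattern could keep two bots busy for days, a toy version of the loop above is easy to write. Every claim and rebuttal below is invented for illustration:

    import random

    # Toy simulation of the pattern above. The point is that the
    # denialist side only needs a random pick from a small pool at
    # each step to keep the exchange going indefinitely.

    OPENERS = ["It's the sun!", "CO2 is plant food!", "Climategate!"]
    TRAINED = ["Scientists just want grant money."]
    DEFLECTIONS = ["Warmism is just your religion."]

    REBUTTALS = {
        "It's the sun!": "Solar output has been flat while temperatures rose: <link>",
        "CO2 is plant food!": "Plants also need stable rainfall and temperatures: <link>",
        "Climategate!": "Multiple independent inquiries found no fraud: <link>",
        "Scientists just want grant money.": "Denial is the better-funded side: <link>",
    }

    def denialist(step):
        if step == 0:
            return random.choice(OPENERS)      # (1) strong denialist statement
        if step == 1:
            return random.choice(TRAINED)      # (3) trained response
        if step == 2:
            return random.choice(OPENERS)      # (5) unrelated denialist statement
        return random.choice(DEFLECTIONS)      # (7) deflection, e.g. religion

    for step in range(4):
        claim = denialist(step)
        print("Denialist:", claim)
        print("Bot:      ", REBUTTALS.get(claim, "Citation, please: <link>"))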

Jason Thibeault said...

I got hit with the AI_AGW bot when fighting with a libertarian anti-AGW douchebag recently. Seriously, it tweeted ME, even though I'm on the side favored by reality and sanity. You know, the one that prefers to look at the evidence we have showing the anthropogenic global warming hypothesis to be absolutely correct, instead of the side that would rather interpret the evidence in context of how they wish the world to be. This tells me that the AI needs to be tweaked to recognize when people are arguing FOR science rather than against it.
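Even a crude filter would help. Something like the check below, with made-up marker phrases, would at least skip tweets that read as defending the science:

    # Guessing at the tweak: before replying, screen out tweets that
    # look like they're arguing FOR the science. Markers are invented.

    PRO_SCIENCE_MARKERS = [
        "scientific consensus", "peer-reviewed", "evidence shows",
        "anthropogenic global warming is real",
    ]

    def looks_pro_science(text):
        lowered = text.lower()
        return any(marker in lowered for marker in PRO_SCIENCE_MARKERS)

    def should_reply(text):
        return not looks_pro_science(text)

    print(should_reply("Neptune is warming too, so it must be the sun"))  # True
    print(should_reply("The peer-reviewed evidence shows it's us"))       # False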

Still though. XKCD was onto the same idea very recently. http://xkcd.com/810/