He hasn't served in any military campaigns. He's never won a medal. Heck, he's not even alive. But to millions of people, Sgt. Star, the Army's recruitment chatbot, is one of the most recognizable soldiers in America. And he's gathering a lot of personal data.
But what does he do with all the information he collects from would-be recruits? Recently, the civil liberties organization Electronic Frontier Foundation (EFF) decided to conduct a background check on Sgt. Star.
"While Sgt. Star may seem innocuous enough," the EFF reports, "he provides a window into how other agencies may use the technology for far more invasive purposes."
Like Captain America before him, Sgt. Star was created to be a super-soldier, albeit a virtual one. In the aftermath of the 9/11 attacks, the Army was experiencing a 40-percent year-over-year increase in traffic to the chatrooms on its website, goarmy.com. The annual cost of continuing to staff the live chatrooms was estimated at as much as $4 million.
Unveiled to the public in 2006, Sgt. Star currently does the work of 55 human recruiters. Over the last five years, he has answered 10.5 million questions in 2.8 million chat sessions, which means he engages, on average, more than 1,550 people each day.
So, what did the EFF learn about this tireless public servant? Quite a bit, according to its report:
The Army does not have a privacy policy specific to Sgt. Star, but officials say they are not tracking cookies or retaining personal information beyond IP address traffic to goarmy.com. But Sgt. Star is recording the questions he's asked, which the Army acknowledged in an interview … are often more personal than what someone would ask a real recruiter.
Sgt. Star is built on technology developed for the FBI and CIA more than a decade ago to converse with suspects online. Some food for thought: Sgt. Star engaged in almost 3 million conversations over the last five years, and those were people who actually wanted to talk to him. How many people could two CIA and FBI covert bots converse with over 10 years? What happens to conversations that aren't relevant to an investigation, and how do the agencies weed out the false positives, such as when a chatbot misinterprets a benign conversation as dangerous?
For all his character quirks, a user would never mistake Sgt. Star for human—that's just not how he was designed. That can't necessarily be said for other government bots. Military, law enforcement and intelligence agencies have employed virtual people capable of interacting with and surveilling the public on a massive scale, and every answer raises many, many more questions.