“Google uses information to improve our services and to develop new products, features and technologies that benefit our users and the public,” the new Google policy says. “For example, we use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”
Fortunately for history fans, Google maintains a history of changes to its terms of service. The new language amends an existing policy, spelling out new ways your online musings might be used for the tech giant's AI tools.
Previously, Google said the data would be used "for language models," rather than "AI models," and where the older policy mentioned only Google Translate, Bard and Cloud AI now make an appearance.
The practice raises new and interesting privacy questions. People generally understand that public posts are public. But today, you need a new mental model of what it means to write something online. It's no longer a question of who can see the information, but how it could be used. There's a good chance that Bard and ChatGPT ingested your long-forgotten blog posts or 15-year-old restaurant reviews. As you read this, the chatbots could be regurgitating some homunculoid version of your words in ways that are impossible to predict and difficult to understand.
One of the less obvious complications of the post-ChatGPT world is the question of where data-hungry chatbots sourced their information. Companies including Google and OpenAI scraped vast portions of the internet to fuel their robot habits. It's not at all clear that this is legal, and the next few years will see the courts wrestle with copyright questions that would have seemed like science fiction a few years ago. In the meantime, the phenomenon already affects consumers in some unexpected ways.
The overlords at Twitter and Reddit feel particularly aggrieved about the AI issue, and made controversial changes to lock down their platforms. Both companies turned off free access to their APIs, which had allowed anyone who pleased to download large quantities of posts. Ostensibly, that's meant to protect the social media sites from other companies harvesting their intellectual property, but it's had other consequences.
Twitter and Reddit’s API changes broke third-party tools that many people used to access those sites. For a minute, it even seemed Twitter was going to force public entities such as weather, transit, and emergency services to pay if they wanted to Tweet, a move that the company walked back after a hailstorm of criticism.
Lately, web scraping is Elon Musk's favorite boogeyman. Musk blamed a number of recent Twitter disasters on the company's need to stop others from pulling data off his site, even when the issues seem unrelated. Over the weekend, Twitter limited the number of tweets users were allowed to look at per day, rendering the service almost unusable. Musk said it was a necessary response to "data scraping" and "system manipulation." However, most IT experts agreed the rate limiting was more likely a crisis response to technical problems born of mismanagement, incompetence, or both. Twitter did not answer Gizmodo's questions on the subject.
On Reddit, the effect of the API changes was particularly noisy. Reddit is essentially run by unpaid moderators who keep the forums healthy. Mods of large subreddits tend to rely on third-party tools for their work, tools built on the now-inaccessible APIs. That sparked a mass protest in which moderators essentially shut Reddit down. Though the controversy is still playing out, it's likely to have permanent consequences as spurned moderators hang up their hats.