The European Union just announced that it will soon roll out a new online age-verification app.
“Our European age verification app is technically ready and soon available for citizens to use,” European Commission President Ursula von der Leyen said at a press conference on Wednesday. “This app will allow users to prove their age when accessing online platforms, just like shops ask for proof of age for people buying alcoholic beverages.”
The app, a Europe-wide, government-approved age-verification measure, would be a significant step toward an eventual EU-wide social media ban for minors.
Countries around the world have taken regulatory action at various stages, inspired by a landmark Australian bill that went into effect in December 2025 and effectively banned kids under 16 from social media platforms. That regulatory momentum is particularly strong in Europe, where at least 15 governments, including the United Kingdom’s, have taken some form of action.
At the press conference on Wednesday, von der Leyen said that Ireland, Spain, France, Cyprus, Denmark, Greece, and Italy were already planning on adopting the EU app. She also said that the Commission was convening a special panel on children’s online safety, which would meet on Thursday, and would deliver a set of recommendations for all EU member states by the summer.
“We need a harmonized European approach,” von der Leyen said.
Many critics of social media bans and mandatory online age verification worry about the privacy risks that come with such regulations. Some experts warn that age verification requirements could create mass surveillance systems ripe for abuse by bad actors.
EU officials say the app, which would work on any device, will be “completely anonymized” to protect privacy and will follow the same principles as the EU’s COVID-19 digital certification app, which became the blueprint for similar digital certificates in other countries and was eventually adopted by the World Health Organization.
EU tech chief Henna Virkkunen said that the privacy measures will be built on a cryptographic method called zero-knowledge proofs, and that the app would be open-source, adding that private companies and partner countries would be free to use it as a blueprint.
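The Commission has not published the app’s actual cryptographic design, but the general idea behind a zero-knowledge proof can be sketched with a classic textbook example: the Schnorr identification protocol, in which a prover convinces a verifier that it knows a secret value (here standing in for an age credential) without ever transmitting that secret. The parameters below are toy choices for readability and are purely illustrative; nothing here reflects the EU app’s real construction.

```python
import secrets

# Illustrative Schnorr identification protocol, a classic
# zero-knowledge proof of knowledge: the prover shows it knows
# x with y = g^x mod p, without revealing x.
# Toy parameters for readability; real deployments use
# standardized elliptic-curve groups.
p = 2**255 - 19          # a well-known large prime
g = 2                    # public generator

def prover_commit():
    """Step 1: prover picks a random nonce and sends a commitment."""
    r = secrets.randbelow(p - 1)
    t = pow(g, r, p)
    return r, t

def verifier_challenge():
    """Step 2: verifier replies with a random challenge."""
    return secrets.randbelow(2**128)

def prover_respond(x, r, c):
    """Step 3: prover answers; the random nonce r masks the
    secret x, so the response reveals nothing about it."""
    return (r + c * x) % (p - 1)

def verify(y, t, c, s):
    """Step 4: verifier checks g^s == t * y^c (mod p)."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# Run one round of the protocol.
x = secrets.randbelow(p - 1)   # the prover's secret credential
y = pow(g, x, p)               # its public counterpart
r, t = prover_commit()
c = verifier_challenge()
s = prover_respond(x, r, c)
print(verify(y, t, c, s))      # True: proof accepted, x never sent
```

In a real age-verification scheme, the statement being proven would be something like “this credential attests an age over 18,” but the structure is the same: the verifier learns that the claim holds and nothing else.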
Many tech executives, though not all, have opposed the regulatory push for age verification, arguing that compliance would be costly. Pornhub executives and Meta CEO Mark Zuckerberg were among the tech leaders who have instead campaigned for device-level age checks, a measure Apple recently announced it would implement in the United Kingdom.
“Online platforms can easily rely on our age verification app, so there are no more excuses,” von der Leyen said. “We will have zero tolerance for companies that do not respect our children’s rights, and this is why we are moving ahead with full speed and determination on the enforcement of our European rules.”
The European Union’s adoption of a social media age-verification requirement, as similar initiatives gain speed elsewhere, could have implications for the United States. American regulators have not always followed their European counterparts’ lead, as with the COVID vaccine certification system von der Leyen cited, but EU tech regulation has influenced the United States before. When the EU enacted cookie consent laws, for example, many digital platforms began showing consent pop-ups to American users too, because adapting a single system was cheaper than building separate European versions.
And just because the Trump administration isn’t keen on tightening the leash on tech companies doesn’t mean the states aren’t. Over the past year, states including California, Utah, Louisiana, and Texas have passed statewide online age-restriction laws that rely on device-level age checks.
Social media platform operators have also been under increasing legal scrutiny in the United States, particularly after the verdicts in two bellwether social media lawsuits last month opened the floodgates to holding tech companies accountable for the impact their platforms have on society.
Late last month, Meta was found liable for exposing children to sexual predators in a New Mexico case and for endangering young people’s mental health through addictive design features like infinite scroll in a California lawsuit. The verdicts are significant because, until then, platform operators had been shielded from liability for third-party content by Section 230 of the Communications Decency Act.