CSAM, grooming, age verification, COPPA, KOSA, and predatory behavior online.
132 articles across 6 topics
On April 3, 2026, the legal basis allowing platforms to detect child sexual abuse material (CSAM) in Europe will expire. Without it, online service providers across the EU can no longer proactively detect and remove this content from their platforms – not because the tools don’t exist, but ...
Juries in New Mexico and California have accomplished something Congress has spent years unsuccessfully trying to do: holding social media companies accountable for endangering the children who use their platforms.
European Union regulators are probing Snapchat over concerns that the platform inadequately protects children from risks including exposure to predators and criminal recruitment. The investigation focuses on Snapchat's age verification systems, which are suspected of being insufficient ...
The European Parliament has voted not to prolong an interim derogation from e-Privacy rules that allows online service providers to voluntarily detect, remove, and report child sexual abuse material (CSAM) and grooming, meaning this essential safeguard is set to expire on 3 April 2026.
Two jury verdicts have found social media harmful to children. Other countries, including Australia, Brazil and Malaysia, already have restrictions for kids.
The European Commission suspects that the US social network is in breach of EU law on digital services, exposing minors to "grooming attempts and recruitment for criminal purposes" and to "information relating to the sale of illegal goods"
The vote followed weeks of clashes, as national governments pushed the European Parliament to drop its privacy objections to the rules.
LONDON (AP) — European Union regulators are investigating Snapchat over concerns the platform isn't doing enough to protect kids and exposing them to risks such as increased vulnerability to child predators or recruitment by criminals.
The UK is moving forward with its efforts to ban social media for young people. Ahead of this week’s House of Lords debate on the topic, we’re getting you situated with a primer on what’s been
House Republicans revived federal child online safety legislation, but in doing so they stripped out the provision that gave KOSA its sharpest legal edge.
The family of 17-year-old Hailey Buzbee has released the first draft of "Hailey's Law," a legislative proposal aimed at strengthening Indiana's response to missing children cases.
A senator on Monday filed a resolution seeking to investigate Roblox and other online gaming platforms amid reports of games being used to plan acts of violence. Under Senate Resolution No. 357, Senator Ana Theresia Hontiveros-Baraquel, who chairs the Senate Committee on Women, Children, ...
Pinterest CEO Bill Ready urged governments to ban social media access for users under 16, praising Australia's youth social media restrictions and arguing tech companies have failed to prioritize child safety. The debate centers on youth mental health, age verification, and whether governments should impose stronger protections for minors online.
In 48 hours, lawmakers moved more aggressively than they have in nearly a decade, but will it pass a challenge in federal court?
New tools enabled by the REPORT Act have made it easier to submit seized CSAM, according to Shehan. NCMEC launched an Electronic File Submission system on January 29, 2026, allowing law enforcement to submit seized CSAM electronically instead of mailing physical media.
UK regulators warn tech companies to implement age assurance controls, New Zealand legislators propose an overhaul to country’s online safety laws, and WhatsApp introduces parental controls – plus other key updates.
EU countries and lawmakers on Monday failed to agree to an extension of a temporary measure governing how Alphabet's (GOOGL.O) Google, Meta Platforms and other online platforms tackle child sexual abuse material, leaving a legal vacuum on the issue.
The threat of online child exploitation continues to loom despite the government’s plan to restrict children’s access to certain digital platforms, as experts warn that predators can easily move across multiple online spaces where young users interact.
The European Commission has published the long-awaited guidelines clarifying how online platforms such as social media platforms, online marketplaces, app stores and other content-sharing services should protect minors under Article 28(1) of the Digital Services Act (“DSA”). Published on ...
EU extends CSAM rules to allow voluntary detection of child abuse material online while negotiations on permanent law continue.
The Stop Online Predators Act was included in the governor’s State of the State proposals.
Stay up-to-date on the latest legislative developments related to children and digital media with "Policy Update" in Children and Screens' monthly newsletter ScreenShots. On March 5, the House Energy and Commerce Committee (E&C) held a markup of several child online safety bills, including ...
MEPs support extending an exemption to privacy legislation allowing the voluntary detection of child sexual abuse material online until 3 August 2027.
The EPP Group is calling on the other political groups in the European Parliament to vote to extend EU rules that allow online platforms to detect child sexual abuse material online. Without an extension before the 3 April deadline, platforms could...
The measure included the Kids Online Safety Act, though House Democrats contended the bill would leave a “giant loophole” for Big Tech.
IAPP Staff Writer Alex LaCasse reports on the U.S. House Committee on Energy and Commerce advancing the KIDS Act along party lines to a full vote before the House of Representatives.
Karnataka, home to tech hub Bengaluru, plans to ban children under 16 from social media platforms, joining a growing global movement that includes Australia, Indonesia, and Malaysia — though enforcement details remain unclear.
Congress is struggling for consensus on legislation to protect kids in the digital age after years of legislative work, but at least one bill stands a shot at becoming law this year.
Bills that would seek to provide greater online protections for kids moved forward on both sides of Capitol Hill on Thursday, though a House measure was pulled back from a committee vote in a bid for bipartisan support. The Senate passed a bipartisan bill by unanimous consent that would amend ...
House Republicans forged ahead on a child online safety package over the vocal opposition of committee Democrats.
Senate versions of the Kids Online Safety Act and COPPA 2.0 have bipartisan support.
A GOP-led package of kids’ online safety bills, including the landmark Kids Online Safety Act (KOSA), advanced out of a House committee Thursday amid heavy pushback from Democrats and technology safety advocates. In a lengthy bill markup Thursday, House Energy and Commerce Chair Brett Guthrie ...
A House committee has revamped its version of kids online safety legislation for at least the third time, losing bipartisan support in the process and likely impairing its ability to pass the House.
The House Energy and Commerce Committee brought a social media bill to the markup process before it heads to the floor.
House Energy and Commerce Chair Brett Guthrie (R-Ky.) will roll out a new legislative package focused on kids’ online safety this week after negotiations between Republicans and Democrats fell apart. Guthrie will introduce the package, dubbed the Kids Internet and Digital Safety (KIDS) Act, ...
House members tacked on new penalties for AI and social media tech companies, along with social media controls for minors including age verification, parental access controls and a curfew.
Dear Chairman Guthrie and Ranking Member Pallone: On behalf of the National Conference of State Legislatures (NCSL), the bipartisan organization representing the legislatures of our nation's states, territories and commonwealths, we write to express our appreciation for the House Energy and ...
Commentary, March 2, 2026: TikTok’s and Meta’s 2025 DSA risk assessments describe a range of risks and a multitude of mitigations addressing risks to minors: screentime management, parental controls, privacy-oriented design defaults, and restrictions on notifications.
All new smartphones and tablets ... of child sexual abuse material (CSAM), under a proposal to be voted on in the House of Lords this week. The amendment to the crime and policing bill, which has cross-party support, would require tech companies to embed an AI tool to detect CSAM on ...
What to know about the bipartisan Kids Online Safety Act, which has been reintroduced and has a second chance in front of Congress.
State and federal bills seek to limit minors’ access to social media, but civil liberties advocates warn that the resulting online censorship threatens constitutional rights without delivering real safety.
West Virginia Attorney General JB McCuskey wants you to think he's protecting children. His press release says so. His legal complaint opens with the genuinely horrific line that Apple has, in internal communications, described itself as the "greatest platform for distributing child porn."
House Republicans have long expressed concerns about KOSA, citing potential infringements on First Amendment rights and risks of censorship—concerns that ultimately dragged down COPPA 2.0 alongside KOSA. Constitutional objections clearly remain for House leadership. During a December 2025 ...
Acting Attorney General Jennifer Davenport announced that she has joined together with a bipartisan coalition of attorneys general from around the country in urging Congressional leadership to protect children from online harm and pass the Senate version of the Kids Online Safety Act (KOSA), S.1748.
UK: Online safety and age assurance | UK government to consult on social media ban for children | Online Safety Act updates - New priority offences; Ofcom expedites decision on measures to block non-consensual intimate images; call for evidence on a statutory report on content harmful to children; ...
A bipartisan coalition of 40 state and territorial attorneys general urges congressional leadership to support the Senate version of the Kids Online Safety Act (KOSA), S. 1748.
Age verification mandates won't magically keep young people safer online, but that has not stopped governments around the world from spending 2025 implementing or attempting to introduce legislation requiring all online users to verify their ages before accessing the digital space.
COLUMBIA, Mo. (KMIZ) Three bills that add age verification to social media and AI programs are up for executive action Monday by the Missouri House's Emerging Issues Committee. House Bills 3393 and 2392, sponsored by Representatives Don Mayhew (R-Crocker) and Marty Murray (D-St.
Nine in 10 Americans support age verification laws online. Almost as many say those laws aren't working, and they know because they’ve gotten around them, too.
European Union regulators accused several adult content platforms of failing to adequately prevent minors from accessing explicit material.
With online age checks spreading across the globe, users are turning to VPNs to bypass the new requirements. Now, some lawmakers are promising to rein in the tech.
Study finds problems in 21 of 25 platforms run by major companies targeting children and teenagers
On-device face scans and cross-platform age keys decrease privacy risks, but trust issues abound.
Governments worldwide are introducing age checks and social media restrictions for minors. Here's how the global timeline is unfolding.
Social media firms need to use better age verification technologies to keep children off their platforms, the U.K.'s Information Commissioner's Office said.
Instagram, Snapchat, TikTok, YouTube and Roblox are among the platforms UK regulators say aren't putting children's safety at the heart of their products.
The legal campaign against state social media age check laws is entering a more precarious phase for NetChoice and the CCIA.
The UK and Australia aren't the only places cracking down.
For years, tech companies successfully resisted pressure from child safety advocates to do more to keep kids off their services, claiming technical limitations would make any attempt to restrict access for teens impractical, overly broad or a security risk.
New proposals would require stronger safeguards across digital platforms while placing age verification at center of effort to protect minors online.
Across the country, lawmakers are trying to figure out how — or whether — government should step in when it comes to kids and the internet. Some proposals focus on social media platforms. Others target app stores. A few states have gone a step further, looking at the devices themselves.
Governments are building identity verification into every digital layer: OS, app store, platform, SIM card. Different names, same infrastructure.
New age-verification laws and tools are designed for child safety on social media and the internet, but adults are in the crosshairs, say privacy experts.
Bad legislation, but an especially big headache for FOSS
Countries around the world are tightening rules to verify the age of children on social media platforms. New systems use AI face scans, government ID checks, and behaviour analysis to detect underage users, though privacy and accuracy concerns remain.
What started as age gates on adult websites has quietly crept into app stores and operating systems.
Explore the key trends shaping online age verification in 2026, including biometric age estimation, privacy-preserving technologies, and standards-based approaches.
In our next look at US state law developments in the children's space, we turn to age verification laws. These laws are focused on either app stores or platforms.
In an open letter, over 400 computer scientists caution governments against imposing age restrictions on internet platforms.
On February 25, 2026, the Federal Trade Commission (FTC) issued an Enforcement Policy Statement Promoting the Adoption of Age-Verification Technology signaling a significant shift in its enforcement approach under the Children’s Online Privacy Protection Act (COPPA).
Technology companies say age verification through a live selfie or an upload of government ID is how children will be kept off harmful sites.
Big Social is reaching its Big Tobacco moment as regulations intensify and social media companies look to implement age verification methods on their platforms.
As online platforms prepare to enforce stricter age verification rules, they face a complex technical challenge: confirming users’ ages without exposing sensitive personal data. Systems that rely on biometric analysis or government-issued ...
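The privacy tension described above can be illustrated with a minimal sketch of a signed age token: a trusted verifier attests to a single boolean claim, and the platform checks the attestation without ever seeing a name, date of birth, or ID scan. Everything here — the `issue_age_token` and `platform_verify` helpers and the shared HMAC key — is hypothetical, not any vendor's actual scheme; deployed systems use public-key signatures or zero-knowledge credentials rather than a shared secret.

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical key held by a trusted age-verification provider.
# (A real scheme would use asymmetric keys so platforms cannot forge tokens.)
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(over_18: bool) -> dict:
    """Issue a token carrying only a boolean claim plus a random nonce --
    no name, date of birth, or ID scan ever leaves the verifier."""
    claim = json.dumps({"over_18": over_18, "nonce": secrets.token_hex(8)})
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_verify(token: dict) -> bool:
    """The platform learns a single bit: is this user over 18?"""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False  # forged or tampered token
    return json.loads(token["claim"])["over_18"]
```

The design point is data minimization: the platform can enforce the age gate while the identity documents stay with the verifier, which is the property privacy advocates say biometric or ID-upload systems lack.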
Apple complies with new age-assurance laws in the U.S. and abroad, including those that block users from downloading apps aimed at adults.
A growing number of online platforms are adopting age verification measures, raising concerns from users about privacy, security, and censorship.
Discord is delaying its age verification rollout after backlash from users, and has addressed privacy concerns in a new blog.
Age verification is forcing companies to undermine data privacy laws.
2026 guide to global online age regulations: key changes, practical implications, and implementation choices for secure, low-friction age-control flows.
Discord has begun rolling out mandatory age verification and the internet is, understandably, freaking out. We’ve written extensively about why age verification mandates are a censorship and surveillance nightmare. Discord’s shift only reinforces those concerns.
Police across the UK are struggling to keep up as cases of child sexual abuse images soar online. Experts warn the true scale is hidden, with AI, encrypted messaging and social media fuelling a growing crisis that leaves children at risk and authorities stretched to the limit.
London: The Internet Watch Foundation (IWF) has reported that the amount of AI-generated child sexual abuse material found online rose by 14 per cent in 20...
AI-generated abuse content surged in 2025, with watchdogs warning that the technology is making harmful material easier to create and spread.
Internet Watch Foundation identifies over 8,000 AI-made CSAM files, with majority of videos classified in most severe category under UK law - Anadolu Ajansı
The rapid evolution of artificial intelligence has come with the dilemma of its tools being misused. According to a safety watchdog, AI misuse has fuelled an alarming surge in child sexual abuse content...
AI-generated content is more explicit, extreme and complex than other types of child pornography that have been seen in the past, says the Internet Watch Foundation.
Internet Watch Foundation verified 8,029 pieces of realistic AI-made content, with 65% of videos in worst category
Teens will be sentenced Wednesday after admitting to creating AI CSAM.
Three teenagers in Tennessee have sued Elon Musk’s xAI, claiming the company’s image-generation tools were used to morph real photos of them into explicitly sexual images.
Exclusive: eSafety commission pointed to Musk’s promise that ‘removing child exploitation is priority #1’ in letter obtained by Guardian Australia
With technology evolving every day and predators learning new tactics, state criminal investigators say Artificial Intelligence poses a different challenge when prosecuting predators with sexually explicit content involving children.
Lawsuit details how sexualized AI-generated images were produced and distributed without girls’ knowledge
COLUMBUS, Ohio (WCMH) — During his final State of the State Address this week, Ohio Gov. Mike DeWine called on the legislature to outlaw child sexual abuse material generated by artificial intelligence, specifically praising one bill currently making its way through the Ohio Senate.
A Cybertip received in January has led the Centre Police Department (CPD) to 21-year-old Malaki Ray Sipsy, who was allegedly using AI to make CSAM.
Florida lawmakers passed a bill to increase penalties for child sex crimes and AI-generated material. Here's what to know.
Technology companies and child ... AI CSAM across multiple platforms. Online platform operators implement detection systems that identify AI-generated content, while organizations like the National Center continue to expand reporting mechanisms. However, the rapid advancement of AI model technology often outpaces detection capabilities, creating an ongoing technological arms race between offenders and those protecting children from sexual abuse...
An anonymous tip led police to discover the images Joel Salinas reportedly made of his classmates using AI technology.
Reports of artificial intelligence-generated child sexual abuse material (CSAM) skyrocketed from 4,700 in 2023 to more than 400,000 in just the first half of 2025, according to the National Center for Missing and Exploited Children.
InvestigateTV+ takes an in-depth look at a new era of risk for children, the push for stronger protection, plus sexual abuse survivors share their trauma to encourage parents to engage in difficult but important conversations.
Deepfake pornography is rapidly expanding as generative artificial intelligence (GAI) enables the manipulation of innocent images into sexually explicit content. The National Center for Missing & Exploited Children (NCMEC) reported GAI-related child sexual exploitation cases rising from 6,835 ...
The House approved HB47 on Feb. 27, 2026, creating new felonies for AI-generated child sexual abuse material and adding strict social media limits for minors; the bill now goes to the Senate.
The National Center for Missing and Exploited Children said it received over a million reports tied to AI-generated child sexual abuse material in just nine months.
Stability AI said it had introduced safeguards to enhance its safety standards and “is deeply committed to preventing the misuse of our technology, particularly in the creation and dissemination of harmful content, including CSAM.” Amazon and OpenAI, when asked to comment, pointed to reports they posted online that explained their efforts to detect and report child sexual abuse material...
Alaska law enforcement officials are struggling to prosecute child sexual abuse material cases as artificial intelligence makes it easier for criminals to generate synthetic images.
The removal of the current legal basis enabling voluntary detection by online platforms could have far-reaching implications for safeguarding children. Last year alone, Europol processed around 1.1 million so-called CyberTips originating from the National Center for Missing & Exploited Children ...
Meta's end-to-end encryption raises child safety concerns: a New Mexico trial revealed internal warnings about disappearing abuse reports, highlighting the tension between privacy and protection.
Get the latest online child sexual abuse and exploitation statistics from one of the leaders in child safety technology. Information updated in January of 2026
Edinburgh-based Cyacomb adds Similarity Matching to Examiner Plus, helping police spot altered child abuse images on phones in minutes.
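Cyacomb's Similarity Matching itself is proprietary, but the general technique behind spotting altered images — perceptual hashing, where visually similar images yield nearby hashes — can be sketched in a few lines. This is an illustrative difference-hash over a pre-scaled grayscale grid, not Cyacomb's algorithm; the function names and the distance threshold are assumptions for the sketch.

```python
def dhash_bits(pixels):
    """Difference hash over a 2D grayscale grid: emit 1 wherever a pixel
    is brighter than its right-hand neighbour, row by row."""
    return [
        1 if row[x] > row[x + 1] else 0
        for row in pixels
        for x in range(len(row) - 1)
    ]

def hamming(a, b):
    """Count of bit positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

def similar(img_a, img_b, max_distance=10):
    """Flag two images as likely the same picture if their perceptual
    hashes are within max_distance bits (threshold chosen arbitrarily)."""
    return hamming(dhash_bits(img_a), dhash_bits(img_b)) <= max_distance
```

The reason this matters for investigators: an exact cryptographic hash changes completely after recompression, cropping, or a colour tweak, while a perceptual hash changes only a few bits — so known material can still be flagged after alteration.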
Here's what clinicians must do when patients admit viewing CSEM: more on state reporting rules, confidentiality limits, and why California is different.
300 million children are affected by tech-facilitated abuse each year—Paul Gullon-Scott examines what Childlight's Into the Light Index 2025 findings mean for digital forensic investigators on the frontline of CSAM cases.
Criminals are using artificial intelligence to exploit children. Even though the content is generated, the consequences are real.
As investigations continue into Dalten Johnson’s reported child sexual abuse situation, more victims from his alma mater are stepping forward with their stories.
Multiple state attorneys general and dozens of families have now sued Roblox.
There are no barriers to entry on Leomatch, with users needing only to key in their name, age and location. Read more at straitstimes.com.
A growing transnational network known as the “764 Network” is exploiting children online through social media, gaming platforms, and chat apps. Victims are groomed and coerced into producing abusive content or self-harm, while perpetrators share the material for status within underground ...
Pennsylvania State Police are warning parents about the growing dangers children face online.
Utah investigators are warning about sadistic online predators who threaten, manipulate and even physically harm children.
Los Angeles County sued the online gaming platform Roblox for its alleged failure to protect children from danger.
A growing number of families are suing Roblox after learning their children were groomed, exploited, or exposed to sexually explicit content through the platform. Roblox spent years branding ...
Investigation comes amid growing scrutiny of the impact and liability of technology platforms towards kids.
New Mexico jury fines Meta $375M for failing to protect children from predators on its platforms.
Sen. Josh Hawley announces investigation into Google following child sex trafficking hearing, requesting internal policies on detection and removal of abuse material.
The lawsuit accuses Apple of prioritizing privacy branding and its own business interests over child safety.
Los Angeles County has filed a lawsuit against Roblox, the popular online gaming platform, alleging the company has failed to adequately protect children from predatory behavior and grooming on its platform.
Two juries will now decide whether Meta’s platforms crossed legal lines on child safety, age verification, and addictive design.