Curated news, research, and policy across AI safety, child safety, privacy, fraud, and online abuse.
A drive-through analogy shows how prompt injection attacks exploit a structural weakness in AI's large language models—and why the problem is hard to fix.
Large language models (LLMs) have rapidly transformed artificial intelligence applications across industries, yet their integration into production systems has unveiled critical security vulnerabilities, chief among them prompt injection attacks. This comprehensive review synthesizes research ...
Credit: VentureBeat made with Seedream v4.5 on fal.ai · In the chaotic world of Large Language Model (LLM) optimization, engineers have spent the last few years developing increasingly esoteric rituals to get better answers
Master prompt engineering in 2026 with practical prompting techniques, reasoning methods, safety defenses, and real-world examples.
The National Center for Missing and Exploited Children said it received over a million reports tied to AI-generated child sexual abuse material in just nine months.
Technology companies and child ... AI CSAM across multiple platforms. Online platform operators implement detection systems that identify AI-generated content, while organizations like the National Center continue to expand reporting mechanisms. However, the rapid advancement of AI model technology often outpaces detection capabilities, creating an ongoing technological arms race between offenders and those protecting children from sexual abuse...
State and federal bills seek to limit minors’ access to social media, but civil liberties advocates warn that the resulting online censorship threatens constitutional rights without delivering real safety.
What to know about the bipartisan Kids Online Safety Act, which has been reintroduced and has a second chance in front of Congress.
PrivacyWorld’s Alan Friel and Kyle Fath broke down what companies need to consider in 2026 to meet new and ongoing data laws and regulations in a Stafford
Global data protection laws are expanding fast in 2026. Track key regulatory trends, regional risks and what data access governance leaders must do to stay compliant.
The data privacy world is evolving at a break-neck pace—what are the biggest trends in data privacy in 2026 and how should you respond?
IAPP News Editor Joe Duball rounds up the new U.S. state privacy laws and rules that came into force 1 Jan. and the potential impacts they will bring.
Impersonation scams using AI technology rose by 1,400% in 2025, leading to a record-breaking loss of $17 billion in crypto fraud.Bybit intercepted or
Crime In a major escalation of the fight against crypto-enabled fraud, U.S. authorities announced the seizure of more than $580 million in digital assets
Scammers are increasingly employing AI tools in romance scams, making these campaigns even harder to detect and therefore even more dangerous for targets.
US authorities seize $580M in crypto scam tied to Chinese crime networks behind pig butchering scams across Southeast Asia.United States authorities
Unsurprisingly, 2026 starts where 2025 left off: with disinformation escalating, accountability under pressure, and the rules of a free, trustworthy and pluralistic information space openly contested. This first EU DisinfoLab newsletter of the year brings together the stories setting the tone: from US travel bans targeting European counter-disinfo practitioners, to Venezuela and the rapid spread of disinformation following the US operation, and to platforms ...
16-17 February 2026 Amsterdam Law School (University of Amsterdam)
Drawing from the Digital Policy Alert’s daily monitoring of G20 countries, the roundup summarizes the highlights in four core areas of digital policy.
How did the two directors of the German digital rights nonprofit HateAid become targets of the Trump administration? Here’s how they’re continuing their mission.