»This Week
The federal government’s attempt to impose order on AI through new policy frameworks collided this week with the messy reality of an industry under siege: supply chain attacks compromised LiteLLM and Trivy, while Claude’s explosive growth forced Anthropic to ration usage even as it fought off a Trump administration security designation in court. Google’s Gemini 3.1 Flash Live launched and leaked Claude Mythos benchmarks surfaced while enterprises deployed autonomous agents for CFO-level work faster than regulatory frameworks built for human oversight can adapt. What connects these stories is a fundamental mismatch: AI development and deployment are outpacing every system meant to secure, govern, or constrain them.
- This Week
- Top Stories
- Federal AI Policy and Governance
- AI Supply Chain Malware Attacks
- Anthropic Wins Injunction Against DoW
- Google Gemini Audio and Media Updates
- Claude Usage Growth Skyrocketing
- China AI Geopolitics and Competition
- AI Agents in Enterprise Automation
- AI Startup Funding Trends
- AI Robotics and Autonomous Systems
- Social Media Child Safety Regulation
»Top Stories
»Federal AI Policy and Governance
18 articles
- The White House released a National Policy Framework for AI and legislative recommendations, drawing mixed reactions from policy observers [1] [2] [3]
- Senators introduced legislation requiring the U.S. Energy Information Administration to monitor data center electricity usage and collect power consumption data [4] [5]
- Sanders and AOC proposed a data center moratorium that critics argue lacks economic and technical justification [6]
Why it matters: The federal government is simultaneously trying to establish AI governance standards while grappling with the infrastructure costs of the technology—revealing tension between promoting AI development and managing its resource demands.
Cited sources:
- [1] The Federal AI Policy Framework: An Improvement, But My Offer Is (Still Almost) Nothing thezvi.substack.com
- [2] Statement: Head of US Policy on the White House AI legislative recommendations futureoflife.org
- [3] Unpacking the White House National Policy Framework for AI cset.georgetown.edu
- [4] Data centers get ready — the Senate wants to see your power bills techcrunch.com
- [5] Senators want US energy information agency to monitor data center electricity usage arstechnica.com
- [6] The Sanders-AOC Data Center Moratorium Doesn’t Add Up datainnovation.org
»AI Supply Chain Malware Attacks
8 articles
- Hackers compromised LiteLLM version 1.82.8 with a malicious litellm_init.pth file that steals credentials, affecting approximately 47,000 users [1] [2] [3]
- Attackers also breached the widely used Trivy security scanner in a separate ongoing supply-chain attack targeting open source infrastructure [4]
- Self-propagating malware infected open source software repositories and specifically targeted Iran-based machines for data destruction [5]
Why it matters: These coordinated attacks on popular AI development tools expose critical vulnerabilities in the open source supply chain that developers rely on daily—credential theft at this scale gives attackers potential access to thousands of production AI systems.
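The litellm_init.pth filename from the reporting points at a well-known Python mechanism: any `.pth` file in site-packages is processed at interpreter startup, and lines beginning with `import` are executed as code. A benign sketch of that mechanism (the file name and payload below are stand-ins, not the actual malware):

```python
# .pth files in site-packages run at every interpreter start: Python's
# site module executes any line in them that begins with "import".
# That is why a planted .pth file makes an effective persistent loader.
# This demo writes a harmless .pth file into a throwaway directory and
# processes it the same way site.py does at startup.
import os
import site
import sys
import tempfile

demo_dir = tempfile.mkdtemp()
pth_path = os.path.join(demo_dir, "demo_init.pth")
with open(pth_path, "w") as f:
    # Lines starting with "import " are exec'd by site.addsitedir();
    # a real attack would import or unpack a credential stealer here.
    f.write("import sys; sys.demo_pth_ran = True\n")

# Simulates the startup-time .pth processing of the directory.
site.addsitedir(demo_dir)

print(sys.demo_pth_ran)  # → True
```

The defensive takeaway is that auditing a package’s Python modules is not enough; the install payload (including `.pth` files) can run code before any module is ever imported.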
Cited sources:
- [1] My minute-by-minute response to the LiteLLM malware attack simonwillison.net
- [2] LiteLLM Hack: Were You One of the 47,000? simonwillison.net
- [3] Malicious litellm_init.pth in litellm 1.82.8 — credential stealer simonwillison.net
- [4] Widely used Trivy scanner compromised in ongoing supply-chain attack arstechnica.com
- [5] Self-propagating malware poisons open source software and wipes Iran-based machines arstechnica.com
»Anthropic Wins Injunction Against DoW
6 articles
- A federal judge granted Anthropic a preliminary injunction blocking the Trump administration’s designation of the company as a supply chain security risk, calling the label “Orwellian” [1] [2]
- The designation would have banned Anthropic’s AI models from use by the Defense Department [3] [4]
- The ruling halts enforcement of the ban while the case proceeds [5] [1]
Why it matters: The injunction preserves Anthropic’s access to lucrative government contracts and sets a precedent for how AI companies can challenge national security determinations that lack clear evidence.
Cited sources:
- [1] Anthropic wins injunction against Trump administration over Defense Department saga techcrunch.com
- [2] Federal judge blocks Trump’s ban on Anthropic AI models, calls security risk label “Orwellian” the-decoder.com
- [3] Anthropic vs. DoW #6: The Court Rules thezvi.substack.com
- [4] Anthropic Supply-Chain-Risk Designation Halted by Judge wired.com
- [5] Anthropic vs. DoW Preliminary Injunction Ruling lesswrong.com
»Google Gemini Audio and Media Updates
13 articles
- Google released Gemini 3.1 Flash Live, a real-time multimodal voice model supporting low-latency audio, video, and tool use [1] [2], with performance improvements that could make AI voices harder to distinguish from humans [3]
- Google added a feature allowing users to transfer chat histories and personal information directly from ChatGPT and Claude into Gemini [4] [5]
- Google launched Lyria 3 Pro, which enables users to create longer audio tracks [6]
Why it matters: The combination of more natural-sounding voice AI and easy migration tools positions Google to capture users from competing chatbots while raising new concerns about AI impersonation.
Cited sources:
- [1] Gemini 3.1 Flash Live: Making audio AI more natural and reliable deepmind.google
- [2] Google Releases Gemini 3.1 Flash Live: A Real-Time Multimodal Voice Model for Low-Latency Audio, Video, and Tool Use for AI Agents marktechpost.com
- [3] The debut of Gemini 3.1 Flash Live could make it harder to know if you’re talking to a robot arstechnica.com
- [4] You can now transfer your chats and personal information from other chatbots directly into Gemini techcrunch.com
- [5] Google’s new Gemini update makes it easy to import memories from ChatGPT and Claude the-decoder.com
- [6] Lyria 3 Pro: Create longer tracks in more deepmind.google
»Claude Usage Growth Skyrocketing
10 articles
- Anthropic’s Claude is experiencing skyrocketing growth among paying consumers [1], while the company tweaks usage limits to discourage demand during peak hours [2]
- Leaked Anthropic data revealed a new model called “Claude Mythos” with dramatically higher test scores than any previous model [3], following what sources called “the biggest Claude launch of all time” [4]
- The company positions itself as an antidote to OpenAI’s approach to AI [5], though it reportedly struggles with Chinese competition and the constraints of its own safety focus [6]
Why it matters: Claude’s rapid consumer adoption and leaked performance gains suggest Anthropic is successfully differentiating from OpenAI through safety-conscious positioning, even as infrastructure constraints force rationing at peak times.
Cited sources:
- [1] Anthropic’s Claude popularity with paying consumers is skyrocketing techcrunch.com
- [2] Anthropic tweaks timed usage limits to discourage Claude demand during peak hours go.theregister.com
- [3] Anthropic leak reveals new model “Claude Mythos” with “dramatically higher scores on tests” than any previous model the-decoder.com
- [4] [AINews] The Biggest Claude Launch of All Time latent.space
- [5] Anthropic reportedly views itself as the antidote to OpenAI’s “tobacco industry” approach to AI the-decoder.com
- [6] Anthropic struggling with Chinese competition, its own safety obsession go.theregister.com
»China AI Geopolitics and Competition
8 articles
- A US panel attributed China’s AI advantages to its access to open-source models and dominance in manufacturing capabilities [1]
- Beijing intervened in Meta’s Manus AI project, unsettling tech founders and VCs pursuing “China shedding” strategies [2] [3]
- China faces a brain drain problem as its AI experts increasingly seek to leave the country [4]
Why it matters: China’s AI sector is caught between structural advantages in hardware production and growing geopolitical friction that’s driving talent exodus and complicating Western tech partnerships.
Cited sources:
- [1] US panel credits China’s AI edge to open-source models, manufacturing dominance cset.georgetown.edu
- [2] Beijing’s surprise intervention on Meta’s Manus rattles tech founders, VCs eyeing ‘China shedding’ cnbc.com
- [3] The least surprising chapter of the Manus story is what’s happening right now techcrunch.com
- [4] China’s not thrilled its AI experts want to leave the country go.theregister.com
»AI Agents in Enterprise Automation
24 articles
- AI agents are automating complex business functions including CFO work at Intuit [1], call center operations [2] [3], and financial analysis for family offices [4], moving beyond traditional robotic process automation [5]
- Agentic commerce systems require accurate contextual data to function [6], but face regulatory frameworks designed for human decision-making that will slow deployment [7]
- Utah passed healthcare AI regulation [8], while the ControlAI 2025 Impact Report tracks broader governance efforts [9] and LatticeFlow AI develops validation systems for enterprise AI agents [10]
Why it matters: Enterprises are deploying autonomous AI agents for critical business roles faster than regulators can adapt existing compliance frameworks built for human oversight.
Cited sources:
- [1] Intuit thinks it’s found your company’s next CFO: AI fastcompany.com
- [2] AI companies lick their chops as FCC proposes forcing call center onshoring go.theregister.com
- [3] ‘Empathetic’ Salesforce bots to help those fired by uncaring humans go.theregister.com
- [4] Ocorian: Family offices turn to AI for financial data insights artificialintelligence-news.com
- [5] RPA matters, but AI changes how automation works artificialintelligence-news.com
- [6] Agentic commerce runs on truth and context technologyreview.com
- [7] Agentic Commerce is Coming, but Regulation Meant for Humans Will Slow It Down datainnovation.org
- [8] Utah Shows How States Should Regulate AI in Healthcare datainnovation.org
- [9] ControlAI 2025 Impact Report alignmentforum.org
- [10] CEO Interview: LatticeFlow AI cbinsights.com
»AI Startup Funding Trends
23 articles
- Air Street Capital closed a $232M Fund III to invest in AI-first companies [1], while AI and security startups dominated the week’s largest funding rounds [2] [3]
- Deccan AI raised $25M to compete with Mercor by sourcing technical experts from India [4]
- SoftBank secured a $40B loan that analysts connect to a potential 2026 OpenAI IPO [5]
Why it matters: AI startups continue attracting substantial capital even as overall investment activity slows, with specialized funds and geographic diversification strategies gaining traction.
Cited sources:
- [1] Air Street Capital announces $232M Fund III to back AI-first companies nathanbenaich.substack.com
- [2] The Week’s 10 Biggest Funding Rounds: Investment Slows, But Security And AI Remain Top Picks news.crunchbase.com
- [3] The Week’s 10 Biggest Funding Rounds: A Varied Week For Big Deals, Led By AI And Defense news.crunchbase.com
- [4] Mercor competitor Deccan AI raises $25M, sources experts from India techcrunch.com
- [5] Why SoftBank’s new $40B loan points to a 2026 OpenAI IPO techcrunch.com
»AI Robotics and Autonomous Systems
14 articles
- Researchers developed AI systems for autonomous wheelchair navigation that can handle complex indoor environments [1]
- Niantic uses data from Pokemon Go players to train AI models for mapping and understanding urban spaces [2]
- Engineers trained a driving AI system at 50,000 times real-time speed, dramatically accelerating autonomous vehicle development [3]
Why it matters: These advances show AI navigation systems moving from controlled simulations into real-world applications—from assistive mobility devices to self-driving cars—where training speed and spatial understanding determine commercial viability.
Cited sources:
- [1] AI Aims for Autonomous Wheelchair Navigation spectrum.ieee.org
- [2] Mapping Cities with Pokemon Go datainnovation.org
- [3] Training Driving AI at 50,000× Real Time spectrum.ieee.org
»Social Media Child Safety Regulation
13 articles
- European regulators found major adult platforms including Pornhub, Stripchat, XNXX, and XVideos in breach of the Digital Services Act for failing to block minors from accessing their services [1], while separately investigating Snapchat’s child protection compliance [2]
- Dutch courts ordered Elon Musk’s Grok AI to stop generating nude images [3], and a UK lawmaker targeted by AI deepfakes failed to get accountability from US tech companies [4]
- Recent court rulings against Meta and YouTube over child safety violations could end Big Tech’s legal immunity protections [5] [6], with some countries now planning social media bans for minors [7]
Why it matters: Regulators and courts are dismantling the legal shields that allowed platforms to avoid responsibility for child harm — forcing companies to either redesign their products or face restrictions and liability.
Cited sources:
- [1] Commission preliminarily finds Pornhub, Stripchat, XNXX and XVideos in breach of the Digital Services Act for allowing minors to access their services digital-strategy.ec.europa.eu
- [2] Commission investigates Snapchat’s compliance with child protection rules under the Digital Services Act digital-strategy.ec.europa.eu
- [3] Elon Musk’s Grok ordered to stop creating AI nudes by Dutch court as legal pressure mounts cnbc.com
- [4] Brit lawmaker targeted by AI deepfake fails to get answers from US Big Tech go.theregister.com
- [5] Meta’s legal defeat could be a victory for children, or a loss for everyone theverge.com
- [6] How the Meta and YouTube child safety rulings end Big Tech invincibility fastcompany.com
- [7] The social media ban for kids is spreading. This country is the latest to plan on restrictive legislation fastcompany.com