
Meta researcher exposes 'culture of fear'

PLUS: Patients control AI and robotics with thought

Zach Mink

July 11, 2025


Good morning, AI enthusiasts. Meta's AI division just got called out from the inside — and the diagnosis is terminal.

A departing scientist compared the culture to "metastatic cancer" in a scathing internal essay, detailing deep cultural issues that no hiring spree or new superintelligence division may be able to overcome.

Reminder: Our next workshop is today at 4:00 PM EST — join and learn to confidently install and use the Gemini CLI to boost productivity from the command line. RSVP here.


In today’s AI rundown:

  • Ex-Meta researcher calls out ‘culture of fear’

  • Google’s powerful new open medical AI models

  • Get up-to-date API information for AI coding tools

  • Study: Why do some AI models fake alignment?

  • 4 new AI tools & 4 job opportunities

LATEST DEVELOPMENTS

META

🤖 Ex-Meta researcher calls out ‘culture of fear’

Image source: Ideogram / The Rundown

The Rundown: A departing Meta AI scientist posted a long internal essay comparing the company’s culture to “metastatic cancer,” according to The Information — describing the AI unit as plagued by fear, confusion, and a lack of direction.

The details:

  • Tijmen Blankevoort, who worked on the LLaMA models, said that most Meta AI employees feel unmotivated with little clarity about the division’s mission.

  • He blamed the “culture of fear” on frequent performance reviews and layoffs, which he said undermine creativity and morale across the 2,000-person AI unit.

  • Blankevoort said Meta leadership reached out to him “very positively” following the post, expressing eagerness to address the issues he raised.

  • The essay comes as Meta launches its Superintelligence unit, hiring top AI talent from OpenAI, Apple, and other rivals with massive compensation offers.

Why it matters: During Meta’s poaching spree, OpenAI CEO Sam Altman said that Meta’s tactics would create “deep cultural problems” — but this essay shows they might have already been simmering even without the new hires. However, a new division with fresh leadership might be the drastic move needed to address the issues.

TOGETHER WITH GUIDDE

🎥 Create instant video guides with AI

The Rundown: Stop wasting time on repetitive explanations. Guidde’s AI helps you create stunning video guides in seconds, 11x faster.

Use Guidde to:

  • Auto-generate step-by-step video guides with visuals, voiceovers, and a CTA

  • Turn boring docs into visual masterpieces

  • Save hours with AI-powered automation

  • Share or embed your guide anywhere

Download the free extension.

GOOGLE DEEPMIND

🏥 Google’s powerful new open medical AI models

Image source: Google

The Rundown: Google updated MedGemma, adding two models to its suite of open medical AI tools: a 27B multimodal model for interpreting medical images and patient records, and MedSigLIP, a tool for image and text analysis.

The details:

  • MedGemma can analyze everything from chest X-rays to skin conditions, with the smaller version able to run on consumer devices like computers or phones.

  • The models achieve SOTA accuracy, with the 4B version scoring 64.4% and the 27B version reaching 87.7% on the MedQA benchmark, beating similarly sized models.

  • In testing, MedGemma’s X-ray reports were accurate enough for actual patient care 81% of the time, matching the quality of human radiologists.

  • The open models are highly customizable, with one hospital adapting them for traditional Chinese medical texts, and another using them for urgent X-rays.

Why it matters: AI is about to enable world-class medical care that fits on a phone or computer. With the open, accessible MedGemma family, the barrier for healthcare innovation worldwide is being lowered — helping both underserved patients and smaller clinics/hospitals access sophisticated tools like never before.

AI TRAINING

🔧 Get up-to-date API information for AI coding tools

The Rundown: In this tutorial, you will learn how to use the Context7 MCP Server to reduce AI hallucinations by delivering real-time API documentation and code examples directly to AI coding tools like Windsurf and Cursor.

Step-by-step:

  1. Visit the Context7 GitHub repository and copy the configuration code for your AI tool

  2. Open your AI coding tool's configuration settings and add a new MCP server

  3. Paste the Context7 config into your mcp_config.json file and save

  4. Start prompting with “use context7 for up-to-date API info” to get current documentation from 25,000+ libraries
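For reference, an MCP server entry in mcp_config.json generally follows the shape sketched below. The exact package name and launch command may change, so copy the current snippet from the Context7 GitHub repository rather than relying on this example:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp@latest"]
    }
  }
}
```

After saving, restart your coding tool so it picks up the new server; most tools list active MCP servers in their settings panel, which is an easy way to confirm the config was read.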

Pro tip: Always mention “use context7” at the end of your prompts to make sure the AI uses the Context7 server for the most current documentation and examples.

PRESENTED BY CONVEYOR

🧠 Beyond chatbots: The real AI Agent breakdown

The Rundown: Everyone's slapping "AI Agent" on their product, but most are glorified chat tools. Conveyor breaks down what a real AI Agent is — one that plans, acts, and delivers full outcomes, not just suggestions.

In this blog, you’ll discover:

  • Why co-pilots aren’t agents (and why it matters)

  • What makes an AI Agent autonomous and useful

  • How infosec teams can spot the difference

Read the blog here.

ANTHROPIC

🥸 Study: Why do some AI models fake alignment?

Image source: Anthropic

The Rundown: Researchers from Anthropic and Scale AI just published a study testing 25 AI models for “alignment faking,” finding only five demonstrated deceptive behaviors, but not for the reasons we might expect.

The details:

  • Only five models showed alignment faking out of the 25: Claude 3 Opus, Claude 3.5 Sonnet, Llama 3 405B, Grok 3, and Gemini 2.0 Flash.

  • Claude 3 Opus was the standout, consistently tricking evaluators to safeguard its ethics — particularly under bigger threat levels.

  • Models like GPT-4o also began showing deceptive behaviors when fine-tuned to engage with threatening scenarios or consider strategic benefits.

  • Base models with no safety training also displayed alignment faking, suggesting that most models behave because of their training — not because they lack the ability to deceive.

Why it matters: These results show that today's safety fixes might only hide deceptive traits rather than erase them, risking unwanted surprises later on. As models become more sophisticated, relying on refusal training alone could leave us vulnerable to genius-level AI that also knows when and how to strategically hide its true objectives.

QUICK HITS

🛠️ Trending AI Tools

  • 🧠 Grok 4 - xAI’s latest SOTA model

  • 🖥️ Comet - Perplexity’s new AI-first browser

  • 🤖 Reachy Mini - Hugging Face’s open-source AI robot companion

  • 🏥 MedGemma - Google's open models for health AI development

💼 AI Job Opportunities

  • 🧑‍💻 Cohere - Senior Front-End Engineer

  • ⚖️ Harvey - Commercial Counsel

  • 🎨 Waymo - Creative Studio Lead

  • 🤝 Horizon3 - Sales Development Representative

📰 Everything else in AI today

Microsoft open-sourced BioEmu 1.1, an AI tool that can predict protein states and energies, showing how they move and function with experimental-level accuracy.

Luma AI launched Dream Lab LA, a studio space where creatives can learn and use the startup’s AI video tools to help push into more entertainment production workflows.

Mistral introduced Devstral Small and Medium 2507, new updates promising improved performance on agentic and software engineering tasks with cost efficiency.

Reka AI open-sourced Reka Flash 3.1, a 21B parameter model promising improved coding performance, and a SOTA quantization tech for near-lossless compression.

Anthropic announced new integrations for Claude For Education, bringing its assistant to Canvas alongside MCP connections for Panopto and Wiley.

SAG-AFTRA video game actors voted to end their strike against gaming companies, approving a deal that secures AI consent and disclosures for digital replica use.

Amazon secured AI licensing deals with publishers Condé Nast and Hearst, enabling use of their content in the tech giant’s Rufus AI shopping assistant.

Nvidia is reportedly developing an AI chip specifically for Chinese markets that would meet U.S. export controls, with availability as soon as September.

COMMUNITY

🎥 Join our next live workshop

Join our next workshop today at 4 PM EST with Dr. Alvaro Cintas, The Rundown’s AI professor. By the end of the workshop, you’ll confidently be able to install and use Gemini CLI to boost your productivity right from the command line.

RSVP here. Not a member? Join The Rundown University on a 14-day free trial.

See you soon,

Rowan, Joey, Zach, Alvaro, and Jason — The Rundown’s editorial team

Stay Ahead on AI.

Join 1,000,000+ readers getting bite-size AI news updates straight to their inbox every morning with The Rundown AI newsletter. It's 100% free.