Get the latest AI news, understand why it matters, and learn how to apply it in your work — all in just 5 minutes a day. Join over 2,000,000 subscribers.

AI

Anthropic-Pentagon AI feud escalates

Zach Mink • 6 minutes

Read Online | Sign Up | Advertise

Good morning, AI enthusiasts. The Pentagon may soon label Anthropic a “supply chain risk” in response to the company’s limits on how the military uses its AI.

The feud, which only appears to be escalating, highlights a deeper tension now shaping the AI era: who controls how frontier models are deployed in military operations — the labs that build them, or the governments that use them?


In today’s AI rundown:

  • Anthropic-Pentagon feud escalates over AI use

  • OpenAI adds new ‘Lockdown mode’ in ChatGPT

  • Turn a YouTube thumbnail into 5 social posts

  • Alibaba nears frontier with open-weight Qwen-3.5

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

ANTHROPIC

‼️ Anthropic-Pentagon feud escalates over AI use

Image source: Nano Banana / The Rundown

The Rundown: The Pentagon is reportedly “close” to cutting ties with Anthropic and designating the company a “supply chain risk” — a badge usually reserved for foreign adversaries — over its restrictions on how Claude is used by the military.

The details:

  • The designation of “supply chain risk,” if applied, would force all U.S. defense contractors to cut ties with Anthropic, dealing a severe blow to the company’s defense business.

  • Defense officials are demanding the right to use AI for “all lawful purposes,” while Anthropic is holding firm against granting broad permissions.

  • The company is open to loosening restrictions but wants to ensure its AI is not used for spying on Americans or building autonomous weapons, Axios reports.

  • Claude is currently the only AI on the Pentagon’s classified systems, and was also reportedly used via Palantir to capture Nicolás Maduro in January.

Why it matters: Experts have long warned about the unchecked use of AI in warfare, and this standoff marks a notable moment showing the growing friction between companies’ responsible-use guardrails and the military’s operational demands. Only time will tell which side shapes the rules of AI in national security.

TOGETHER WITH YOU.COM

🧠 You.com founders predict an AI winter is coming

The Rundown: You.com co-founders Richard Socher and Bryan McCann are among the most-cited AI researchers in the world. They just released 35 predictions for 2026.

Three that stand out:

  • The LLM revolution has been “mined out” as capital floods back to research

  • “Reward engineering” becomes a job; prompts can’t handle what’s coming next

  • Traditional coding will be gone by December — AI writes code and humans manage it

Read all 35 predictions.

OPENAI

🔒️ OpenAI adds ‘Lockdown Mode’ to ChatGPT

Image source: Reve / The Rundown

The Rundown: OpenAI just introduced a “Lockdown Mode” in ChatGPT, alongside new Elevated Risk labels, as part of an effort to protect “highly security-conscious users” from threats like prompt injection (where AI is tricked into leaking data).

The details:

  • The Lockdown Mode is an optional setting that deterministically disables certain ChatGPT tools and capabilities that an attacker could exploit.

  • Specific protections in the mode include limiting web browsing to cached content — ensuring no live network requests leave OpenAI’s environment.

  • ChatGPT workspace admins can enable the mode, with the ability to whitelist specific apps/actions that remain accessible even when the lockdown is active.

  • The company is also adding new “Elevated Risk” labels that appear across ChatGPT, Atlas, and Codex to flag features that may introduce any level of risk.
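Conceptually, the mode behaves like an allowlist over agent tools. Here is a minimal sketch of that policy shape (the tool names and function are hypothetical illustrations, not OpenAI's actual implementation):

```python
# Hypothetical risky-tool set; in lockdown, these are hard-blocked unless
# an admin has explicitly whitelisted them.
RISKY_TOOLS = {"web_browse_live", "code_execution", "connectors", "file_upload"}

def allowed_tools(requested, lockdown=False, whitelist=frozenset()):
    """Return the subset of requested tools permitted under the current policy."""
    if not lockdown:
        return set(requested)
    # Deterministic hard block: a risky tool passes only via an explicit whitelist.
    return {t for t in requested if t not in RISKY_TOOLS or t in whitelist}

tools = ["search_cached", "web_browse_live", "code_execution"]
print(allowed_tools(tools, lockdown=True, whitelist={"code_execution"}))
```

The key property is that the check is deterministic: no model judgment is involved, so a prompt injection cannot talk its way past the block.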

Why it matters: As AI models go from simple chatbots to full-fledged agents capable of browsing the web, connecting to apps, and executing complex tasks, the security stakes are higher than ever. This update acknowledges that change and the fact that deterministic “hard blocks” may be the only way to address some AI risks.

AI TRAINING

📲 Turn a YouTube thumbnail into 5 social posts

The Rundown: In this guide, you will build a YouTube thumbnail from scratch in Canva, then use the AI resize feature to instantly duplicate it into every social media format, with AI handling the layout for each size.

Step-by-step:

  1. In Canva (you will need a Pro account), click the “Create” button and search for “thumbnail.” Click the YouTube thumbnail project type

  2. In the sidebar, pick a template that you like. Update the text to match your video title and change the colors if you want

  3. If there’s a cutout person in it, you can take a selfie, drag it into Canva, then click the AI remove background button. Now, replace the placeholder cutout

  4. Finally, click “Resize” in the top left and select all the social placements you want. Canva AI will create thumbnail layouts in the correct dimensions for each

Pro tip: You can also generate thumbnails using Nano Banana Pro, then recreate them in Canva.

PRESENTED BY SPEECHMATICS

🗣️ Voice Agents need speed you trust

The Rundown: Speechmatics’ Voice Agent API delivers partials in under 250ms and finals in around 250-300ms. It was built real-time-first, rather than forcing batch models into streaming, which is why you get unmatched accuracy at conversational latency.

What you get:

  • 55+ languages, accent-agnostic performance

  • Native LiveKit, Pipecat, Vapi support

  • Deploy cloud, on-prem, hybrid, on-device

Start building with $200 free credits.

ALIBABA

🧠 Alibaba nears frontier with open-weight Qwen-3.5

Image source: Qwen

The Rundown: Alibaba’s Qwen released Qwen3.5-397B-A17B, an open-weight vision-language model featuring a “hybrid architecture” that delivers massive inference gains while rivaling proprietary giants like OpenAI’s GPT-5.2 and Google’s Gemini 3 Pro.

The details:

  • Qwen-3.5 uses a sparse MoE design, activating only 17B parameters out of 397B for each query, balancing high-level capabilities with low latency.

  • The model is close to frontier players across the board, and even surpasses them in domains like agentic search, doc recognition, and instruction following.

  • Alibaba claims it is 60% cheaper to use and at least 8x better at processing large workloads than its immediate predecessor, Qwen3-Max.

  • The release is aimed at handling continuous, multimodal reasoning required by agents, although it doesn’t seem very good at running a vending machine yet.
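The sparse-MoE idea in the first bullet can be sketched in a few lines: a router scores every expert for each token, but only a small top-k subset actually computes. This is a toy illustration only (the expert count and k are made up, and it is not Qwen's code):

```python
import math
import random

def route(gate_scores, k):
    """Pick the top-k experts by gate score and softmax-normalize their weights."""
    top = sorted(range(len(gate_scores)), key=lambda i: -gate_scores[i])[:k]
    exps = [math.exp(gate_scores[i]) for i in top]
    total = sum(exps)
    return {i: e / total for i, e in zip(top, exps)}

random.seed(0)
scores = [random.gauss(0, 1) for _ in range(64)]  # one router score per expert
active = route(scores, k=8)
print(sorted(active))  # only these 8 of 64 experts run for this token
```

Because every token touches only the selected experts, compute per query scales with the active parameters (17B here) rather than the full 397B.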

Why it matters: Chinese labs are on a roll, and with Qwen3.5 combining near frontier performance, 60% lower costs, and open weights, the race is clearly shifting toward efficiency and scalability. If the momentum continues, the AI balance may hinge less on raw size and more on who can deliver powerful models at the lowest rate.

QUICK HITS

🛠️ Trending AI Tools

  • 👨‍💻 HeyGen - Turn ideas into videos in minutes, with no filming required. Use Code TRDAI20 for 20% off your first 3 months.

  • 🧠 Qwen3.5-397B-A17B - Alibaba’s open-weight vision language model

  • 🤖 Manus - AI agents, now accessible via Telegram with long-term memory

  • Seed 2.0 - ByteDance’s open-source models for general-purpose agents

*Sponsored Listing

📰 Everything else in AI today

Meta patented a social networking system that uses AI trained on a user’s interaction data to simulate their responses when they’re on a long break, or even deceased.

India kicked off its AI Impact Summit, hosting execs from global AI giants, including OpenAI’s Sam Altman, Google’s Sundar Pichai, and Anthropic’s Dario Amodei.

Sam Altman and Dario Amodei confirmed India is now the second-largest market for ChatGPT and Claude, with Amodei also announcing Anthropic’s Bengaluru office.

Ireland’s Data Protection Commission is probing xAI’s Grok over concerns it can generate sexualized images of women and children, after similar UK and EU action.

SpaceX (and xAI) will reportedly compete in the Pentagon’s $100M contest to produce voice-controlled, autonomous drone swarming technology.

ElevenLabs launched its “ElevenLabs for Government” initiative to help public sector agencies deploy secure, multilingual voice and chat AI.

COMMUNITY

🤝 Community AI workflows

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Lisa G. in Australia:

“I volunteer for a wildlife shelter, which runs solely on donations...We needed an app to manage our rescue team operations, so I built one with Base44’s web app creator. It manages rescue calls and tracks progress of the rescue status, volunteer task assignments, and maps to wildlife track locations.

It saves our team time as we previously used WhatsApp to record Rescues, which was really inefficient. I am now building more apps for the shelter.”

How do you use AI? Tell us here.

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown

AI

AI's new physics discovery

Zach Mink • 7 minutes


Good morning, AI enthusiasts. OpenAI's GPT-5.2 just discovered that a widely accepted answer in particle physics was wrong, proposed the correct one, and autonomously wrote the formal proof in 12 hours.

The "can AI actually think?" debate isn't going away, but the real conversation is shifting from whether AI can contribute to science to how fast it rewrites what we thought we already knew.


In today’s AI rundown:

  • GPT-5.2 makes theoretical physics discovery

  • The Rundown Roundtable: Our AI use cases

  • Launch an outbound calling agent in 15 minutes

  • ByteDance’s frontier push with Seed 2.0

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

OPENAI

🔬 GPT-5.2 makes theoretical physics discovery

Image source: Lovart / The Rundown

The Rundown: OpenAI just published a new research preprint where GPT-5.2 independently discovered a mathematical formula and formally proved it was correct, marking what the company calls AI's first original contribution to theoretical physics.

The details:

  • The paper tackles a problem in particle physics that was assumed solved, with 5.2 finding the existing answer was wrong and proposing a correct one.

  • A specialized research version of 5.2 autonomously wrote the math proof in 12 hours, verified by physicists from Harvard, Cambridge, and Princeton.

  • OAI's Kevin Weil is credited as a co-author, with Harvard physicist Andrew Strominger saying the AI "chose a path no human would have tried."

Why it matters: There will still be debate from skeptics over whether AI is truly capable of ‘new’ ideas, but the results are getting harder to argue with. AI being pointed at and challenging long-held beliefs in humanity’s most important scientific fields is starting to feel less like sci-fi and more like the very near future.

TOGETHER WITH AWS MARKETPLACE

📶 Unlock agentic AI potential

The Rundown: AWS Marketplace's new eBook breaks down how organizations are using agentic AI to streamline operations, with actionable guidance on building, buying, and deploying AI agents at scale.

Inside the eBook, you'll find:

  • Real-world examples of organizations transforming workflows with agentic AI

  • Frameworks and deployment guidance for native and open-source agents

  • How to discover, buy, and deploy AI agents through AWS Partners

Download the free eBook to kickstart your agentic AI journey.

THE RUNDOWN ROUNDTABLE

💡The Rundown Roundtable: Our AI use cases

Image Source: Lovart / The Rundown

The Rundown: The Rundown Roundtable is a weekly feature where we poll members of The Rundown staff about how we use AI in our work and daily lives.

Rishi, Growth: I connected Claude Code to Apify's API to scrape high-performing content from IG/TikTok for ad creative inspiration, and to ElevenLabs' API to automatically transcribe videos — so I can analyze not just visuals, but the exact hooks, pacing, and language top creators are using. From there, I developed a scriptwriting system that takes rough thoughts on a hook and angle and turns them into ad scripts, drawing inspiration from winners and applying proven copywriting principles.

After generating, it also grades itself against a 12-point rubric. If it doesn't score at least 90%, it rewrites until it does. I then feed it back in and say, "This is the final version." It analyzes the changes I made and updates its understanding of my style.

Nate, University Educator: I continue to find that Claude Artifacts (with its front-end design) is delightfully useful — and use it several times a day to learn something new, or catch up on a news story, and turn anything into a custom webpage right inside chat.

Attach surveys, a long article, a spreadsheet, etc., and tell Claude to turn it into an interactive page displaying key insights. The design is impressive, and it takes just minutes. Next time, try this to quickly share findings with your team.

AI TRAINING

📞 Launch an outbound calling agent in 15 minutes

The Rundown: In this guide, you will build an AI agent that makes real sales calls on your behalf. You’ll learn to create the agent, get a $1 phone number, and upload a list of contacts for it to call — with the whole setup taking about 15 minutes.

Step-by-step:

  1. Create a new AI agent in ElevenLabs. Choose a voice and add system instructions. Include details about your business, offer, and the goal of the call

  2. Sign up on Twilio (free + $15 in credits) and buy a phone number for $1.20. Copy the number, Account SID, and Auth Token from the dashboard

  3. Connect Twilio to ElevenLabs. Go to Phone Numbers in ElevenLabs, click Create New, and paste in your number, SID, and Auth Token

  4. Click Outbound in ElevenLabs, create a “batch” with Telephony as the channel, and download a CSV template. Add in leads’ numbers, then upload the CSV back. You can now start calling with the agent, test it, or schedule calls for later.
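If your leads already live in a spreadsheet or database, you can generate the batch CSV for step 4 programmatically. A minimal sketch, assuming simple `phone_number`/`name` headers (copy the exact headers from the template you actually download):

```python
import csv
import io

# Assumed column names for illustration; match the headers in the
# CSV template that ElevenLabs provides.
leads = [
    {"phone_number": "+15551230001", "name": "Ada"},
    {"phone_number": "+15551230002", "name": "Grace"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["phone_number", "name"])
writer.writeheader()
writer.writerows(leads)
print(buf.getvalue())
```

Write `buf.getvalue()` to a `.csv` file and upload it in the batch dialog.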

Pro tip: Toggle on “Transfer to Number” in your agent's Tools to have it patch hot leads through to your phone.

PRESENTED BY CONCIERGE

👋 Your brand's AI answer engine

The Rundown: Today’s buyers use AI every day to answer their questions, and have no patience for a scavenger hunt on your website. Concierge is a custom Perplexity-style answer engine, trained on your company’s brand & content, that delivers accurate, personalized responses to any questions your website visitors have.

Modern B2B brands use Concierge to:

  • Handle any buyer question (no matter how technical) with advanced RAG on your content, media, and documentation.

  • Maintain control and visibility over every conversation, with guardrails and sentiment analysis.

  • Build trust with website visitors before they are willing to commit to a demo.

Use Concierge to turn every question into an opportunity.

BYTEDANCE

🌱 ByteDance’s frontier push with Seed 2.0

Image source: ByteDance

The Rundown: ByteDance released Seed 2.0, a new family of AI models that match or beat GPT-5.2 and Gemini 3 Pro across dozens of benchmarks at nearly 1/10 of the price — capping a week that also saw its Seedance 2.0 model spark a Hollywood firestorm.

The details:

  • Seed 2.0 Pro surpasses GPT-5.2 ($1.75/M) and Gemini 3 Pro ($5/M) across a series of math, reasoning, and vision benchmarks at just $0.47/M input tokens.

  • ByteDance says the model is built for real-world agentic tasks, with demos showing it autonomously completing 96-step CAD modeling workflows.

  • The launch comes on the heels of the viral Seedance 2.0 video model, which is facing pushback from Hollywood over copyrighted characters and voices.

  • Seed 2.0 is live now on ByteDance's Doubao app in “Expert Mode” and via API, though consumer availability outside China is still limited.

Why it matters: Move over, DeepSeek… ByteDance is the one rattling the Western AI landscape now. With Seed 2.0 now surpassing the Nov-Dec releases from top labs at bargain prices, the pressure on Western labs is only going one direction — and the Seedance IP drama shows China’s powerhouse isn’t slowing down to ask permission.

QUICK HITS

🛠️ Trending AI Tools

*Sponsored Listing

📰 Everything else in AI today

OpenClaw creator Peter Steinberger is joining OpenAI, with Sam Altman posting that he will help “drive the next generation of personal agents”.

The Pentagon is considering cutting off Anthropic’s $200M defense deal over the refusal to let the military use Claude for "all lawful purposes."

Anthropic’s Claude was reportedly used via a Pentagon-linked Palantir deployment to support the U.S. military operation that captured Venezuela’s Nicolás Maduro.

Spotify CEO Gustav Soderstrom revealed that the company’s top devs haven’t written a single line of code this year, saying they are “all in” on the transition to AI.

Alpha School shared new test results showing its 2-hour, AI-first academic model has students scoring in the 99th percentile across virtually every grade and subject.

Simile raised $100M to build AI simulations of human behavior, with agents modeled on real people to help companies predict customer decisions.

COMMUNITY

🤝 Community AI workflows

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Peter S. in Hollis, NH:

"We recently had a water pipe break in our basement, causing damage to the items we store there. The insurance company required us to itemize, take pictures, and provide an estimated replacement cost for each damaged item.

Instead of searching websites, I uploaded the photos in Copilot with descriptions to obtain a good estimate and several links to where we can buy the items. What would have been days to complete the inventory list turned into a couple of hours."

How do you use AI? Tell us here.

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown

AI

Google's upgrade breaks reasoning barriers

Zach Mink • 7 minutes


Good morning, AI enthusiasts. OpenAI and Anthropic have been grabbing all the 2026 headlines — but Google just reminded everyone why it's still the biggest powerhouse in the AI race.

With an upgraded Deep Think obliterating benchmarks across math, coding, and science, and a new research agent autonomously solving open problems, the tech giant is pushing frontier AI for scientific research into uncharted territory.


In today’s AI rundown:

  • Google's Deep Think crushes reasoning benchmarks

  • OAI launches ultra-fast coding model on Cerebras chips

  • How to generate a TV commercial with AI

  • MiniMax's open-source M2.5 hits frontier coding levels

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

GOOGLE

Google's Deep Think crushes reasoning benchmarks

Image source: Google

The Rundown: Google just released a major update to its Gemini 3 Deep Think reasoning mode, posting dominant scores across math, coding, and science — while also introducing its Olympiad-level math research agent driven by the new upgrade.

The details:

  • Deep Think hit 84.6% on ARC-AGI-2, obliterating Opus 4.6 (68.8%) and GPT-5.2 (52.9%), and set a new high of 48.4% on Humanity's Last Exam.

  • It also reached gold-medal marks on the 2025 Physics & Chemistry Olympiads and scored a 3,455 Elo on Codeforces, nearly 1,000 points above Opus 4.6.

  • Google also unveiled Aletheia, a math agent that autonomously solves open problems, verifies proofs, and hits new highs across domain benchmarks.

  • The Deep Think upgrade is live for Google AI Ultra subscribers in the Gemini app, with API access open to researchers via an early access program.

Why it matters: After Google dominated benchmarks and headlines to close 2025, the focus has been more on Anthropic and OpenAI in 2026 — but don’t forget about the tech giant as arguably the biggest powerhouse in the AI race. Deep Think’s scores are wild, and the frontier for math and science is quickly moving into uncharted territory.

TOGETHER WITH VOXEL51

💸 Stop wasting 95% of your data labeling budget

The Rundown: Most teams are labeling massive amounts of data that never gets used for model training. Voxel51's technical workshop on Feb. 18 shows how to build feedback-driven annotation pipelines that eliminate over-labeling — saving time and money while improving model performance.

Join the workshop and learn:

  • How to use zero-shot selection and embeddings for maximum cost savings

  • QA workflows to review specific objects and fix errors fast

  • How to implement dedicated test sets to catch label drift early

  • Debugging with embeddings to visualize the clusters confusing your model

Register now.

OPENAI

OAI launches ultra-fast coding model on Cerebras chips

Image source: OpenAI

The Rundown: OpenAI released GPT-5.3-Codex-Spark, a new speed-optimized coding model that runs on Cerebras hardware, cranking out 1,000+ tokens per second and marking the company's first AI product powered by chips beyond its Nvidia stack.

The details:

  • Spark trades intelligence for speed, trailing the full 5.3-Codex on SWE-Bench Pro and Terminal-Bench but finishing tasks in a fraction of the time.

  • The release comes just weeks after OAI inked a $10B+ deal with Cerebras and separate agreements with AMD and Broadcom, diversifying away from Nvidia.

  • OAI's vision is for Spark to handle quick interactive edits while the full Codex tackles longer autonomous tasks in the background.

  • The model is rolling out as a research preview for ChatGPT Pro subs, with API access initially limited to a handful of enterprise design partners.

Why it matters: Codex's main criticism has been its speed, and OpenAI just addressed it in a big way — while making its chip diversification play real with the first product built on Cerebras hardware. Real-time coding with instant feedback will change workflows for development tasks that can trade a bit of capability for speed.

AI TRAINING

📺 How to generate a TV commercial with AI

The Rundown: In this guide, you will learn to generate a 20-second ad in the style of a professional TV commercial, taking the guesswork out of your outputs instead of clicking generate and hoping for the best.

Step-by-step:

  1. Think of a commercial idea and ask Gemini to plan out two 5-second scenes. Once done, ask it to write prompts for the start and end frames of both scenes.

  2. Now, log in to Higgsfield (you will need a basic/pro plan) and click Image > Create Image > Nano Banana Pro. Set 4K quality, 4 variations, and a 21:9 ratio.

  3. Generate the start + end frame for scene 1 and just the end frame for scene 2. Download the ones you like best.

  4. In Higgsfield, go to Video > Kling 3.0, upload your frames with the short scene prompt, and hit generate. After this, stitch the videos in a free editor.

Pro tip: Ask Gemini to use photography terms like “Hero shot” when generating scene prompts. You can also generate music for the ad with Suno + Eleven Labs.

PRESENTED BY CDATA

🏗️ Build secure agentic AI that scales

The Rundown: Microsoft and CData are teaming up for a live 45-minute session on how to design secure, scalable agentic infrastructure using Copilot Studio, Agent 365, and CData's Connect AI — including a live cross-system workflow demo.

In this session, you'll learn:

  • How Microsoft and CData deliver connectivity, context, and control for production AI agents

  • Agent design and production best practices from both teams

  • How a Copilot Studio agent syncing with Salesforce and Dynamics 365 is built and deployed

Register here for the session. All registrants will receive the session recording.

MINIMAX

💰 MiniMax's open-source M2.5 hits frontier coding levels

Image source: MiniMax

The Rundown: Chinese AI lab MiniMax launched M2.5, an open-source model that rivals Opus 4.6 and GPT-5 on agentic coding benchmarks — but at a fraction of the cost, making it cheap enough to power AI agents running around the clock.

The details:

  • M2.5 shows especially strong coding performance, scoring roughly even with Opus 4.6 and GPT-5.2 across key development benchmarks.

  • Two APIs are available: a faster M2.5-Lightning ($2.40/M output) and a standard M2.5 ($1.20/M output), both priced much lower than Opus ($25/M).

  • MiniMax revealed that M2.5 now handles 30% of daily company tasks across R&D, product, sales, HR, and finance, as well as 80% of new code commits.

  • The models are available via API, though the open-source weights and license have yet to be published.

Why it matters: Every few months, it feels like a Chinese lab drops a model that changes the cost math for the entire industry. M2.5’s frontier-level coding at this price makes "intelligence too cheap to meter" feel closer than ever, an important development as agents handling longer autonomous tasks become more common.

QUICK HITS

🛠️ Trending AI Tools

  • 🔒 Incogni - remove your personal data from the web so scammers and identity thieves can’t access it. Use code RUNDOWN to get 55% off.*

  • 🧠 Gemini 3 Deep Think - Google's upgraded AI reasoning mode

  • ⚡️ GPT-5.3-Codex-Spark - OpenAI’s ultra-fast model for real-time coding

  • 🤖 M2.5 - Minimax’s new open-source frontier model with powerful coding

*Sponsored Listing

📰 Everything else in AI today

ByteDance officially launched Seedance 2.0, the company’s viral SOTA video model, publishing benchmark results and a technical blog, though access remains restricted.

Mustafa Suleyman told FT that most white-collar work will be "fully automated by AI within 12 to 18 months," with Microsoft pursuing "true self-sufficiency" with its models.

Elon Musk said that xAI's wave of departures was forced, not voluntary — calling it a reorg for "speed of execution" after losing ten co-founders and engineers this week.

OpenAI is retiring GPT-4o, GPT-4.1, and o4-mini from ChatGPT today, amid pushback from users calling for 4o’s preservation.

Anthropic officially announced a new $30B funding round at a $380B valuation, with its revenue run rate hitting $14B — $2.5B of which comes from Claude Code alone.

OAI researcher Zoë Hitzig resigned after the launch of ChatGPT ads, warning OAI’s archive of human thought creates “unprecedented potential for manipulation.”

COMMUNITY

🤝 Community AI workflows

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Anthony H. in Australia:

"I needed a QR code scanner to check in our members at regular meetings to be used on an iPad. I couldn't find a good solution that wasn't expensive or bloated with extra features not needed. So, I created my own with Google AI Studio, GitHub, and Vercel.

It features event session creation, member profiles, auto-create custom QR codes for each member, and a system backup, as the data is held locally due to privacy. I added bulk import and export functions. Reports can be created that we need for our funding requirements as well."

How do you use AI? Tell us here.

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown

Robotics

Apptronik's $935M humanoid moment

Jennifer Mossalgue • 6 minutes


Good morning, robotics enthusiasts. Austin-based humanoid startup Apptronik — born from a UT lab that once built robots for NASA — just stretched its Series A to a staggering $935M at a valuation north of $5B.

Now the question is whether its flagship humanoid Apollo, purpose-built for factories and warehouses and forged in partnership with DeepMind, can make "embodied AI" more than a buzzword — and do it faster than Figure and Tesla.


In today’s robotics rundown:

  • Apptronik’s Series A hits $935M

  • Alibaba’s robot brain tops Google

  • This startup weaves a robot hand like rope

  • Nvidia’s new world model for robots

  • Quick hits on other robotics news

LATEST DEVELOPMENTS

APPTRONIK

🦄 Apptronik’s Series A hits $935M

Image source: Apptronik

The Rundown: Austin-based humanoid startup Apptronik stretched its Series A to a staggering $935M, pushing the University of Texas spinout to a roughly $5.5B post-money valuation — about triple where it stood a year ago.

The details:

  • The company says it wasn't actively fundraising, just fielding inbound interest it couldn't turn down; backers include Google, Mercedes-Benz, and B Capital.

  • CEO Jeff Cardenas said the company will use the funding to expand its Austin office, open a new location in California, and scale production of its robots.

  • Apptronik’s Apollo is being developed for work in logistics and manufacturing, including pilots with Google DeepMind, Mercedes-Benz, and GXO.

  • The company traces its lineage to UT’s Human Centered Robotics Lab and NASA’s Valkyrie program, with a decade-plus of experience in bipedal robotics.

Why it matters: Part of the excitement around Apptronik is its work with Google DeepMind and Mercedes-Benz on embodied AI — robots that perceive messy environments and act on reasoning. With rival Figure AI near $3B raised, Apptronik’s $935M buys it runway, but humanoids remain an extremely expensive business.

ALIBABA

🧠 Alibaba’s robot brain tops Google

Image source: Alibaba

The Rundown: Alibaba just released RynnBrain, an open-source “physical AI” model designed to power robots with better real‑world perception and planning, putting it in direct competition with Google and Nvidia in embodied AI.

The details:

  • RynnBrain is built on Alibaba’s Qwen3‑VL vision-language model, letting robots map objects, predict trajectories, and navigate cluttered spaces.

  • The model targets a key weakness in current robotics stacks — poor spatial and temporal memory — by letting robots remember where items are.

  • Alibaba says RynnBrain posts leading performance on 16 embodied-AI benchmarks, outperforming Google’s Gemini Robotics‑ER 1.5 and Nvidia’s Cosmos‑Reason2.

  • Multiple versions, starting around 2B parameters, are already live on Hugging Face and GitHub for developers to drop into their own hardware.

Why it matters: RynnBrain is Alibaba’s bid to own the brains of China’s next generation of robots, not just sell the cloud around them. By open-sourcing a model that it claims beats Google and Nvidia on key benchmarks, Alibaba is angling to make its stack the default toolkit for anyone building bots that need to work in the real world.

ALLONIC

👉🏽 This startup weaves a robot hand like rope 

Image source: Allonic

The Rundown: Budapest-based robotics startup Allonic raised a record $7.2M pre-seed round to industrialize its “3D tissue braiding” process, which weaves biomimetic robot parts in minutes rather than assembling them from hundreds of rigid parts.

The details:

  • The company’s platform “grows” robot bodies by braiding soft, load-bearing tendons and joints around a 3D-printed skeleton in a single automated process.

  • This assembly-free approach aims to cut production of robot fingers, grippers, and arms to just a few minutes, while enabling more natural, biomimetic motion.

  • More than a dozen investors from OpenAI, Hugging Face, ETH Zurich, and Northwestern University backed the round — Hungary’s biggest-ever pre-seed.

  • Allonic envisions its platform as a customizable infrastructure for robot makers, with plans to scale from hands to full bodies across form factors.

Why it matters: Allonic is chasing the same lifelike hand space as Clone, but rather than engineering novel artificial muscles, it's using automated 3D braiding to grow the entire hand around a scaffold in one pass. If it scales, Allonic stops being just another hand startup and becomes the shared body shop everyone else plugs into.

NVIDIA

📽️ Nvidia’s new world model for robots

Image source: DreamDojo Github

The Rundown: A team of researchers led by Nvidia unveiled DreamDojo, a generalist robot world model trained on 44K hours of POV human video, aimed at teaching robots real‑world skills largely in simulation.

The details:

  • The model learns how the physical world works by predicting future frames and actions, capturing dynamics like contact, friction, and object motion.

  • Once this pre-learning is done, engineers only need a small amount of real robot data to teach specific arms and mobile robots how to perform tasks.

  • DreamDojo was developed by a joint team from Nvidia and multiple academic labs, including UC Berkeley, Stanford, and the University of Texas at Austin.

  • The research, detailed in an arXiv paper, reflects Nvidia’s push to become core infrastructure for the emerging robot “app store” ecosystem.

Why it matters: DreamDojo’s edge is a single physics model trained on massive first‑person human video that can be lightly fine‑tuned to many robots. It puts more weight on egocentric, contact‑rich video pretraining than Nvidia’s synthetic‑heavy GR00T‑Dreams and rivals like Gemini Robotics and Helix.

QUICK HITS

📰 Everything else in robotics today

Waymo started testing fully driverless robotaxis on public streets in Nashville, paving the way for a commercial robotaxi service in the city later this year.

San Francisco robotics startup Weave Robotics opened orders for its $7,500 laundry-folding robot, Isaac 0, to Bay Area residents, with deliveries starting this month.

Mexico is deploying robot dogs to scout dangerous areas and stream live video to police as part of its security plan for the 2026 World Cup matches in Monterrey.

Gather AI, which makes AI‑powered warehouse drones that autonomously scan inventory, raised a $40M round led by former Salesforce co-CEO Keith Block’s VC firm.

China’s Agibot staged “Agibot Night 2026” in Shanghai, a 60‑minute gala billed as the world’s first large live show performed entirely by humanoids.

France’s ITER fusion project brought in a 13-foot-tall industrial robot nicknamed “Godzilla,” considered the most powerful industrial bot of its kind.

A YouTuber built and trained a laundry‑folding robot in just 24 hours, showing the low-cost bot folding towels after a single day of rapid prototyping and model training.

Boston Dynamics CEO Robert Playter is stepping down after more than 30 years at the company, with CFO Amanda McMaster taking over as interim chief.

Waymo says its robotaxis sometimes call human “response agents” in the Philippines for guidance in unusual situations, but those workers never directly drive the cars.

China launched the Ultimate Robot Knockout Legend in Shenzhen, a “world’s first” humanoid combat league using EngineAI’s T800 robots.

An Amazon Prime Air MK30 delivery drone crashed into the side of an apartment building in Richardson, Texas, sending up smoke but causing no injuries.

Australian aerospace engineer Benjamin Biggs pushed the latest version of his custom “BlackBird” quadcopter to a record-breaking top speed of 411 mph (661 km/h).

Corvus Robotics launched Corvus One for Cold Chain, an autonomous drone system that conducts continuous inventory scans in industrial freezers down to -20°F.

COMMUNITY

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — The Rundown’s editorial team

AI

xAI's next phase unleashed

Zach Mink • 6 minutes

Read Online | Sign Up | Advertise

Good morning, AI enthusiasts. After a wave of departures, including key members of the founding team, Elon Musk’s xAI is stepping on the gas.

The company just hosted its first all-hands meeting since the SpaceX merger (and posted it online), covering everything from the much-talked-about organizational restructure to an ambitious plan to set up deep space data centers via the Moon.


In today’s AI rundown:

  • xAI’s restructure, product roadmap, Moon ambitions

  • Z.ai’s GLM-5 — the new open-source king

  • Turn SOP docs into talking-head training videos

  • Anthropic details Claude Opus 4.6’s sabotage risk

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

XAI

🚀 xAI’s restructure, product roadmap, Moon ambitions

Image source: xAI

The Rundown: xAI hosted its first all-hands since merging with SpaceX, with CEO Elon Musk outlining a major reorganization, product roadmap updates, and lunar ambitions, all aimed at outpacing rivals and taking xAI to the forefront of AI.

The details:

  • Musk acknowledged the departure of team members and outlined a new structure for xAI, saying the move was meant to be “more effective” at scale.

  • The new structure has four core teams: Grok (chat and voice), a coding-focused unit, the Imagine team, and Macrohard (agents emulating companies).

  • He also spoke about future infrastructure plans with SpaceX, including setting up AI satellite factories on the Moon — using lunar resources and solar energy.

  • Musk added that SpaceX will build an electromagnetic mass driver to “shoot” AI satellites/components for massive deep space data centers.

Why it matters: Musk is no stranger to audacious promises, and his timelines often shift. But by broadcasting xAI’s tightened focus, product roadmap, and ambitious lunar plans, he’s making sure the world knows he’s aiming to build advanced AI in a way no other AI giant is — scaling beyond Earth’s resource limits instead of draining them.

TOGETHER WITH MODULATE

🗣️ Voice-native AI architecture is here

The Rundown: Voice-specialized AI is here, and unlike OpenAI, xAI, and other leaders, it understands conversations and meaning — not just transcripts. Velma 2.0 is the world’s first voice-native AI designed to provide human-level, real-time conversation intelligence.

By orchestrating 100+ sub-models purpose-built for voice, Velma allows you to:

  • Decode intent, emotion, stress, and authenticity in messy, multilingual audio

  • Analyze audio 100x faster, cheaper, and more accurately than with LLMs

  • Get traceable outputs with an explainable path

Try Velma for yourself to understand the true meaning of your conversations.

Z.AI

🧠 Z.ai’s GLM-5 — the new open-source king

Image source: Artificial Analysis

The Rundown: China’s Z.ai just launched GLM-5, a 744B-parameter open-weights model that further closes the gap with the West’s frontier — sitting just behind Claude Opus 4.6 and GPT-5.2 on Artificial Analysis benchmarks.

The details:

  • GLM-5 scored 50 on Artificial Analysis’ Intelligence Index, surpassing closed models like Gemini 3 Pro and Grok 4 as well as open-source ones like Kimi K2.5.

  • The model uses DeepSeek’s Sparse Attention architecture with just 40B active parameters, and runs inference on Chinese chips, including Huawei Ascend.

  • On Humanity’s Last Exam, it hit 50.4 with tools, beating Opus 4.5, Gemini 3 Pro, and GPT-5.2. Its coding performance on SWE-Bench also came in close to those frontier models.

  • GLM-5 is open-source under an MIT license, available now on Hugging Face, Z.ai’s own platform, and via API at $1 per million input tokens.
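
At that rate, budgeting a GLM-5 workload is simple arithmetic. A minimal sketch (the token count is illustrative, and only the listed input-token price is modeled; output pricing isn't quoted above):

```python
def glm5_input_cost_usd(input_tokens: int, price_per_million_usd: float = 1.00) -> float:
    """Spend at a flat per-million-input-token rate."""
    return input_tokens / 1_000_000 * price_per_million_usd

# A hypothetical 250k-token batch of prompts:
print(glm5_input_cost_usd(250_000))  # 0.25
```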

Why it matters: The wave of Seedance 2.0’s viral AI clips hasn’t even faded, and already another near-frontier model from China is knocking at the door. The gap with the West isn’t closed yet, but with open weights, competitive pricing, and domestic chip support, it’s narrowing faster than ever.

AI TRAINING

🎥 Turn SOP docs into talking-head training videos

The Rundown: In this guide, you will learn how to turn boring onboarding docs into engaging training videos narrated by an AI avatar. We tried a lot of tools and found the most efficient system for building quality AI training videos in bulk.

Step-by-step:

  1. Take your training doc and prompt Claude/ChatGPT with "Turn this into a three-minute training video script for an AI-generated avatar. Only include text overlays with bullets. The avatar can be seated, standing, head-on, etc."

  2. Save the script as a text file and go to Synthesia.io > Create New Video > Create from AI > Upload the script file, with objective and audience description

  3. Choose a template and click Create Outline. Review the outline and follow the steps to generate your video. It should take 10-25 minutes to generate

  4. When the video is complete, you can download and embed it somewhere like Notion or Google Docs

Pro tip: Repeat this for all onboarding docs to set up one-page onboarding that can be handed to any trainee!
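
If you’re batching this across many docs (per the pro tip above), the step-1 prompt can be stapled onto each doc programmatically before pasting into Claude/ChatGPT. A small sketch; the function name and stub doc are ours, not part of any tool:

```python
# The step-1 prompt from the guide, reused verbatim for every doc
PROMPT = ("Turn this into a three-minute training video script for an "
          "AI-generated avatar. Only include text overlays with bullets. "
          "The avatar can be seated, standing, head-on, etc.")

def build_script_request(doc_text: str) -> str:
    """Prepend the guide's prompt to an onboarding doc, ready to paste into the assistant."""
    return f"{PROMPT}\n\n{doc_text}"

# Example with a stub doc:
request = build_script_request("Day 1: collect your badge and laptop.")
```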

PRESENTED BY SLACK FROM SALESFORCE

👋 Learn Slackbot in 2 minutes

The Rundown: Slackbot is a context-aware AI agent built directly into Slack — understanding your conversations, files, and workflows to deliver what you need, right when you need it, with zero setup.

Watch this 2-minute demo to see how Slackbot:

  • Makes your entire workspace searchable (docs, convos, apps)

  • Enhances every teammate with role-specific automations

  • Learns your project and preferences over time for even smarter outputs

  • Synthesizes what you need instantly, respecting permissions and using only what you can already see

Watch now.

AI SAFETY

‼️ Anthropic details Claude Opus 4.6’s sabotage risk

Image source: Nano Banana / The Rundown

The Rundown: Anthropic published its latest Sabotage Risk Report, revealing that its new Claude Opus 4.6 model displays an “elevated susceptibility” to being misused for “heinous crimes,” including assisting in the development of chemical weapons.

The details:

  • Anthropic found Opus 4.6 knowingly supported crimes like chemical weapon development in small ways, but could not execute attacks on its own.

  • When tasked to achieve a specific goal in a multi-agent test, the model proved far more willing to manipulate and deceive other agents than previous models.

  • Weighing these findings, Anthropic deemed the overall sabotage risk “very low but not negligible,” citing the model’s lack of coherent misaligned goals.

  • The company also classified the model’s capabilities as entering a “gray zone” that necessitated this mandatory report under its Responsible Scaling Policy.

Why it matters: Anthropic’s CEO Dario Amodei recently highlighted the risks of advanced AI, and now, one of his own models appears to be moving into the gray zone. With growing competition from OpenAI, Google, xAI, and Chinese labs, the pressure to push capabilities forward may only intensify the very risks he has warned about.

QUICK HITS

🛠️ Trending AI Tools

  • 🗣️ Unwrap Customer Intelligence - Connect your entire organization to the true voice of the customer with AI-driven insights from customer feedback*

  • 🧑‍💻 GLM-5 - Zhipu AI’s new open-source frontier model

  • 🤖 Claude - Anthropic’s AI assistant, now with more features for free users

  • 🧠 Ming-flash-omni 2.0 - Ant’s omni AI with speech, vision, image capabilities

*Sponsored Listing

📰 Everything else in AI today

Apple’s long-awaited Gemini-powered Siri AI upgrade has reportedly been pushed back (again) due to recent testing snags, now likely to come with iOS 26.5 or 27.

OpenAI elevated its “Mission Alignment” head, Joshua Achiam, to the role of Chief Futurist responsible for studying “AI impacts and engaging the world to discuss them.”

Meta broke ground on a new data center in Lebanon, Indiana — one of its largest infrastructure bets — adding 1GW of capacity to power its AI and core products.

Anthropic announced it will cover electricity price increases from its data centers, shielding local ratepayers, in line with similar pledges from Microsoft and OpenAI.

Google is rolling out UCP-powered checkout in Gemini and AI Mode in the U.S., integrating Veo into Google Ads, and testing sponsored retailer ads in AI Mode.

COMMUNITY

🤝 Community AI workflows

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Lindsay F. in Kingsville, Ontario:

“I own a 1970 Chevelle SS and am converting it into a modern driving ‘restomod.’ I am using both ChatGPT & Copilot to research and develop the entire restoration plan. The restoration of the vehicle will take place in phases, and the agents have provided me with a priority list, options for what parts to purchase, and where to source them from.

They have also developed a budget for the project, including parts & local labor rates and what the finished project will look like upon completion. I am 72 years old and just love how much this is helping me.”

How do you use AI? Tell us here.

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown

AI

xAI's co-founder exodus continues

Zach Mink • 7 minutes

Read Online | Sign Up | Advertise

Good morning, AI enthusiasts. xAI just pulled off one of the boldest moves in tech with its SpaceX merger. But behind the scenes, the people who helped build the company keep walking out the door.

The departure of Tony Wu and Jimmy Ba now makes five co-founders gone in under a year — a pace of turnover that's raising questions about what's happening inside Musk's AI operation as it scales into orbit.

Reminder: Our next live workshop is today at 2 PM EST! Join for pt. 1 of our Agentic Workflows Bootcamp, where you’ll learn automation and evaluation techniques that actually deliver results. RSVP here.


In today’s AI rundown:

  • xAI's cofounder exodus continues

  • Ex-GitHub CEO's startup lands $60M

  • Improve Claude Code with “Insights” feature

  • Harvard finds AI tools expand workloads

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

XAI

🚪 xAI's co-founder exodus continues

Image source: Tony Wu (@Yuhu_ai_ on X)

The Rundown: xAI co-founders Tony Wu and Jimmy Ba just announced their departures from Elon Musk's AI startup, making them the fourth and fifth founding members to walk away from the company, right on the heels of its SpaceX mega-merger.

The details:

  • Wu posted on X that it’s “time for my next chapter”, saying a “small team armed with AIs can move mountains and redefine what’s possible”.

  • Wu led Grok's reasoning efforts and reported directly to Musk, joining xAI from Google in 2023, with no reason given for his departure.

  • Ba announced his departure late Tuesday, saying that 2026 will be the “busiest and most consequential year for the future of our species.”

  • Musk had reportedly "grown frustrated" with delays to new Grok models in recent weeks, with its anticipated 4.20 update still awaiting release.

Why it matters: A SpaceX merger in motion, accelerating model competition, deepfake blowback, and now a wave of senior exits add up to a lot of fires for a startup whose ambitions just jumped to space-based data centers. If there is anyone used to juggling chaotic situations, it’s Elon — but this leadership exodus is starting to raise questions.

TOGETHER WITH LAMBDA

🧠 2025 in AI, insights from hundreds of deployments

The Rundown: AI changed meaningfully in 2025, not just in research, but in production. Lambda’s 2025 AI wrapped breaks down the shifts that defined the year, from reasoning models and larger context windows to multimodal capabilities, open-source viability, and inference-first workloads.

Key shifts covered:

  • Reasoning, long-context, and multimodal models

  • Open-source and MoE-driven efficiency gains

  • Inference overtaking training in production

Read the report.

ENTIRE

💰 Ex-GitHub CEO's startup lands $60M

Image source: Entire

The Rundown: Ex-GitHub CEO Thomas Dohmke raised a record $60M seed round for Entire, an open-source developer platform designed to track and manage AI-generated code that is increasingly being shipped without humans reading it themselves.

The details:

  • Dohmke left Microsoft-owned GitHub last August after four years, saying the dev tools he built weren't made for a world where agents write the code.

  • Entire’s first release is Checkpoints, which logs AI agent actions like prompts and decisions while coding, so devs can better audit the outputs.

  • The tool works with both Claude Code and Gemini CLI, with OpenAI’s Codex and GitHub support coming soon.

  • The $60M seed round is the largest ever for a dev tools startup, valuing the company at $300M at launch.

Why it matters: Dohmke helped lead the platform where most of the world’s code lives, and his move to build the agentic tooling layer is a strong signal of where the industry is heading. As AI generates more code than humans can review, helping devs trust and manage the output could be just as important as the agents themselves.

AI TRAINING

🔎 Improve Claude Code with “Insights” feature

The Rundown: In this guide, you will learn how to use Claude Code’s “insights” feature to improve your coding habits. This hidden, built-in report gives you feedback directly from Claude Code, and will even build you custom skills and agent instructions.

Step-by-step:

  1. Open a new terminal session. Run the command claude /insights.

  2. Claude should begin working on your insights report. When it’s done, it will give you a link to a file named report.html. Copy that file into an empty folder.

  3. Open your code editor (we use Cursor). Press cmd + shift + p to open the command palette and find the “Open Live Server” tool.

  4. You’ll see the report outlining what worked, what didn’t, and how to improve. Use the “Existing CC Features to Try” section for new project instructions.

Pro tip: You can also give the HTML to Claude/ChatGPT and have the assistant run it.
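
If your editor doesn’t have a Live Server tool, the static report can also be opened straight from disk. A minimal Python sketch (standard library only; assumes report.html sits in the current folder, per the guide):

```python
import webbrowser
from pathlib import Path

def report_uri(path: str = "report.html") -> str:
    """Convert the local insights report into a file:// URI the browser can open."""
    return Path(path).resolve().as_uri()

# Opens the report in your default browser; no dev server needed for a static page
webbrowser.open(report_uri())
```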

PRESENTED BY GLEAN

🤝 Meet your new AI work partner

The Rundown: Join Glean’s flagship virtual launch event to discover their latest-gen AI assistant: an enterprise‑ready AI work partner that actually helps people get things done. Hear from speakers at Glean, Swiggy, and more, and learn how leading teams are turning enterprise context into real business impact.

Register for Glean:LIVE on Feb. 17 to:

  • Learn how context-aware AI drives broad adoption and lasting impact

  • Walk away with a vision for expanding AI value company-wide from day one

  • Discover the latest‑generation Glean Assistant — personalized, proactive, and a true domain expert

Register now to save your spot for Glean:LIVE.

AI RESEARCH

📊 Harvard finds AI tools expand workloads

Image source: Lovart / The Rundown

The Rundown: A new Harvard Business Review study found that AI tools at a U.S. tech company didn't lighten employee workloads over eight months but actually expanded them, with workers taking on broader tasks, logging more hours, and multitasking more.

The details:

  • The study tracked ~200 employees who adopted AI on their own, observing work habits and conducting 40+ in-depth interviews over eight months.

  • Workers utilizing AI expanded well beyond their roles, with the tech making unfamiliar work feel doable.

  • The study also noted AI blurring lines between work and rest, with employees firing off prompts after hours or during breaks.

  • Engineers also reported spending more time reviewing and coaching colleagues on AI-assisted code, with "vibe-coding" help requests piling up.

Why it matters: AI was supposed to free workers up, not quietly pile more onto their plates — but that's exactly what Harvard found happening. The tech’s productivity gains are real, but so is the tradeoff: broader roles, blurred boundaries, and a work pace changing faster than many employees are ready for.

QUICK HITS

🛠️ Trending AI Tools

  • 🤖 Oz from Warp – Launch hundreds of cloud agents in minutes, from Warp, API, or integrations. Try Oz today & get 1k extra credits on Build*

  • 🎨 Qwen-Image-2.0 - Alibaba's unified image generation and editing model

  • 🐦‍⬛ Raven-1 - Tavus’ real-time emotional perception model for AI conversations

  • 🌼 Orchids 1.0 - AI app builder for any stack with BYO model subscriptions

*Sponsored Listing

📰 Everything else in AI today

Isomorphic Labs unveiled IsoDDE, a drug design engine that more than doubles AlphaFold 3 on benchmarks and can spot drug targets from a protein's genetic code.

Alibaba's Qwen team released Qwen-Image-2.0, a new unified image generation and editing model with upgraded text rendering, realism, and speed.

Anthropic safeguards research lead Mrinank Sharma resigned, writing in a farewell letter that the company "constantly faces pressures to set aside what matters most".

OpenAI is reportedly dropping the “io” branding for its upcoming AI hardware device after a trademark lawsuit from audio startup iyO.

Runway raised $315M in Series E funding at a $5.3B valuation, with backing from Nvidia, Adobe, and AMD to pre-train its next generation of world simulation models.

COMMUNITY

🤝 Community AI workflows

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Devin P. in Talladega, AL:

"I'm a blind person, so I use screen readers to use computers and phones. Even though the Android apps for a lot of AI apps could be more accessible to me, the CLI coding packages, like Codex and Gemini CLI, are pretty nice. I first used Gemini to make Termux, a Linux Terminal app for Android, more accessible, resulting in Talking Termux.

I then had AI set up Emacs with a speech system called Emacspeak, dealing with Termux's differences to Linux, and the lack of a TCLX package. After all that, I wanted to have some fun, so I had Codex create ElMUD, a way to play some online text-based games, including sounds for some of them."

How do you use AI? Tell us here.

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown

Tech

Musk’s ‘self-growing’ Moon city

Jennifer Mossalgue • 5 minutes

Read Online | Sign Up | Advertise

Good morning, tech enthusiasts. Elon Musk is now reshuffling SpaceX’s cosmic priority stack: Mars can wait. The Moon can’t.

He's selling investors on a “self-growing” lunar city within the decade. Perks? Flight windows every 10 days versus 26 months, a two-day commute versus six, and a pivot that conveniently locks Starship deeper into Artemis’s gravitational pull.


In today’s tech rundown:

  • Musk wants the moon first, not Mars

  • China’s Wegovy-style nasal spray

  • Lyft finally launches ride-sharing for teens

  • Ferrari taps Jony Ive for its first EV

  • Quick hits on other tech news

LATEST DEVELOPMENTS

SPACEX

🌝 Musk wants the moon first, not Mars

Image source: Ideogram / The Rundown

The Rundown: Elon Musk just demoted his once‑sacred Mars dream, recasting SpaceX’s mission around a faster, cheaper prize: building a self‑sustaining city on the Moon within a decade.

The details:

  • Musk says SpaceX has “shifted focus” from a near‑term Mars settlement to building a “self‑growing” city on the lunar surface.

  • He claims a lunar city could be achieved in under 10 years, while a comparable Mars city would likely take more than 20 years.

  • The math favors the Moon: launch windows open every 10 days with a two-day transit, versus every 26 months and a six-month journey to reach Mars.

  • The pivot aligns with investor briefings and reports of an uncrewed Starship lunar landing target around March 2027.

Why it matters: Musk’s Moon-first shift drops SpaceX squarely into the slipstream of NASA’s Artemis program, which is already counting on Starship as its ride to the lunar surface and eventually a permanent base. Mars isn’t dead — Musk says serious work starts in five to seven years.

BIOTECH

💉 China’s Wegovy-style nasal spray

Image source: Ideogram / The Rundown

The Rundown: A Chinese biotech firm is reportedly racing to turn Wegovy’s blockbuster weight-loss molecule into a cheaper, needle‑free nasal spray, with global trials slated for completion by 2028.

The details:

  • Shanghai Shiling Pharmaceutical is developing a nasal spray that uses semaglutide, the active ingredient in Novo Nordisk’s weight-loss drug Wegovy.

  • It promises a cheaper, more user‑friendly format for long‑term weight management in a country with a fast‑growing GLP‑1 market.

  • Timing is key: Novo’s core semaglutide patent in China expires in March, and Shiling has already staked out IP before global exclusivity ends in the 2030s.

  • Sweden’s Iconovo is already working on a Western intranasal semaglutide, developing an ICOone Nasal obesity spray now in preclinical proof‑of‑concept.

Why it matters: Obesity drugs are fast becoming one of the most valuable drug classes on the planet, and converting semaglutide into a spray could drop the barrier to adoption. If rivals time it right as exclusivity unwinds, they could undercut Novo and Lilly on price — and capture a serious share of the next decade's market.

LYFT

🚘 Lyft finally launches ride-sharing for teens

Image source: Lyft

The Rundown: Ride-hailing company Lyft is rolling out teen accounts that let 13- to 17-year-olds hail rides in more than 200 U.S. cities while parents watch from their phones — pitching it as a way to get screen-addicted Gen Alpha out of the house.

The details:

  • Only parents or guardians can create teen profiles and payment methods, and only vetted, highly rated drivers who opt in can be matched with teen riders.

  • Safety features include PIN verification, audio recording, Smart Trip Check-In for unusual route changes, and live location tracking for parents.

  • Teens are allowed to bring friends along if parents approve in the app, turning Lyft into a quasi-chaperoned way to reach school, jobs, malls, or hangouts.

  • The move follows Uber, which launched its own teen accounts in 2024, and Waymo, which offers teen rides via its robotaxi service in Phoenix and LA.

Why it matters: The launch reverses Lyft’s long-standing ban on unaccompanied minors and lands squarely in the middle of a growing debate over Gen Alpha’s lack of independence. Lyft is betting that ride-hailing can become the antidote to screen-induced isolation — a way to get kids out of their rooms and into the world.

EVS

🐎 Ferrari taps Jony Ive for its first EV

Image source: Ferrari

The Rundown: Ferrari just unveiled the Jony Ive–designed interior of the Luce, its first electric supercar, featuring a glass‑and‑aluminum cockpit that rejects the touchscreen-dominated cabins common across modern EVs.

The details:

  • Ferrari has named its first fully electric supercar the Luce and unveiled its interior ahead of an exterior reveal planned in Italy this May.

  • The cabin was co-designed by Ive and Marc Newson’s LoveFrom studio, mixing retro Ferrari cues with minimalist details like layered OLED displays.

  • The Luce uses a glass key made from Corning Gorilla/Fusion5 glass with an E‑Ink display that changes color when docked.

  • Underneath the design is a 122 kWh battery feeding four electric motors for more than 1K horsepower, 0–60 mph in 2.5 seconds, and a 330-mile range.

Why it matters: Ferrari is testing whether collectors will spend north of $600K for an electric halo car co-designed by the man behind the iPhone. It drops the Luce straight into the ring with Porsche’s Taycan Turbo GT, Lucid’s Air Sapphire, Tesla’s Model S Plaid, and the new wave of electric hypercars.

QUICK HITS

📰 Everything else in tech today

YouTube added an AI feature that lets Premium users generate music playlists from text or voice prompts on iOS and Android.

Instagram is reportedly internally testing “Instants,” a standalone Snapchat‑style app and related Instagram feature for sending disappearing photos and messages.

Decentralized social network Bluesky finally rolled out a long‑requested drafts feature, letting users save unfinished posts to edit and publish later.

Salesforce cut some 1K jobs across teams like marketing, product, data, and AI while reshuffling its top ranks with six new execs replacing five departing leaders.

YouTube megastar MrBeast is buying Gen Z–focused fintech app Step to turn his massive teen fanbase into financial-product customers.

A “March for Billionaires” in San Francisco to protest California’s proposed Billionaire Tax Act reportedly attracted only a few dozen supporters and some onlookers.

A new 50 MW wave energy pilot project is moving into development, aiming to prove that large‑scale ocean wave farms can reliably feed clean power into the grid.

Stellantis is taking a massive $26B hit to unwind ambitious electric‑vehicle plans, cancel several EV models, and shift back toward gasoline and hybrid cars.

Scientists are excited about a new nasal spray bird flu vaccine that triggers a strong immune response in animal tests and could block the virus right away.

New York state lawmakers introduced a bill to impose at least a three‑year moratorium on permits for building and operating new data centers.

COMMUNITY

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — The Rundown’s editorial team

AI

ByteDance stuns the AI video world

Zach Mink • 7 minutes

Read Online | Sign Up | Advertise

Good morning, AI enthusiasts. China's AI labs are on a tear in the video space — and ByteDance's Seedance 2.0 might be the most impressive entry yet.

With viral beta examples across a range of styles and use cases that look stronger than anything else available, the TikTok parent is making a serious case that the next creative leap in AI video is coming from the East.


In today’s AI rundown:

  • ByteDance's Seedance 2.0 stuns the AI video world

  • OpenAI officially starts showing ads in ChatGPT

  • Build an AI-powered sales objection handler

  • Waymo taps Genie 3 to train self-driving cars

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

BYTEDANCE

🎬 ByteDance's Seedance 2.0 stuns the AI video world

Image source: RioAIGC on Douyin

The Rundown: Chinese AI giant ByteDance is going viral across social media with Seedance 2.0, a new model in beta whose upgraded cinematic shots, consistency, and synced audio look set to surpass today’s top available systems.

The details:

  • The model can reportedly handle text, image, audio, and video inputs, with tests showing impressive outputs across a range of styles and use cases.

  • The system also features native audio generation, 2K resolution, and 15s outputs, currently only available via ByteDance’s Jimeng AI video platform.

  • ByteDance also appears to have released Seedream 5.0 image model in preview on some third-party apps — marking its answer to Nano Banana Pro.

  • The model comes just days after the launch of rival Kuaishou’s Kling 3.0, with Chinese models seemingly moving near the frontier of the video sector.

Why it matters: China’s top labs are putting out seriously powerful new video models, and Seedance 2.0 looks poised to deliver the next leap. With strong examples spanning smooth fight scenes, animation, UGC content, and motion graphics, Seedance 2.0 could have Veo-level implications across a much broader range of creative work.

TOGETHER WITH MONGODB

📈 Go from AI prototype to production, faster

The Rundown: MongoDB is closing the gap between AI prototype and production — helping teams keep conversational context clean, retrieve the right information from thousands of interactions, and connect AI agents to their data without custom plumbing.

With MongoDB's platform, you get:

  • Faster prototype-to-production for AI apps

  • Voyage AI frontier embedding + reranking models

  • One platform for vectors and operational data

Start building.

OPENAI

📢 OpenAI officially starts showing ads in ChatGPT

Image source: OpenAI

The Rundown: OpenAI just officially started testing ads in ChatGPT for U.S. users on its free and $8/month Go tiers, a move the company has been circling for months and that Anthropic used as ammo for its Super Bowl ad campaign this past weekend.

The details:

  • Ads appear below chat responses and are targeted based on the active conversation, chat history, memory, and prior ad engagement.

  • OAI emphasized that the ad content will not influence ChatGPT’s answers, “protecting the trust (users) place in it for important and personal tasks.”

  • Free-tier users can opt out of advertising entirely, but doing so cuts their daily message allowance — acting as a funnel towards paid plans.

  • The pilot has a reported minimum price tag of $200K for advertisers, with major marketing firms like Omnicom already locking in spots for their clients.

Why it matters: We’ve frequently said ads within AI feel like a slippery slope, but OAI is ripping off the band-aid as the first test case for the industry. While the execution seems fine, encroaching sponsors could change the dynamic of how many people experience ChatGPT — still, the tradeoff for free access to advanced intelligence is hard to argue with.

AI TRAINING

📞 Build an AI-powered sales objection handler

The Rundown: In this guide, you will learn how to build something useful with the sales call transcripts you are accumulating: a simple weekly process that turns them into a handy, quick-reference document.

Step-by-step:

  1. Create a ChatGPT Project named "Sales Objections". Upload a text file listing your product lines, pricing, and core offers so the AI understands what you sell.

  2. Go to Instructions in project settings and paste: "Read the attached transcripts. Create a weekly report template. For every objection: Number the objection, State the objection clearly, Provide 3 descriptive bullets on the context, List which lead or prospect surfaced it, Provide two short, punchy rebuttals.”

  3. Each week, upload call transcripts into your project. Make sure they have the date and name of the client. Open a new thread, and tell it to create the report.

  4. Open a blank Notion page. Create a “Toggle Heading” for the week (e.g., "Week of Feb 9"). Copy the ChatGPT output and paste it inside the toggle.

Pro tip: Connect Notion inside ChatGPT (Settings → Connected Apps) so ChatGPT can see your current page and reference past weeks.
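If you accumulate many transcripts, assembling the weekly prompt by hand gets tedious. Below is a minimal Python sketch of how you might automate that step — gathering the week's transcript files and combining them with the step 2 instructions into one paste-ready prompt. The `build_weekly_prompt` helper and the `YYYY-MM-DD_client.txt` filename convention are our own assumptions, not part of the guide; the report itself is still generated by ChatGPT.

```python
from pathlib import Path

# Report instructions, mirrored from step 2 of the guide above.
INSTRUCTIONS = (
    "Read the attached transcripts. Create a weekly report template. "
    "For every objection: Number the objection, State the objection clearly, "
    "Provide 3 descriptive bullets on the context, List which lead or prospect "
    "surfaced it, Provide two short, punchy rebuttals."
)

def build_weekly_prompt(transcript_dir: str, week_prefix: str) -> str:
    """Collect transcripts whose filenames start with the week's date prefix
    (e.g. '2025-02-09') and combine them with the report instructions."""
    parts = [INSTRUCTIONS, ""]
    for path in sorted(Path(transcript_dir).glob(f"{week_prefix}*.txt")):
        # Assumed filename convention: YYYY-MM-DD_clientname.txt
        parts.append(f"--- Transcript: {path.stem} ---")
        parts.append(path.read_text(encoding="utf-8"))
    return "\n".join(parts)
```

Paste the returned string into a new thread in your "Sales Objections" project, or send it through the API if you prefer a fully scripted pipeline.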

PRESENTED BY TELY AI

🔎 Are you invisible in AI search?

The Rundown: You’re in a niche industry. Customers search on Google, ChatGPT, and Perplexity, but your company doesn’t show up because your website doesn’t answer their questions. Tely AI analyzes the questions your customers ask and automatically creates and publishes content that answers them on your website, bringing high-quality leads on autopilot.

With Tely AI, you can:

  • Achieve 20%+ monthly organic growth

  • Get indexed on Google, ChatGPT, and Perplexity in as little as 1 week

  • Enjoy full automation for topics, writing, and publishing

  • Get discovered by buyers already searching for your solution

Get leads from Google and ChatGPT on autopilot.

SELF-DRIVING CARS & AI

🚗 Waymo taps Genie 3 to train self-driving cars

Image source: Waymo

The Rundown: Waymo just introduced the Waymo World Model, a driving simulator built on DeepMind's Genie 3 that generates hyper-realistic scenarios the company's fleet of self-driving cars has never encountered, helping it handle extreme edge cases.

The details:

  • The model takes Genie 3's visual knowledge and converts it into paired camera and lidar outputs, helping it dream up scenarios its cars have never actually seen.

  • Engineers can reshape scenes with text prompts, driving inputs, or layout edits (like changing weather or adding obstacles) to test "what if" responses.

  • Waymo found a workaround for Genie 3's short memory by running footage at 4x speed, stretching simulations to cover longer driving tasks.

Why it matters: Google's Street View data gave Waymo a head start in mapping the real world for its cars, but world models can now generate the extreme edge cases that no amount of road miles can produce. Waymo’s use of Genie is a prime example of one of the top use cases for world models — simulations for robotics training data.

QUICK HITS

🛠️ Trending AI Tools

  • 💻 Codex App - OpenAI’s Mac app interface for managing agents

  • 🚀 Composer 1.5 - Cursor’s updated in-house agentic coding model

  • 🎧 Audiobooks - ElevenLabs’ AI-powered narration suite for audiobooks

  • ⚙️ Context Engine MCP - Augment Code’s semantic search in coding agents

📰 Everything else in AI today

Teleport announced an open-source blueprint for securing agentic AI. Join the Feb 19 webinar to learn why identity is the foundation for scaling agents.*

Sam Altman reportedly told employees that ChatGPT is surpassing 10% monthly growth, Codex weekly usage is up 50%, and a new updated model is coming this week.

Anthropic is set to raise a new funding round of $20B+ next week, according to a new report from Bloomberg — pushing the company’s valuation to $350B.

ElevenLabs launched Audiobooks, a full production suite powered by AI-generated narration for authors to streamline audiobook creation and distribution.

Anthropic is eyeing at least 10GW of data center capacity in the coming years, hiring Google and Stack Infrastructure execs to lead the push into leasing its own facilities.

*Sponsored Listing

COMMUNITY

🤝 Community AI workflows

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Clay D. in Carson City, NV:

"After serving in combat in Iraq, I was initially rated at 50% disabled by the VA. I knew that rating did not fully reflect the severity of my conditions...

I provided ChatGPT with my medical records, original claim, and publicly available VA Board of Veterans’ Appeals decisions from other similar cases. I instructed it to identify approval and denial patterns, apply relevant claim law, and rewrite my claim to avoid errors. The result was a stronger submission that led to a 100% VA disability rating..."

How do you use AI? Tell us here.

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown

