AI

AI 'godmother' calls for spatial intelligence

Zach Mink • 6 minutes

Read Online | Sign Up | Advertise

Good morning, AI enthusiasts. AI ‘godmother’ Dr. Fei-Fei Li just teased the next big leap in AI — spatially intelligent systems that could grasp the physics of the real world.

These systems could mark some big breakthroughs, but the question is: are we ready to take AI from understanding language to understanding the intricate details of the world around us?

P.S. We’re hiring a copywriter to test new AI tools and create educational materials that help millions understand and leverage AI. Apply here.


In today’s AI rundown:

  • AI ‘godmother’ advocates for spatial intelligence

  • Anthropic’s big cost advantage over OpenAI

  • Turn spreadsheet data into insights with Copilot

  • GPT-5 cracks a full 9x9 Sudoku puzzle

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

WORLD LABS

🤖 AI ‘godmother’ advocates for spatial intelligence

Image source: Reve / The Rundown

The Rundown: Famed AI specialist Dr. Fei-Fei Li just published a new essay detailing why the next breakthrough in AI will come from spatial intelligence, or systems that can understand, reason about, and generate 3D, physics-consistent worlds.

The details:

  • Li argues that while LLMs have mastered abstract knowledge, they lack the ability to perceive and act in space (things like estimating distance and motion).

  • She said spatial understanding is the cognitive core of human intelligence and a crucial step to take AI from language to perception and action.

  • World models, Li said, will be key to building this intelligence, but they need the ability to create realistic 3D worlds, understand inputs like images and actions, and predict how those worlds change over time.

  • She added that these models will ultimately unlock new advances in robotics, science, healthcare, and design by enabling AI to reason in the real world.

Why it matters: World models that understand how objects move and interact could one day predict molecular reactions, model climate systems, or test materials. The challenge lies in teaching AI real-world physics, but momentum is building fast with Li’s World Labs, Google, and Tencent all racing to bring spatially intelligent systems to life.

TOGETHER WITH LOVART

💨 Your AI design agent, 80% faster

The Rundown: Lovart is the AI design agent built for visual collaboration, with 3M+ users turning prompts into brand-ready visuals, videos, and decks all in one place. Its new Fast Mode makes creation up to 80% faster, while multi-model blending (Sora 2, Veo 3.1, Nano Banana) lets users mix motion, sound, and visuals seamlessly.

With Lovart, you can:

  • Turn ideas into ready-to-use assets in seconds

  • Skip design back-and-forths with an AI that gets your brand

  • Blend leading models for cinematic, ad-ready results

Try Lovart now and see why the platform just passed $30M ARR.

ANTHROPIC

🤑 Anthropic’s big cost advantage over OpenAI

Image source: Reve / The Rundown

The Rundown: Anthropic reportedly projects a major cost advantage over OpenAI — expecting to spend far less on compute for training and running its AI models over the next few years, according to The Information.

The details:

  • Anthropic estimates $6B in compute costs for 2025 versus OpenAI’s $15B, rising to $27B by 2028, compared to OpenAI’s $111B.

  • The savings are expected from the company’s use of chips from Amazon, Nvidia, and Google for specialized tasks, unlike OAI’s heavy reliance on Nvidia.

  • The news comes after Anthropic raised its revenue estimates, saying it expects to be cash flow positive by 2027 and generate $70B in revenue by 2028.

  • OpenAI, on the other hand, expects to hit the $100B revenue mark in 2028 but likely won’t be cash flow positive by 2030.

Why it matters: Anthropic is taking a quieter, more disciplined path, building AI through efficiency and enterprise focus (80% of its revenue comes from its API). OpenAI, meanwhile, is chasing breadth with a product-heavy push across ChatGPT, research, Atlas, and more. How these choices play out will shape the next phase of AI.

AI TRAINING

 📊 Turn spreadsheet data into insights with Copilot

The Rundown: In this tutorial, you will learn how to use Microsoft Copilot Desktop's Voice and Vision features to analyze Google Sheets or Excel data hands-free, asking questions aloud and getting instant insights without typing formulas.

Step-by-step:

  1. Install Microsoft Copilot from the Microsoft Store (Windows) or App Store (macOS 14.0+/M1 chip), open the app, and sign in with your Microsoft account

  2. Go to Settings via the profile icon, toggle on “Voice Mode” and “Copilot Vision,” then open your Google Sheets/Excel file in the browser

  3. Say “Hey Copilot,” then click the specs icon (eyeglasses) on the toolbar to enable Vision mode — Copilot scans and confirms it sees your data

  4. Ask analysis questions: “What’s the most revenue-generating product?” or “Calculate total revenue,” and Copilot highlights cells and explains its calculations

  5. Close the toolbar, then prompt: “Draft a professional analysis report with Executive Summary, Top Performers table, and Key Insights”

Pro tip: Use this workflow for learning new skills, reading technical documents, or studying articles.
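If you want to sanity-check Copilot's answers, the two sample questions above boil down to simple aggregations. Here's a minimal pure-Python sketch over invented sales rows (the data and field names are hypothetical, standing in for a spreadsheet export):

```python
# Hypothetical sales rows, standing in for an exported spreadsheet.
rows = [
    {"product": "Widget", "revenue": 1200.0},
    {"product": "Gadget", "revenue": 3400.0},
    {"product": "Widget", "revenue": 800.0},
]

# "Calculate total revenue" — sum the revenue column.
total_revenue = sum(r["revenue"] for r in rows)

# "What's the most revenue-generating product?" — group by product, take the max.
by_product = {}
for r in rows:
    by_product[r["product"]] = by_product.get(r["product"], 0.0) + r["revenue"]
top_product = max(by_product, key=by_product.get)

print(total_revenue)  # 5400.0
print(top_product)    # Gadget
```

Spot-checking one or two questions this way is a quick habit for catching the occasional misread cell.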

PRESENTED BY WARP

⚙️ Beyond commands: The future of the terminal

The Rundown: Warp fuses the terminal and IDE into one place, with AI agents built in. Edit files, review diffs, and ship code, all without leaving the platform that is trusted by over 600k developers and ranks ahead of Claude Code and Gemini CLI on Terminal-Bench.

Ask Warp agents to:

  • Debug your Docker build errors

  • Summarize user logs from the last 24 hours

  • Onboard you to a new part of your codebase

Download Warp for free and get bonus credits for your first week.

SAKANA AI

🧩 GPT-5 cracks a full 9x9 Sudoku puzzle

Image source: Sakana AI

The Rundown: GPT-5 just became the first AI model to solve a full 9x9 Sudoku puzzle, according to Sakana AI’s Sudoku-Bench, a benchmark designed to test deep reasoning, spatial logic, and creativity.

The details:

  • Launched in May, Sudoku-Bench tests LLMs on classic and modern Sudoku variants that combine multiple rule sets and demand long, multi-step reasoning.

  • No model had previously solved a full 9x9 puzzle until GPT-5 cracked it, showing better spatial and logical reasoning than its predecessors.

  • GPT-5 also achieved a 33% solve rate across puzzles — roughly double the previous leader’s, marking a major step forward in benchmark performance.

  • 67% of the puzzles remain unsolved, as models struggle with meta-reasoning (learning novel rules) and creative “break-in,” which humans use naturally.

Why it matters: GPT-5’s Sudoku breakthrough shows real progress in structured reasoning, but also how far AI still is from thinking like humans do. Closing that gap will require models that can combine mathematical logic, spatial awareness, and creative insight, essentially the same blend of skills we use to reason through the unknown.
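For contrast, classic 9x9 Sudoku (without the benchmark's variant rules) has long been trivial for conventional search. A textbook backtracking solver, sketched here purely as illustration (this is not Sakana's benchmark code), fits in a few lines — the benchmark's difficulty comes from the novel rule sets, which hard-coded solvers like this can't handle:

```python
def ok(grid, r, c, v):
    """Check that placing v at (r, c) respects row, column, and 3x3 box constraints."""
    if v in grid[r]:
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)  # top-left corner of the 3x3 box
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    """Fill zeros in a 9x9 grid in place via backtracking; return True if solvable."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if ok(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next value
                return False  # no value fits this cell
    return True  # no empty cells left
```

Even an empty grid solves in milliseconds this way — the hard part for LLMs is doing the equivalent search in natural language, under rules they've never seen.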

QUICK HITS

🛠️ Trending AI Tools

  • 🧠 Taku - Build custom apps and tools in your workspace

  • 📽️ Hedra - Now generate video and images in batches of eight

  • ⚙️ Replit - Build AI apps with 300+ model integrations, no API keys

  • 🔎 Moondream - Run real-time video analysis

📰 Everything else in AI today

Time magazine launched an AI agent to let users query and generate text and audio briefs from its 102-year-old archive.

OpenAI is offering one year of ChatGPT Plus for free to U.S. servicemembers and veterans who retired/separated from active duty within the last 12 months.

Intel’s CTO and AI chief, Sachin Katti, departed for OpenAI, prompting CEO Lip-Bu Tan to assume oversight of the chipmaker’s AI and advanced technology divisions.

Legal AI company Clio, which provides tools to manage cases, research, and workflows, raised $500M in Series G funding at a $5B valuation.

Gamma, the platform for creating AI-generated presentations, websites, and social media posts, surpassed $100M ARR and announced a $68M raise at a $2.1B valuation.

COMMUNITY

🤝 Community AI workflows

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Diego V. in Berlin, Germany:

“As a project manager, I have to run a lot of meetings, with their respective meeting notes. I created an agent in Copilot that automatically turns on the meeting’s transcription. At the end of each event, it creates a new meeting notes page in my Loop workspace, populating the transcription content, and applying a custom template with discussion points, action items + owners, and links to any referred documentation.”

How do you use AI? Tell us here.

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown

Robotics

Apple's $133B humanoid moonshot

Jennifer Mossalgue • 5 minutes


Good morning, robotics enthusiasts. Apple’s next big play might walk, talk, and do the dishes. Morgan Stanley predicts the company could pull in $133B a year from humanoids by 2040, outpacing today's entire hardware lineup outside the iPhone and rivaling its booming Services business.

If that vision holds, the iPhone maker’s future may not fit in your pocket much longer.


In today’s robotics rundown:

  • Apple could make $133B a year on robots

  • This robot can survive a blazing inferno

  • Amazon tests Whole Foods robot store

  • This ‘brain-free’ bot runs on air

  • Quick hits on other robotics news

LATEST DEVELOPMENTS

APPLE

🍎 Apple could make $133B a year on humanoids

Image source: Ideogram / The Rundown

The Rundown: Apple could rake in $133B annually from humanoids by 2040, according to a new Morgan Stanley analysis that reimagines the iPhone maker as a robotics powerhouse.

The details:

  • Led by Apple analyst Erik Woodring, Morgan Stanley predicts that Apple could own 9% of the global robotics market in 15 years.

  • Apple is already exploring personal home robots, including a mobile home robot and a motorized tabletop device, as reported earlier this year.

  • At ~$133B in annual robot revenue, Apple’s humanoid business would dwarf the Mac’s ~$30B a year and exceed Services’ 2024 revenue of $96B by about 38%.

  • Morgan Stanley pegs the broader humanoid and embodied AI market at $5 trillion by 2050.

Why it matters: Morgan Stanley sketches a product roadmap starting with a tabletop “hub” robot as early as 2027 — the on-ramp before Apple scales to full humanoids. The projection: by 2040, robotics could dwarf Mac and iPad combined, rivaling Services as Apple's second-largest business behind the iPhone.

PARADIGM ROBOTICS

🔥 This robot can survive a blazing inferno

Image source: Paradigm Robotics

The Rundown: A Texas startup is building a robot that rolls into burning buildings so firefighters don’t have to walk in blind. FireBot survives 1,200°F (650°C) for 15 minutes, streaming thermal video, gas readings, and live intel back to incident command.

The details:

  • FireBot v4 is a tracked firefighting scout built by Paradigm Robotics, a startup founded by University of Texas engineering alum Siddharth Thakur.

  • Built with stainless steel, tungsten, and titanium, it packs cameras and sensors that beam live video and data from inside a blaze via a handheld controller.

  • At about 300 lb. and four feet long, it crawls through debris to map hotspots and flag toxic plumes to forewarn crews.

  • The team is trialing the robot with Austin-area fire departments, pitching FireBot as a data scout rather than a hose-carrying robot like the Thermite RS3 or Shark Robotics’ Colossus.

Why it matters: First entry into a structure fire is often blind guesswork that puts firefighters at maximum risk. FireBot turns that crucial moment into a data problem, enabling commanders to see heat gradients, structural hazards, and gas concentrations before anyone steps through the door.

AMAZON

🛒 Amazon tests Whole Foods robot store

Image source: Amazon

The Rundown: Amazon just wired a Whole Foods outside Philadelphia with a robot-run “store within a store,” where a 10K-square-foot micro-fulfillment center pulls major brands like Tide and Pepperidge Farm alongside the grocer’s organics. 

The details:

  • Powered by Silicon Valley robotics startup Fulfil’s automated system, autonomous ShopBots fetch groceries from more than 12K stocked items.

  • ShopBots sort, retrieve, and stage products across multiple temperature zones behind the scenes while keeping aisles human-only.

  • In-store shoppers scan QR codes or use the Amazon app, then grab their bagged items at a pickup counter “within minutes,” Amazon says.

  • The test is part of Amazon’s broader grocery rethink, layering automation into Whole Foods after years of format experiments.

Why it matters: Amazon looks to crack the economics of grocery automation by hiding robots behind the walls instead of redesigning entire stores around them. If the hybrid model scales, it turns every Whole Foods into an instant-pickup hub without sacrificing the browse-and-buy experience, a tradeoff that helped keep Just Walk Out from taking off.

ROBOTICS RESEARCH

💨 This ‘brain-free’ bot runs on air

Image source: University of Oxford

The Rundown: Oxford engineers just built “brain-free” soft robots that run on air — no chips, code, or motors — using modular fluidic blocks that act as muscle, sensor, and valve.

The details:

  • Published in Advanced Materials, the study shows these “fluidic robots” can produce complex, rhythmic motion.

  • By feeding them steady pressure, the robots self‑oscillate and sync like fireflies, hopping and crawling without a single line of software.

  • The tiny modular units snap together like LEGO to form tabletop robots roughly the size of a shoebox that can hop, shake, or crawl.

  • The team built a crawler robot that detects table edges and stops before falling, and a shaker robot that sorts beads by tilting a rotating platform.

Why it matters: Encoding decision-making directly into a robot's physical structure eliminates the need for software to “think,” creating robots that are faster and more efficient. Next phase: larger, untethered versions that could operate in extreme environments where electronics fail, like deep underwater or in space.

QUICK HITS

📰 Everything else in robotics today

Elon Musk says Tesla will likely build a gigantic chip fab to supply the semiconductors needed for its expanding AI and robotics ambitions.

Agility Robotics and Figure AI had a brief Twitter spat after Figure’s CEO claimed first-in-the-world autonomous humanoid bragging rights and Agility pushed back.

XPeng's IRON humanoid moved so realistically at its AI Day debut that engineers had to cut open its leg onstage to prove it wasn't a person in a costume.

Elon Musk floated replacing prison time with a “more humane” alternative: assigning offenders a Tesla Optimus robot that shadows them and monitors for future crimes.

Salad chain Sweetgreen is selling its Spyce robotics unit — maker of the Infinite Kitchen makelines — to Wonder for $186.4M.

Goldman Sachs’ Nov. 3–6 field research reportedly finds China’s humanoid suppliers in a “capacity-first” push — pre-building 100K–1M units of annual output.

Elon Musk says Tesla will start production of the pedal- and steering-wheel-free Cybercab in April at its Austin factory.

REK (Robot Entertainment Kombat) is launching “REK America,” taking its VR‑piloted humanoid fighting league on a five-city U.S. tour.

Robotaxi maker WeRide began trading in Hong Kong last week, adding a dual listing alongside Nasdaq, as CEO Tony Han courts global capital to bankroll its costly R&D.

Poseidon Aerospace raised $11M in seed funding to develop Egret and Heron cargo UAVs, a logistics-first bet to strengthen battlefield supply chains.

The father–son duo behind the world’s fastest quadcopter has built a battery‑free drone that looks like a flying solar panel, designed to run entirely on sunlight.

At deadmau5’s Red Rocks show on Sunday, Figure deployed its humanoids onstage as part of the production, integrating them into the set and visuals (with mixed results).

COMMUNITY

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Jennifer, and Joey—The Rundown’s editorial team

AI

OpenAI calls for superintelligence safety

Zach Mink • 7 minutes


Good morning, AI enthusiasts. OpenAI expects AI to start making “significant discoveries” by 2028 — and is calling on industry and government to work together to prepare for the risks that could come with superintelligent systems.

The actual pace of progress is impossible to predict, but one thing’s clear: OAI is already setting the tone for how the world should adapt to the next wave of intelligence.


In today’s AI rundown:

  • OpenAI’s reccos to brace for superintelligent AI

  • The Rundown Roundtable: Our AI use cases

  • Get the most out of ChatGPT’s Deep Research

  • Research: McKinsey’s 2025 AI reality check

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

OPENAI

🛡️ OpenAI’s reccos to brace for superintelligent AI

Image source: Reve / The Rundown

The Rundown: OpenAI just shared its view on AI progress, predicting systems will soon become smart enough to make discoveries and calling for global coordination on safety, oversight, and resilience as the technology nears superintelligent territory.

The details:

  • OpenAI said current AI systems already outperform top humans in complex intellectual tasks and are “80% of the way to an AI researcher.”

  • The company expects AI will make small scientific discoveries by 2026 and more significant breakthroughs by 2028, as intelligence costs fall 40x per year.

  • For superintelligent AI, OAI said work with governments and safety agencies will be essential to mitigate risks like bioterrorism or runaway self-improvement.

  • It also called for safety standards among top labs, a resilience ecosystem like cybersecurity, and ongoing tracking of AI’s real impact to inform public policy.

Why it matters: While the timeline remains unclear, OAI’s message shows that the world should start bracing for superintelligent AI with coordinated safety. The company is betting that collective safeguards will be the only way to manage risk from the next era of intelligence, which may diffuse in ways humanity has never seen before.

TOGETHER WITH YOU.COM

💡 One major reason AI adoption stalls? Training.

The Rundown: AI implementation often goes sideways due to unclear goals and the lack of a clear framework. You.com’s checklist pinpoints common pitfalls and guides you to build a capable, confident team that can make the most out of your investments.

Inside, you’ll get:

  • Key steps for building a successful AI training program

  • Guidance on overcoming employee resistance and fostering adoption

  • A structured worksheet to monitor progress and share across your organization

Get your checklist today.

THE RUNDOWN ROUNDTABLE

💡 The Rundown Roundtable: Our AI use cases

Image source: Ideogram / The Rundown

The Rundown: The Rundown Roundtable is a new weekly feature where we poll members of The Rundown staff on how the team is using AI. This week: how we’re using AI in our daily lives outside of work.

Zach, AI Writer: The internet connection in our basement has been terrible, and I put ChatGPT on the task. After troubleshooting and reviewing images of the setup, it recommended a series of new adapters and splitters (also directing me right to the purchase pages). The solution led to at least a 10x increase in speed and completely upgraded our experience.

Rowan, Founder: I’ve been using Notion AI (Claude) as a copilot to better personalize my daily life/routine/schedule. I have a weird way of combining time blocking and to-do lists for productivity, and I host it all in Notion. Then I get Notion AI to review for inefficiencies and better optimize my time.

For example, it told me that since I work at my desk and remotely, I should move my gym session to mid-day and treat it as a work break to recharge instead of end-of-day.

Jennifer, Tech & Robotics Writer: I recently took my daughter, who has complex food allergies, to Spain. Since I’m not great at Spanish, I asked ChatGPT to create a printable guide for restaurant servers that explained her allergies in detail and asked them to double-check ingredients with the chef. I also used it to help me practice what to say in Spanish and to suggest local dishes she could probably enjoy safely.

AI TRAINING

🧐 Get the most out of ChatGPT’s Deep Research

The Rundown: In this tutorial, you’ll learn how to use ChatGPT’s Deep Research to automatically browse the web, analyze dozens of sources, and generate structured, cited reports for market, customer, or competitive intelligence.

Step-by-step:

  1. Start a new chat in ChatGPT, click the + icon, and select Deep Research to activate the agent that runs multi-step web research and compiles insights

  2. Write a research prompt describing your goal (e.g., “Conduct market research for household robotics and identify ICP, pain points, and distribution strategy”), then answer any clarifying questions it asks

  3. Submit your request. Deep Research will browse the web for 5–30 minutes, analyze sources, and build a fully cited report you can track in real time

  4. Review the final report for insights, trends, and competitor data, then export it as a PDF/link. You can even attach it to a custom GPT for ongoing intelligence

Pro tip: Use Deep Research for projects requiring verified data. Give detailed context and measurable objectives to ensure the report is both comprehensive and actionable.

PRESENTED BY ATLASSIAN

🤝 AI’s collaboration problem

The Rundown: AI adoption has doubled, and workers are saving over an hour daily, but only 4% of Fortune 1000 executives report efficiency gains — why? Atlassian’s new AI Collaboration Index: Executive Insights report reveals a disconnect between personal productivity and true business transformation.

Key findings from 200 executives and 12,000 knowledge workers include:

  • The elite 4% seeing transformation are building connected systems

  • The missing link: AI helps individuals but fails at team collaboration

  • The biggest AI opportunities within marketing, engineering, and human resources

  • Companies prioritizing experimentation over perfect strategy see 2x more innovation

Read the full report to uncover what organizations achieving real AI transformation are doing differently.

AI RESEARCH

🧠 McKinsey’s 2025 AI reality check

Image source: McKinsey

The Rundown: McKinsey released its State of AI 2025 survey of nearly 2K organizations, revealing that while almost every company now uses AI, most are stuck in pilots, with only a fraction achieving enterprise-wide impact or scaling agents.

The details:

  • The survey found that 88% of companies now use AI somewhere, but most of them are in experimentation or pilot phases, with just 33% actually scaling it.

  • While 39% reported EBIT impact from AI, just 6% achieved an impact of 5% or more, largely by redesigning workflows and using it to drive innovation.

  • 62% are working with AI agents, but adoption is early, with 39% experimenting and just 23% scaling them, mostly in IT and knowledge management.

  • About 32% of companies expect workforce reductions of 3% or more next year, while 13% expect increases. Larger firms are more likely to predict cuts.

Why it matters: The key lesson comes from the high performers — the few seeing real bottom-line impact from AI. Their success shows that the real value of AI comes not from efficiency gains, but from redesigning workflows, scaling across functions, and using it to fuel growth and innovation.

QUICK HITS

🛠️ Trending AI Tools

  • ⚡️ Semrush One: Measure, optimize, and grow visibility from Google to ChatGPT, Perplexity, and more*

  • 📽️ Sora 2: OpenAI’s video AI, now adding watermarks with account IDs

  • 💻️ Higgsfield: AI video platform, now with a workspace for teams

  • 🤖 Grok-4 Fast: xAI’s lighter model, upgraded with a 2M token context window

*Sponsored Listing

📰 Everything else in AI today

Google introduced the File Search Tool, a fully managed RAG system that provides a simple, integrated, and scalable way to ground Gemini with users’ data.

OpenAI wrote a letter last week asking the Trump administration to expand a Chips Act tax credit to cover AI data centers, servers, and electrical grid components.

Google added new capabilities in Vertex AI Agent Builder, including SOTA context management, single command deployment, and observability and evaluation features.

UK firms plan 3% pay raises next year, but 1 in 6 expect AI to reduce headcount — some by over 10% — amid the weakest hiring outlook since the pandemic.

OpenAI expanded Codex access with the launch of a cost-efficient GPT-5-Codex-Mini, 50% higher rate limits, and priority processing for Pro and Enterprise users.

COMMUNITY

🤝 Community AI workflows

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Zack T. in Singapore:

“As an app marketer with limited knowledge in HTML, I upload reference designs I find online and ask ChatGPT to produce HTML templates for promotion pages, listing the parameters needed such as deal name, usual price, promotional price, discount %, image link, and link URL. Then I upload an Excel file with columns of parameters for each deal, and paste the output HTML into the source code of the promotion pages.”
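The spreadsheet-to-HTML step in this workflow can also be scripted directly. Here's a minimal Python sketch using only the standard library (the template markup, field names, and sample row are hypothetical, mirroring the parameters Zack lists):

```python
import csv
import io
from string import Template

# Hypothetical promotion-page snippet with the reader's listed parameters:
# deal name, usual price, promotional price, discount %, image link, link URL.
PAGE = Template(
    '<div class="deal">'
    '<img src="$image"><h2>$name</h2>'
    '<p><s>$usual</s> $promo ($discount off)</p>'
    '<a href="$url">Buy now</a></div>'
)

# Stand-in for the uploaded spreadsheet, exported as CSV: one row per deal.
SHEET = """name,usual,promo,discount,image,url
Blender,$99,$79,20%,img/blender.png,https://example.com/blender
"""

def render(sheet_csv: str) -> list[str]:
    """Fill the template once per spreadsheet row."""
    return [PAGE.substitute(row) for row in csv.DictReader(io.StringIO(sheet_csv))]

html = render(SHEET)
print(html[0])
```

A scripted version like this trades ChatGPT's flexibility for repeatability: the column headers must match the template's placeholders exactly, but every run produces identical markup.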

How do you use AI? Tell us here.

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer—the humans behind The Rundown

AI

Adobe’s big AI leap for creators

Rowan Cheung • 6 minutes


Good morning, AI enthusiasts. Adobe just wrapped up MAX 2025, taking its most ambitious leap yet toward an AI-powered future of creativity.

From embedding agentic assistants directly inside its apps to uniting the top AI models under one plan, the company is reshaping how creators imagine, design, and deliver content.

To explore these innovations and what they mean for both creators and enterprises, we sat down with David Wadhwani, President of Adobe’s Digital Media business, for an exclusive Q&A.


In today’s AI rundown:

  • Adobe’s vision for the AI era

  • Adobe Firefly, the all-in-one creative AI studio

  • AI assistants for creative work

  • Partnering to advance AI-powered creativity

  • The ultimate creative skill for the AI age

LATEST DEVELOPMENTS

ADOBE’S VISION

🔮 Adobe’s vision for the AI era

The Rundown: At MAX 2025, Adobe unveiled a unified AI strategy bringing the top creative models—spanning image, audio, and video—into one plan, while introducing conversational experiences powered by agentic AI across Adobe Firefly, Photoshop, and Adobe Express.

Cheung: You just wrapped up Adobe MAX, where AI took center stage. How is Adobe evolving its vision for this era of creativity, and what stood out most this year?

Wadhwani: I’ve attended MAX for 22 years, and our strategy has never been more important as creators navigate rapid changes in AI. At MAX, we introduced a single plan with access to leading models and creative tools for images, audio, and video.

Users can now generate unlimited images and videos through Dec. 1. The response has been incredible — creators are exploring the new Firefly app, and businesses are scaling up content production with our enterprise solutions.

Wadhwani added: As AI reshapes how we can express ideas, our vision is to put creators at the center of two major forces reshaping creativity today: generative tools powered by AI models and conversational interfaces powered by AI agents.

We’re developing the best creative AI tools for a growing universe of customers. Our apps now integrate models from Adobe, Google, OpenAI, Runway, Luma AI, and others. We’re also bringing conversational experiences to Firefly, Adobe Express, and Photoshop.

Why it matters: By doubling down on generative and agentic experiences, Adobe is reimagining creativity as a shared process where humans imagine and AI builds — amplifying ideas at unprecedented scale. The company’s big bet: the next creative advances will be via collaboration between human intuition and machine intelligence.

FIREFLY

🧠 Adobe Firefly, the all-in-one creative AI studio

The Rundown: Adobe is turning the Firefly app into a full creative AI studio, combining third-party models and its own commercially safe AI with pro-grade video, audio, and image tools so creators can ideate, produce, and deliver everything in one place.

Cheung: The Adobe Firefly app has evolved into an all-in-one creative AI studio. What’s the biggest change in how users will experience it?

Wadhwani: Firefly now supports every stage of content creation, with several AI model options and video, audio, imaging, and design tools—including studio-quality features like Generate Soundtrack, Generate Speech, and a timeline-based video editor.

Creators can now go from a spark of inspiration to a finished piece of content without ever leaving the app.

Cheung: You’ve also continued to work on Firefly models. Can you tell me more about the latest Firefly model innovations showcased at MAX?

Wadhwani: We introduced Firefly Image Model 5, which generates native 4MP images with photorealistic lighting, natural textures, and anatomical accuracy, while maintaining coherence across complex scenes. 

New editing features like Prompt to Edit and Layered Editing give creators more control and flexibility. Audio capabilities have also expanded, with Generate Soundtrack powered by the Firefly Audio Model and Generate Speech, a text-to-speech tool built on Firefly Speech Model and ElevenLabs’ AI.

Wadhwani added: With all Firefly models trained only on content Adobe has rights to use, and not on user data, our models ensure creators can confidently use what they generate in their work and campaigns.

Why it matters: Adobe is bridging the fragmented creative experience that has long forced creators to stitch together different tools. With the Firefly app, the entire process (from idea to output) happens seamlessly in one place. It’s a shift from juggling software to simply creating.

AGENTIC AI

🤖 AI assistants for creative work

The Rundown: Adobe is embedding agentic AI in its apps, turning assistants into active teammates that can perform repetitive tasks, automate workflows, and coordinate across projects, so that creators can focus on creating.

Cheung: Tell us about the agentic capabilities built by Adobe. What led to introducing agents directly inside tools, rather than offering them as new standalone products?

Wadhwani: By embedding AI assistants directly into apps like Photoshop, Express, and Firefly, we let them help creators move faster by automating repetitive tasks, offering personalized suggestions, and guiding creatives through complex workflows, with every action happening in context.

In Express, you can iterate on content with a simple chat and make changes without starting over. In Photoshop, you can ask for help organizing assets or applying edits in bulk and easily switch between conversation and hands-on editing – without ever leaving your canvas.

Wadhwani added: We also previewed Project Moonlight, a personal orchestration assistant for Firefly that connects workflows across multiple Adobe apps and beyond.

Why it matters: Agents that execute, adapt, and collaborate free creators from repetitive tasks and turn time once spent managing processes into time spent making ideas real. As these agents grow more capable, they could evolve into true creative partners that anticipate needs, refine ideas, and accelerate the entire creative cycle.

PARTNERSHIPS

🤝 Partnering to advance AI-powered creativity

The Rundown: Adobe has partnered with Google and YouTube to give creators more choice and flexibility to work with industry-leading tools, access top AI models, and create with confidence.

Cheung: You also announced several new partnerships at Adobe MAX. Can you elaborate on what they are and why they’re important?

Wadhwani: The partnerships we announced reflect our commitment to giving creators the best tools, the best models, and the confidence to bring their ideas to life on their terms and true to their vision. 

Our expanded partnership with Google Cloud will bring Google's advanced AI models, including Gemini, Veo, and Imagen, into Adobe apps. Through Adobe Firefly Foundry, our enterprise customers will also be able to customize Google's models with their own proprietary data to generate on-brand content experiences at scale.

We also announced a new partnership with YouTube, where creators will be able to use Premiere Mobile to access exclusive YouTube Shorts templates with Premiere’s pro-level mobile video editing, making it easier than ever to capture, edit, and publish standout content directly from their smartphones.

Why it matters: By partnering with giants like Google, Adobe is betting that the future of creativity will be shaped by collaboration, not competition. These partnerships also mark a shift toward ecosystem flexibility, where professionals can combine the best technology while staying true to their brand, workflow, and vision.

FUTURE

🚀 The ultimate skill for the age of creative AI

The Rundown: As AI takes on more of the technical heavy lifting, Adobe believes the creators who stand out will be those who bring creative direction—the ability to guide the technology with vision and style—to the forefront.

Cheung: What skills or mindsets will become most critical for creators to thrive in this AI-driven world?

Wadhwani: The most valuable skill in the AI era isn’t technical, it’s creative direction: the ability to guide technology with imagination, taste, and intent. 

As AI handles more of the mechanical, repetitive work, what will continue to set creators apart is their vision, storytelling, and willingness to experiment across mediums. Those who embrace AI as another instrument in their creative toolbox will unlock entirely new ways to express themselves. 

At Adobe, we’re building tools that make AI-powered creation effortless, so creators can focus on what only humans can do: imagine, connect, and move people through the power of creativity. Human creativity and emotion can’t be replaced.

Why it matters: In a world where AI can generate with speed and scale, Wadhwani believes human creativity remains the ultimate edge. Like many others in creative AI, Adobe also believes the future of creative work belongs to those who see AI as a collaborator, one that enhances imagination and brings ideas to life with greater impact.

Tech

Musk wins $1T pay package

Jennifer Mossalgue • 5 minutes

Read Online | Sign Up | Advertise

Good morning, tech enthusiasts. Elon Musk just scored the biggest payday in corporate history — a potential $1T windfall, now officially approved by Tesla’s board.

The catch? He has to rocket Tesla’s value from $1.4T to $8.5T in 10 years — no easy feat. It’s meant to keep him loyal to the EV grind. Instead, he’s dreaming of dominion over robots, not roads.


In today’s tech rundown:

  • Tesla clears Musk’s $1T pay package

  • Startup builds artificial womb for preemies

  • IKEA’s smart home lineup just blew up

  • Australia gives solar power for free

  • Quick hits on other tech news

LATEST DEVELOPMENTS

TESLA

🤑 Tesla clears Musk’s $1T pay package

Image source: Ideogram / The Rundown

The Rundown: Tesla shareholders delivered an overwhelming endorsement for Elon Musk's unprecedented compensation package, with over 75% voting to award the CEO up to $1T in stock over the next decade.

The details:

  • The compensation requires Tesla to hit a massive $8.5T market cap — roughly six times today's valuation and about 70% above Nvidia's record.

  • The company must also hit operational milestones, including 20M vehicle deliveries, 1M commercial robotaxis, and 1M humanoids sold.

  • Doing so would also grant him an additional 12% stake in the company and boost voting control.

  • The approval comes despite opposition from proxy advisory firms Glass Lewis and ISS, which questioned the sheer magnitude of dilution and key-person risk.

Why it matters: The world’s richest man just talked Tesla into maybe crowning him the first trillionaire — but only if he can pump its value to six times what it is now. That’s a big ask for a company whose robotaxis still need human chaperones, whose Cybertruck bombed, and whose Chinese competition is feasting on its market share.

BIOTECH

👶🏼 Startup builds artificial womb for preemies 

Image source: TU/e, Bart van Overbeeke

The Rundown: A Dutch startup is developing an artificial womb, a fluid‑filled incubator that mimics the uterine environment to keep premature babies born between 22 and 24 weeks alive long enough for their lungs and brains to mature.

The details:

  • The system uses an artificial placenta roughly the size of a human fist that connects to the baby's umbilical cord, delivering oxygen and nutrients.

  • A double-layered sac, dubbed AquaWomb, is designed to mimic the uterine environment, complete with resistance against kicks to strengthen muscles.

  • Babies born at 22 weeks have only a 10% survival chance with high risks of lung disease and neurological damage, but two weeks later, that jumps to 60%.

  • Its design prioritizes parental bonding with access ports for touch and a "uterus phone" that transmits parents' voices and heartbeats through the fluid.

Why it matters: The FDA is reportedly reviewing data to consider human trials, while U.S. firm Vitara Biomedical has raised over $125M for similar “biobag” tech. If the approach succeeds, it could rescue more fragile newborns and slash the risk of lasting complications.

IKEA

🏡 IKEA’s smart home lineup just blew up

Image source: IKEA

The Rundown: IKEA’s not just dabbling in smart homes anymore — it’s going all-in. The Swedish giant just dropped 21 new Matter-compatible gadgets spanning lighting, sensors, and controls, all built to work with any platform, no brand lock-in required.

The details:

  • The lineup is built on Thread for fast, reliable connections and plugs into Apple Home, Google Home, and Alexa with the same setup flow.

  • IKEA’s DIRIGERA hub now acts as a Matter controller and a bridge, so new gear pairs easily, while many older IKEA devices still work in your setup.

  • The new smart bulbs, already a top seller for IKEA, come in 11 versions, with choices for dimmable white or full color.

  • Five sensors cover everyday needs: motion, door/window, temperature and humidity, air quality (CO₂ and PM2.5), and water leaks.

Why it matters: By embracing Matter and Thread, IKEA is cutting through the chaos of competing smart home standards, making its gear work seamlessly with whatever ecosystem you already use. It’s a practical step toward a future where setting up connected devices feels as easy as screwing in a light bulb, at least that’s the promise.

CLIMATE TECH

☀️ Australia gives solar power for free

Image source: Mark Stebnicki on Pexels

The Rundown: Australia is offering millions of households up to three hours of free electricity each day starting in July 2026 under a new “Solar Sharer” scheme that passes the country’s midday rooftop-solar glut on to customers, panels or not.

The details:

  • The program leverages Australia's world-leading solar boom — one in three homes has panels — to solve the grid's midday oversupply problem.

  • The free period will be set in the midday window, with customers required to opt in; apartment dwellers can participate without owning rooftop panels.

  • It launches in New South Wales, South Australia, and southeast Queensland, with potential expansion to other states by 2027.

  • Proponents say shifting EV charging and laundry into the free window saves cash, but households that can’t time-shift might get dinged with higher rates.

Why it matters: Australia’s ultra-cheap rooftop solar — about AU$840 per kilowatt, roughly a third of U.S. costs — has blanketed one in three homes. If Solar Sharer scales, it could fast-track EV charging and a grid-wide fossil retreat, while testing whether time-of-use freebies cut bills or just penalize households that can’t shift.

QUICK HITS

📰 Everything else in tech today

Google is reportedly in early talks to deepen its Anthropic investment at a potential valuation north of $350B.

Meta is reportedly making billions of dollars every year from scam ads and illicit product promotions on its platform, according to Reuters.

Elon Musk said Tesla will unveil the production version of its second‑gen Roadster on April 1, 2026, adding that he chose April Fools’ Day for “some deniability.”

Anthropic is opening Paris and Munich offices in a global push that’s already added Tokyo, Seoul, and Bengaluru alongside its London, Dublin, and Zurich operations.

Snap shares rose about 9% after it announced a $400M partnership to integrate Perplexity’s AI-powered search directly into Snapchat.

Netflix is reportedly negotiating to license iHeartMedia’s video podcasts with an eye to exclusive distribution on its platform, according to Bloomberg.

SpaceX is buying an additional $2.6B in spectrum licenses from EchoStar, expanding a $17B September agreement as Starlink continues to add customers worldwide.

Klarna reportedly feared that up to 288K customer logins were exposed in a data leak, projecting roughly $41M in legal and remediation costs.

The White House will block Nvidia from selling its scaled‑down B30A AI chip to China, effectively shutting the company out of that market, reports The Information.

Peloton is recalling about 833K Original Series Bike+ exercise bikes after reports of seat posts breaking, with three complaints, including two injuries.

Amazon launched Kindle Translate in beta, an AI tool that lets self‑publishers translate ebooks at no cost, initially between English, Spanish, and German.

Volcanologists are using portable observatories — packed with thermal cameras, infrasound sensors, and gas detectors — to capture eruptions from safe distances.

Google said it will purchase 200K metric tons of carbon removal from Brazil-based Mombak by funding the acquisition and reforestation of Amazon farmland.

COMMUNITY

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Jennifer, and Joey—The Rundown’s editorial team

AI

China's open-source AI closes the gap

Zach Mink • 6 minutes

Read Online | Sign Up | Advertise

Good morning, AI enthusiasts. Nvidia CEO Jensen Huang said China was “nanoseconds” behind the U.S. in AI just days ago — and Moonshot AI’s new release suggests he wasn’t exaggerating.

With the open-source (!) Kimi K2 Thinking coming for models like GPT-5 and Claude Sonnet 4.5 at a fraction of the price, China’s next ‘DeepSeek’ moment may have just arrived.

Reminder: Our next live workshop is today at 4 PM EST! Join and learn how to build an AI foundation across your marketing tasks to see real results. RSVP here.


In today’s AI rundown:

  • Kimi K2 Thinking takes open-source to new level

  • OpenAI walks back federal backstop comments

  • Create polished presentations with Kimi K2 Slides

  • Microsoft sets up new Superintelligence Team

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

MOONSHOT AI

📶 Kimi K2 Thinking takes open-source to new level

Image source: Moonshot AI

The Rundown: Alibaba-backed Chinese startup Moonshot AI just released Kimi K2 Thinking, an open-source reasoning model that matches or exceeds models including GPT-5 and Claude Sonnet 4.5 across a series of benchmarks at much lower cost.

The details:

  • Kimi outperformed GPT-5 and Sonnet 4.5 on several agentic benchmarks, also achieving a new top score of 44.9% on Humanity’s Last Exam.

  • The model also showed large improvements over its predecessor from just four months ago on coding, though it still slightly trails the top models.

  • K2 Thinking can autonomously chain together 200-300 tool calls to accomplish tasks, and it also excels at creative writing.

  • The model reportedly cost under $5M to train, with its pricing coming in significantly below the current frontier models.

Why it matters: Nvidia’s Jensen Huang said just days ago that China is ‘nanoseconds’ behind in AI. Whether he was referencing Moonshot or not, this certainly speaks to that sentiment. K2 Thinking is the closest both open-source and Chinese labs have been to the frontier, with pricing that makes it a very serious alternative to top closed options.
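Those 200-300 autonomous tool calls follow the same basic loop most agentic models run: the model proposes a tool call, the harness executes it and feeds the result back, and the cycle repeats until the model emits a final answer. A minimal sketch with a stubbed model, purely illustrative — `fake_model`, `TOOLS`, and the message shapes here are hypothetical stand-ins, not Moonshot's actual API:

```python
# Minimal agent loop: the model proposes tool calls, the harness executes
# them and appends results to the conversation, until the model answers.

TOOLS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def fake_model(messages):
    """Stand-in for a tool-calling LLM: chains two calls, then answers."""
    tool_results = [m for m in messages if m["role"] == "tool"]
    if len(tool_results) == 0:
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    if len(tool_results) == 1:
        return {"tool": "mul", "args": {"a": tool_results[0]["content"], "b": 10}}
    return {"answer": tool_results[-1]["content"]}

def run_agent(model, question, max_steps=10):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        out = model(messages)
        if "answer" in out:                          # model is done
            return out["answer"]
        result = TOOLS[out["tool"]](**out["args"])   # execute the tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted")

print(run_agent(fake_model, "(2 + 3) * 10"))  # → 50
```

A real K2 Thinking run swaps `fake_model` for an API call and `TOOLS` for search, code execution, and the like — the loop itself is the part that scales to hundreds of steps.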

TOGETHER WITH NORTON

 Traditional browsers are going extinct

The Rundown: Forget passive browsing with no added context — Norton Neo is the world’s first safe, AI-native browser, making every interaction faster, safer, and more efficient.

Neo’s features include:

  • Zero-prompt productivity: AI that truly works for you

  • Configurable memory, unique to you

  • Private and safe by design

  • Context-aware, smart tab grouping

Try Norton Neo here and join the future.

OPENAI

⏮️ OpenAI walks back federal backstop comments

Image source: WSJ Tech Live

The Rundown: OpenAI CFO Sarah Friar reversed course after heavy criticism of her comments suggesting the company wanted federal guarantees for infrastructure spending, with Sam Altman saying that OAI opposes bailouts for private AI firms.

The details:

  • Friar initially told the WSJ that OAI sought a federal "backstop" to help finance AI investments, later saying she "muddied the point" with poor word choice.

  • White House AI czar David Sacks also rejected the idea of a federal bailout, saying other major frontier AI companies could replace any failed competitor.

  • Altman posted a statement rejecting government guarantees for private AI buildouts, saying "we should fail" if OpenAI "screws up."

  • Altman also addressed criticism of OAI becoming “too big to fail” and its massive spending, detailing revenue projections and future compute trends.

Why it matters: The AI leader’s spending was already under scrutiny, and Friar’s backstop comments poured gasoline on the fire. While the remarks don’t appear to reflect an intended position, OAI’s circular dealmaking, national infrastructure efforts, and escalating financial commitments are certainly creating more questions than answers.

AI TRAINING

🎉 Create polished presentations with Kimi K2 Slides

The Rundown: In this tutorial, you will learn how to use Kimi Slides to generate complete, professional presentations from a single prompt, handling layout, structure, and design automatically using a 1T parameter AI model.

Step-by-step:

  1. Go to Kimi, log in, and select Kimi Slides from the dashboard (free credits available for testing)

  2. Enter your presentation prompt (e.g., "Create a business plan presentation for a small coffee shop") and choose between Preset Mode (structured) or Adaptive Mode (creative/visual)

  3. Select a template from the minimalist to colorful design options, then click "Generate Slides" (takes 5-10 minutes for 14-18 slides)

  4. Review completed presentation, make any manual edits needed, then download as PPTX with fonts embedded to maintain formatting

Pro Tip: Experiment with different prompts and presentation styles to discover what fits your brand best.

PRESENTED BY FUEL iX

🛡️ Is your AI security ready for 2026?

The Rundown: AI vulnerabilities are multiplying, and defenses are struggling to keep up. 2026 will demand bold, proactive strategies — are your systems ready? Join Uncharted: The AI Safety & Security Summit for exclusive insights, proven tactics, and actionable plans to safeguard your organization.

Attend the online summit on Nov. 13 and:

  • Gain practical strategies for implementing secure AI solutions

  • Learn about cutting-edge AI security tools and techniques

  • Connect with experts shaping the future of AI safety

Register now.

MICROSOFT

🌟 Microsoft establishes new Superintelligence Team

Image source: Microsoft AI

The Rundown: Microsoft AI CEO Mustafa Suleyman announced the MAI Superintelligence Team, a research division dedicated to building advanced systems that solve specific problems in areas like medicine and energy over open-ended AGI.

The details:

  • Suleyman emphasized building “Humanist Superintelligence”, prioritizing AI that “always works for, in service of, people and humanity more generally.”

  • The team will focus on narrow, high-impact societal challenges, highlighting AI learning companions, medical superintelligence, and clean energy advances.

  • Suleyman’s Inflection co-founder Karen Simonyan is serving as chief scientist alongside poached researchers from DeepMind, OAI, and Anthropic.

  • The announcement comes in the wake of Microsoft’s new arrangement with OAI, allowing both to pursue superintelligence independently of each other.

Why it matters: We’ve seen all the major AI labs forge their own unique vibe over the last few years, and it feels like Suleyman has finally crafted one for Microsoft’s AGI efforts — a ‘humanist’ identity and distinct direction that felt missing for much of the tech giant’s initial OAI arrangement.

QUICK HITS

🛠️ Trending AI Tools

📰 Everything else in AI today

The State of Code Roundtable NYC, Nov 13 – explore the AI productivity paradox and learn how to boost code quality in the age of AI. Register for free.*

Nvidia CEO Jensen Huang said in an interview with the Financial Times that China is “nanoseconds behind America in AI” and that they are “going to win the AI race”.

Google’s 4x faster Ironwood TPU AI chips are set to be available in the “coming weeks,” with Anthropic already committing to use 1M of them to train and run Claude.

Perplexity launched an upgrade to its Comet AI assistant, featuring enhanced web interaction capabilities, multi-tab functionality, and performance improvements.

SoftBank Group and OpenAI introduced a new joint venture called SB OAI Japan, which will launch “Crystal intelligence,” an enterprise AI solution for Japan, in 2026.

*Sponsored Listing

COMMUNITY

🤝 Community AI workflows

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Michael H. in Lubbock, TX:

"I’ve created a strategic executive assistant that is trained on my business’s core values, mission statement, and leadership principles — which are nuggets of wisdom from over 30 business books, the Bible, and success metrics for my role, departments, and our business. I can run through conversations, situations, decisions, etc., and it’s shared among all of our 80+ leaders and used daily across the company."

How do you use AI? Tell us here.

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown

Robotics

Rivian spins off robotics startup

Jennifer Mossalgue • 5 minutes

Read Online | Sign Up | Advertise

Good morning, robotics enthusiasts. EV maker Rivian just spun out its second startup of the year — Mind Robotics, an industrial AI venture backed by $115M and a vague promise to turn factory-floor data into a “robotics data flywheel.”

While details are under wraps, could Rivian be quietly building the next big leap in factory intelligence, or something even more disruptive?


In today’s robotics rundown:

  • Rivian launches a robotics spinoff

  • Xpeng unveils next-gen robotaxis and humanoid

  • Humanoid startup K-Scale calls it quits

  • MIT teaches robots to map faster

  • Quick hits on other robotics news

LATEST DEVELOPMENTS

RIVIAN

🌪️ Rivian launches a robotics spinoff

Image source: Wikimedia Commons / Richard Truesdell

The Rundown: Rivian just launched Mind Robotics, its second spinoff this year — an industrial AI venture aimed at turning the EV maker’s factory-floor data into a commercial “robotics data flywheel.” So far, what that means is anyone’s guess.

The details:

  • Funding is already lined up, with roughly $115M in external seed capital to kickstart the new venture.

  • Rivian CEO RJ Scaringe announced the spinoff on an earnings call but offered limited details on what it will actually build or sell.

  • The company didn’t confirm employee transfers, though its shareholder letter nods to a “strong bench of technology talent,” hinting at internal hires.

  • In March, the company carved out its skunkworks micromobility team into a new venture called Also, which just debuted its first e-bike.

Why it matters: Rivian is staying quiet about its robotics plans, but U.S. automakers are racing to monetize their factory AI — GM announced similar plans in October, and Tesla is pitching its Optimus for manufacturing. With $115M committed, investors are betting that Rivian has learned something worth commercializing beyond the assembly line.

XPENG

🚖 Xpeng unveils next-gen robotaxis and humanoid

Image source: Xpeng

The Rundown: Xpeng just unveiled three new robotaxis, engineered for full self-driving and powered by its own AI chips, as well as its sleek next-gen humanoid, in male and female versions.

The details:

  • At its Xpeng AI Day, the company unveiled Iron, its next-gen humanoid with fluid, humanlike movement, slated for mass production by late 2026.

  • The new robotaxis run a vision‑only stack powered by four in‑house Turing AI chips per car, avoiding lidar and HD maps to cut hardware cost.

  • Xpeng said its VLA 2.0 model will power the robotaxis, humanoids, and its flying-car projects, unifying autonomy and embodied AI under one system.

  • Alibaba is partnering on the robotaxi rollout through its AutoNavi mapping unit and Amap ride-hailing app, with trials starting in Chinese cities next year.

Why it matters: Xpeng is clearly taking a page from Tesla’s playbook, skipping driver-assist tech to jump straight into driverless rides, controlling both the AI brains and the hardware while cutting costs with vision-only systems. If it works, the company runs its own ride network and mass-produces humanoids on the same tech stack.

K-SCALE

🤖 Humanoid startup K-Scale calls it quits

Image source: K-Scale / X

The Rundown: K‑Scale Labs, the year-old Palo Alto startup promising a low‑cost open‑source humanoid, is pulling the plug on K‑Bot preorders and returning deposits after failing to raise the cash to build at scale.

The details:

  • The startup’s K-Bot was pitched as a low-cost, open-source humanoid for developers, priced under $10K to undercut commercial rivals.

  • Founder Benjamin Bolte told customers that the company has laid off most of its staff and has “less than a month of runway.”

  • The Information reports that K-Scale pursued potential deals with 1X and Bot Co., but those talks fell through.

  • The core engineering team has now launched Gradient Robots, a new startup aiming to be “the open‑source Unitree for America.”

Why it matters: Bolte drew a sharp comparison between U.S. and Chinese capital markets, saying he expected a deeper U.S. appetite for a cost‑competitive domestic humanoid. In a final nod to its ethos, K‑Scale is open‑sourcing K‑Bot and Zeroth Bot so others can keep building.

MIT

🧭 MIT teaches robots to map faster

Image source: MIT

The Rundown: MIT researchers developed a system that lets robots rapidly map large, unpredictable environments by creating and stitching together smaller 3D submaps on the fly — solving a major bottleneck in robotic navigation.

The details:

  • The MIT system lets robots map large spaces by stitching together smaller 3D submaps, bypassing AI vision models’ 60-image limit. 

  • The breakthrough blends vintage computer vision math with modern AI to correct the distortions machine-learning models introduce into submaps.

  • It generates close-to-real-time 3D maps of complex spaces using just smartphone video, with less than 5 cm of error.

  • The system can improve search-and-rescue robots, extended reality apps for VR headsets, or help warehouse robots accurately locate and move inventory.

Why it matters: Current robot mapping systems can’t process the thousands of images required to navigate disaster zones in real time. MIT’s approach looks to make fast, accurate 3D mapping practical for search-and-rescue missions, warehouse automation, and VR headsets, with no special cameras or expert tuning required.
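MIT hasn't published its stitching code in this piece, but the "vintage computer vision math" it blends in is classically the rigid alignment of overlapping submaps: find the rotation and translation that best map one submap's points onto the other's, then merge. A minimal sketch under that assumption, using the Kabsch algorithm on synthetic point correspondences (not MIT's actual system):

```python
import numpy as np

def align_submaps(src, dst):
    """Rigid (rotation + translation) alignment of two 3D submaps whose
    rows are corresponding points, via the classical Kabsch algorithm."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # optimal rotation
    t = dst_c - R @ src_c                          # optimal translation
    return R, t

# Synthetic example: the same 40 points seen in two different frames.
rng = np.random.default_rng(0)
dst = rng.random((40, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
src = (dst - np.array([1.0, 2.0, 0.5])) @ R_true   # other submap's frame

# Stitch: map the source submap into the destination frame, then merge.
R, t = align_submaps(src, dst)
merged = np.vstack([dst, src @ R.T + t])
print(np.allclose(src @ R.T + t, dst, atol=1e-6))  # → True
```

Real pipelines estimate the correspondences too (and, per the article, use learned models to undo distortions first), but this alignment step is what lets many small submaps chain into one consistent large map.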

QUICK HITS

📰 Everything else in robotics today

Norway’s Physical Robotics, maker of the π humanoid and founded by Halodi Robotics’ Phuong Nguyen, has come out of stealth and announced $4M in fresh funding.

Volkswagen is building its own smart‑driving chip in China with Horizon Robotics, a homegrown partner, to power future VW models sold there.

Hullbot, an Australian ocean robotics firm, secured $16M in funding to develop autonomous underwater robots to clean and inspect ship hulls.

China is extending its EV‑and‑battery playbook into humanoids, reusable rockets, and LEO satellites in a push to win on scale and cost, The Information reports.

Elon Musk said on the All‑In podcast that Tesla aims to scale its robotaxi pilot to 500 cars in Austin and 1K in the Bay Area by year‑end.

China’s Leju Robotics unveiled Kuavo 5, a modular humanoid that can switch between walking and wheels, swap hands for different jobs, and work for hours.

Infravision raised $91M to scale its drone‑powered TX System for building and maintaining power lines, cutting costs on helicopter methods.

DJI launched its Neo 2 selfie drone, a sequel to last year’s model that adds forward lidar plus downward IR sensing for obstacle avoidance and safer follow‑me flight.

Adaptronics raised $3.6M to roll out its next‑gen electrostatic robotic grippers across Europe, moving from pilots to broader factory and logistics deployments.

COMMUNITY

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Jennifer, and Joey—The Rundown’s editorial team

AI

Apple taps Gemini for Siri overhaul

Zach Mink • 7 minutes

Read Online | Sign Up | Advertise

Good morning, AI enthusiasts. After years of delays, Apple seems to have finally picked a lane for its Siri AI overhaul — with one of its biggest rivals stepping in as a “behind-the-scenes” partner.

A reported $1B annual deal brings Google's Gemini under the voice assistant’s hood, making the anticipated spring release a seemingly make-or-break moment for the tech giant’s already messy AI situation.


In today’s AI rundown:

  • Apple taps Google’s Gemini for Siri overhaul

  • Ex-Meta designers launch Stream Ring AI wearable

  • Use AI to find patents and innovation opportunities

  • Edison Scientific debuts Kosmos AI scientist

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

APPLE & GOOGLE

📱 Apple taps Google’s Gemini for Siri overhaul

Image source: Reve / The Rundown

The Rundown: Apple reportedly finalized plans to deploy a custom 1.2T parameter version of Google's Gemini model for its long-delayed Siri overhaul, according to Bloomberg — committing roughly $1B annually to license the technology.

The details:

  • Gemini will handle summarization and multi-step planning within Siri, running on Apple's Private Cloud Compute infrastructure to keep user info private.

  • Apple also trialed models from OpenAI and Anthropic, with the 1.2T parameter count far exceeding the 150B used in the current Apple Intelligence model.

  • Bloomberg said the partnership is “unlikely to be promoted publicly”, with Apple intending for Google to be a “behind-the-scenes” tech supplier.

  • The new Siri could arrive as soon as next spring, with Apple planning to use Gemini as a stopgap while it builds its own capable internal model.

Why it matters: After years of delays and uncertainty around Siri’s upgrade, Gemini is the model set to bring the voice assistant into the AI world (at least in some capacity). Apple views the move as temporary, but considering the company’s struggles and employee exodus, building its own capable replacement is far from guaranteed.

TOGETHER WITH VANTA

⚠️ Knowledge gaps = security gaps

The Rundown: AI is moving faster than security teams can keep up — and 59% say AI risks outpace their expertise. Vanta's new State of Trust report surveyed 3,500 business and IT leaders across the globe to reveal how organizations are navigating this growing gap.

The data reveals:

  • 61% of teams spend more time proving security than improving it

  • AI-driven attacks are growing bigger, faster, and more sophisticated

  • Nearly half of leaders say AI gives them time for strategic security work

Download the State of Trust report to see what early adopters are doing to stay ahead.

SANDBAR

💍 Ex-Meta designers launch Stream Ring AI wearable

Image source: Sandbar

The Rundown: Sandbar, a startup founded by former Meta designers, launched Stream Ring — an AI wearable that captures whispered thoughts through a ring device and transcribes voice into organized notes while also doubling as a music controller.

The details:

  • Cofounders Mina Fahmi and Kirak Hong developed the ring after working on neural interfaces at CTRL-Labs, which was acquired by Meta in 2019.

  • Users activate recording by holding a touchpad rather than shouting wake words, with whisper-detection microphones converting speech to text.

  • The AI assistant responds in a synthesized version of the wearer's voice using ElevenLabs speech technology, enabling back-and-forth conversation.

  • The Stream Ring is available for preorder starting at $249 (plus a $10 subscription model), with a planned summer 2026 delivery.

Why it matters: Another wearable has entered the arena, with the Stream Ring continuing the infusion of AI voice tech across form factors, joining pendants, pins, and more. Simplicity may be a differentiator, but there is no shortage of competition from both other wearables and hardware like earbuds getting AI upgrades of their own.

AI TRAINING

 🔎 Use AI to find patents and innovation opportunities

The Rundown: In this tutorial, you will learn how to use Perplexity's AI-powered search to quickly find patents, analyze innovation gaps, and position your invention without infringement risk.

Step-by-step:

  1. Go to Perplexity and search naturally: "Are there any patents related to AI automations?" - Perplexity automatically activates Patent Research (beta), showing relevant filings, owners, and dates

  2. Refine with conversational queries: "Find active patents for AI-driven industrial automation and model drift detection", then follow up with "Summarize main claims" or "Show whitespace in this field"

  3. Toggle on Agent Mode for advanced analysis - the AI automatically retrieves patents from multiple jurisdictions, creates tables, and builds visualization charts (showing "12 steps completed")

  4. Review generated PNG charts showing patent clusters and risk zones, plus CSV files with patent IDs, titles, owners, and claims - identify which companies dominate and where opportunities exist

  5. Use results to inform product design by identifying saturated areas to avoid, high-opportunity/low-risk zones for innovation, and specific technologies or claims requiring caution

Pro tip: Start with a broad query to capture the full patent landscape. Then iterate: ask the agent to list patents by company, summarize claims, or visualize whitespace.

PRESENTED BY SYNK

💡 How Vibe Coding is changing secure development

The Rundown: AI coding tools are boosting productivity, but they're also introducing new security concerns. Join Snyk's live session on Nov. 20 at 11 AM ET to learn how to embed security in GenAI-powered development workflows before vulnerabilities make it to production.

Snyk Staff Developer Advocate Sonya Moisset will cover:

  • How to spot security vulnerabilities hidden in AI-generated code

  • Secure coding best practices tailored for GenAI workflows

  • Strategies for building AI-native apps with security from the start

Register now. Plus, ISC2 members will earn 1 CPE credit for attending live!

AI RESEARCH

🧪 Edison Scientific debuts Kosmos AI scientist

Image source: Reve / The Rundown

The Rundown: FutureHouse just announced the launch of its commercial spinout Edison Scientific, alongside the debut of Kosmos — an autonomous AI research system that beta testers report can complete six months of scientific work in a single day.

The details:

  • Kosmos coordinates cycles of literature review, data analysis, and hypothesis generation, processing 1,500 papers and executing 42k lines of code per run.

  • All of Kosmos’ outputs maintain full citation traceability for every claim, making them easily auditable down to specific lines of code.

  • 79% of Kosmos’ outputs were validated as accurate, with the AI reproducing unpublished findings and making new discoveries across multiple fields.

  • Edison Scientific will commercialize the platform following pharma demand, while FutureHouse continues nonprofit foundational research development.

Why it matters: Edison Scientific says the “era of AI-accelerated science is here,” with Kosmos continuing the trend of AI models removing the human-bandwidth limitation for research and analysis. These timeline-compressing abilities are set to completely transform the pace of progress across scientific domains.

QUICK HITS

🛠️ Trending AI Tools

  • 🧪 Kosmos - Edison Scientific’s next-generation AI scientist

  • 🗺️ Codemaps - Windsurf’s coding tool to understand & navigate codebases

  • 🎥 Sora App - OpenAI’s AI video platform, now available for Android users

  • 🧭 Google Maps - New Gemini integration for conversational assistance

📰 Everything else in AI today

OpenAI said it now has 1M+ business customers, becoming the fastest-growing platform in history, with ChatGPT for Work growing 40% in two months to 7M+ seats.

Stability AI largely prevailed in Getty Images’ UK High Court lawsuit over trademark infringement related to AI training, with Getty saying the ruling shows that even well-resourced companies “face significant challenges in protecting their creative works.”

xAI reportedly required employees to submit biometric data to train its "Ani" and other AI companions, telling staff the collection was a mandatory job requirement.

Google integrated its Gemini AI into Maps, enabling conversational navigation, multi-step questions, and directions based on visible buildings instead of just distances.

Famed ‘Big Short’ investor Michael Burry disclosed over $1B in put options against Nvidia and Palantir, following cryptic social media warnings about an AI bubble.

Snap is partnering with Perplexity to integrate its AI into Snapchat starting in 2026, with Perplexity paying $400M to reach the platform’s nearly 1B monthly users.

COMMUNITY

🤝 Community AI workflows

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Elaine in Toronto, Canada:

"I’ve found a simple but powerful way to learn from The Rundown AI. Every day, you feature a user story — and instead of just reading it, I copy their experience straight into ChatGPT and ask the model to teach me how to do the same, step by step. It feels like recreating a mini experiment every morning. Over time, this has become my favorite way to learn: turning other people’s discoveries into my own hands-on lessons."

How do you use AI? Tell us here.

🎓 Highlights: News, Guides & Events

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown

