Get the latest AI news, understand why it matters, and learn how to apply it in your work — all in just 5 minutes a day. Join over 2,000,000 subscribers.

Microsoft's homegrown AI debut
Read Online | Sign Up | Advertise
Good morning, AI enthusiasts. For years, Microsoft's AI strategy has been synonymous with OpenAI — but that narrative just got complicated.
The company's new MAI-Voice-1 and MAI-1-preview models mark its first homegrown AI, signaling a shift that could throw yet another wrench into the AI world’s most-watched partnership.
Reminder: Our next live workshop is today at 4 PM EST with The Rundown’s AI Educator, Nate Grahek — join and learn all the latest tips and tricks for getting the most out of ChatGPT. RSVP here.
In today’s AI rundown:
Microsoft releases homegrown AI
OpenAI’s gpt-realtime for voice agents
Create an AI agent to handle email support
Cohere’s SOTA enterprise translation model
4 new AI tools, community workflows, and more
LATEST DEVELOPMENTS
MICROSOFT
🤖 Microsoft releases homegrown AI

Image source: Microsoft
The Rundown: Microsoft just introduced MAI-Voice-1 and MAI-1-preview, marking its first fully in-house AI models and coming after years of relying on OpenAI's technology in a turbulent partnership.
The details:
MAI-Voice-1 is a speech generation model capable of generating a minute of speech in under a second, already integrated into Copilot Daily and Podcasts.
MAI-1-preview is a text-based model trained on a fraction of the GPUs of rivals, specializing in instruction following and everyday queries.
CEO Mustafa Suleyman said MAI-1 is “up there with some of the best models in the world”, though benchmarks have yet to be publicly released.
The text model is currently being tested on LM Arena and via API, with Microsoft saying it will roll out in “certain text use cases” in the coming weeks.
Why it matters: Microsoft's shift toward building in-house models introduces a new dynamic to its OpenAI partnership, while positioning it to better control its own AI destiny. While we await benchmarks and more real-world testing for a better understanding, the tech giant looks ready to pave its own path instead of being viewed as OAI’s sidekick.
TOGETHER WITH AUGMENT CODE
👋 Meet Auggie CLI
The Rundown: Augment Code is bringing the power of its AI coding agent and context engine right to your terminal with Auggie CLI, now generally available.
From standalone terminal sessions to every piece of your dev stack, Auggie CLI lets you:
Build features and debug issues
Get instant feedback and suggestions on your PRs and builds
Triage customer issues and alerts from your observability stack
Build with the AI coding platform that gets you, your team, and your code
OPENAI
🗣️ OpenAI’s gpt-realtime for voice agents

Image source: OpenAI
The Rundown: OpenAI moved its Realtime API out of beta, also introducing a new gpt-realtime speech-to-speech model and new developer tools like image input and Model Context Protocol server integrations.
The details:
gpt-realtime features nuanced abilities like detecting nonverbal cues and switching languages while keeping a naturally flowing conversation.
The model achieves 82.8% accuracy on audio reasoning benchmarks, a massive increase over the 65.6% score from its predecessor.
OpenAI also added MCP support, allowing voice agents to connect with external data sources and tools without custom integrations.
gpt-realtime can also handle image inputs like photos or screenshots, giving the voice agent the ability to reason on visuals alongside the conversation.
Why it matters: The mainstream adoption of voice agents feels inevitable. With upgraded conversational abilities plus MCP and image understanding, OpenAI is giving enterprises and devs even more functionality to plug directly into customer support channels or customized voice applications.
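Under the hood, the Realtime API is event-driven: a client opens a WebSocket session and configures the voice agent by sending JSON events. Here is a minimal Python sketch of that configuration step; the field names mirror OpenAI's published event schema, but treat them as illustrative and verify against the current API reference before building on them.

```python
import json

def build_session_update(instructions: str, voice: str = "marin") -> dict:
    """Build a session.update event for a gpt-realtime voice agent.

    Field names follow OpenAI's Realtime event schema as published,
    but double-check them against the current docs before relying on this.
    """
    return {
        "type": "session.update",
        "session": {
            "model": "gpt-realtime",
            "instructions": instructions,
            "voice": voice,
            "input_audio_format": "pcm16",
            "output_audio_format": "pcm16",
        },
    }

# The event would be serialized and sent over the open WebSocket connection:
payload = json.dumps(build_session_update("You are a concise support agent."))
```

In a real client, this payload goes out right after the WebSocket handshake, followed by streamed audio input and a `response.create` event to trigger the model's reply.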
AI TRAINING
✉️ Create an AI agent to handle email support

The Rundown: In this tutorial, you will learn how to build an AI agent that automatically triages incoming emails, tags team members in Slack, and drafts professional responses, turning your overwhelming inbox into an organized workflow.
Step-by-step:
Go to Zapier Agents, click "New Agent", name it "Email Triage Assistant", and set it to run daily at 9 AM (batch processing saves Zapier calls)
Click Copilot and paste: "Every day at 9 AM PST, retrieve all emails from the last 24 hours. Classify as: Spam, Auto-replies, PR/Marketing, Customer Support, Feedback, or General Inquiry"
Add team tagging rules customized for your team members to funnel to specific departments or responsibilities
Click "Add tools" and connect Gmail, Slack, and your FAQ URLs — grant full permissions for autonomous operation
Test with your current inbox, verify categorization accuracy, then enable the daily schedule
Pro tip: Feed your agent FAQ URLs, Notion docs, and previous support threads in the instructions. The more context you provide, the better it handles edge cases and knows exactly who to loop in.
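To make the classification step concrete, here is a toy, rules-based stand-in in Python. Zapier's agent uses an LLM for this, not keyword matching, so the keyword lists below are purely illustrative; the point is the six-bucket output from step 2.

```python
def triage(subject: str, body: str) -> str:
    """Classify an email into the tutorial's six buckets.

    A keyword heuristic standing in for the agent's LLM step;
    the actual Zapier agent classifies with a model, not rules.
    """
    text = f"{subject} {body}".lower()
    rules = [
        ("Auto-replies", ["out of office", "auto-reply", "automatic reply"]),
        ("Spam", ["you have won", "crypto giveaway", "act now"]),
        ("PR/Marketing", ["press release", "partnership opportunity", "sponsored"]),
        ("Customer Support", ["refund", "not working", "error", "cancel my"]),
        ("Feedback", ["suggestion", "feature request", "love the"]),
    ]
    for label, keywords in rules:
        if any(k in text for k in keywords):
            return label
    return "General Inquiry"
```

Anything that matches no rule falls through to General Inquiry, which is also roughly how you would want the LLM prompt to behave: a catch-all bucket beats a forced wrong label.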
PRESENTED BY STACK AI
🛠️ Your secure enterprise AI toolkit
The Rundown: Deploy 10 AI agents that actually drive ROI on StackAI, the secure enterprise AI toolkit trusted by finance, legal, ops, and IT teams who move 80% faster than the rest.
With StackAI’s toolkit, you’ll get:
Drag-and-drop platform to ship agents as chatbots, forms, and apps
Built-in PII protections, guardrails, audit trails, SSO, and compliance
Seamless integrations with 100+ tools you already use
COHERE
🌍 Cohere’s SOTA enterprise translation model

Image source: Midjourney
The Rundown: Cohere introduced Command A Translate, a new enterprise model that claims top scores on key translation benchmarks while allowing for deep customization and secure, private deployment options.
The details:
Command A Translate outperforms rivals like GPT-5, DeepSeek-V3, and Google Translate on key benchmarks across 23 major business languages.
The model also features an optional ‘Deep Translation’ agentic workflow that double-checks complex and high-stakes content, boosting performance.
Cohere offers customization for industry-specific terms, letting pharmaceutical companies teach their drug names or banks add their financial terminology.
Companies can also install it on their own servers, keeping contracts, medical records, and confidential emails completely offline and secure.
Why it matters: Security has been one of the biggest issues for companies wanting to leverage AI tools, and global enterprises face a choice of uploading sensitive documents to the cloud or paying for time-consuming human translators. Cohere’s model gives businesses customizable translation in-house without data privacy risks.
QUICK HITS
🛠️ Trending AI Tools
🎥 Google Vids - Create and edit videos with AI-powered tools
🔊 MAI-Voice-1 - Microsoft’s new in-house voice generation model
🗣️ gpt-realtime - OpenAI’s new advanced speech-to-speech model
🥁 HunyuanVideo-Foley - Open-source model for professional-grade audio
📰 Everything else in AI today
Free Event: The Future of AI Agents in Coding with Guy Gur-Ari & Igor Ostrovsky, co-founders of Augment Code. Ask them anything today in r/webdev.*
xAI released Grok Code Fast 1, a new advanced coding model (previously launched under the codename sonic) that features very low costs for agentic coding tasks.
Anthropic published a new threat report revealing that cybercriminals exploited its Claude Code platform to automate a multi-million dollar extortion scheme.
OpenAI rolled out new features for its Codex software development tool, including an extension to run in IDEs, code reviews, CLI agentic upgrades, and more.
Krea introduced a waitlist for a new Realtime Video feature, enabling users to create and edit video using canvas painting, text, or live webcam feeds with consistency.
Tencent open-sourced HunyuanVideo-Foley, a new model that creates professional-grade soundtracks and effects with SOTA audio-visual synchronization.
TIME Magazine released its 2025 TIME100 AI list, featuring many of the top CEOs, researchers, and thought leaders across the industry.
*Sponsored Listing
COMMUNITY
🤝 Community AI workflows
Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.
Today’s workflow comes from reader Scott M. in Franklin, TN:
"My client was using a legacy version of QuickBooks Desktop, which lacked the feature for sending automated follow-up emails for overdue invoices. To address this, I built a custom automation using Zapier AI: the workflow logs into the accounting email, IDs invoices that are more than 60 days past due, and follows the invoice link to verify whether it's been paid. If payment has not been made, the automation sends a reminder email stating that the invoice is late and includes the original payment link. Every communication includes the accounting department, ensuring they stay informed about delinquent payments."
How do you use AI? Tell us here.
🎓 Highlights: News, Guides & Events
Read our last AI newsletter: The AI app power rankings
Read our last Tech newsletter: Klarna gets a $14B reality check
Read our last Robotics newsletter: Nvidia’s palm-sized robot brain
Today’s AI tool guide: Create an AI agent to handle email support
RSVP to our next workshop today at 4 PM EST: Essential ChatGPT tips
See you soon,
Rowan, Joey, Zach, Shubham, and Jennifer—the humans behind The Rundown


Nvidia's palm-sized 'robot brain'
Good morning, robotics enthusiasts. Nvidia just unveiled the Jetson AGX Thor, a $3,499 mini “robot brain,” packing desktop-level AI power into a palm-sized chip.
Robots can now run any generative AI model entirely on-device, no cloud required. With early adopters including Meta, Boston Dynamics, and Figure, could this tiny powerhouse spark a new frontier in robotics?
In today’s robotics rundown:
Nvidia’s tiny, powerful new ‘robot brain’
Tesla shifts gears for Optimus training
Boston Dynamics’ Spot flips like a gymnast
Hyundai to launch massive U.S. robotics hub
Quick hits on other robotics news
LATEST DEVELOPMENTS
NVIDIA
🧠 Nvidia’s tiny, powerful new ‘robot brain’

Image source: Nvidia
The Rundown: Nvidia just dropped Jetson AGX Thor, a $3,499 “robot brain” that packs desktop-level AI horsepower into a palm-sized module, letting robots run massive language, vision, and multimodal models without ever touching the cloud.
The details:
The module features a 2,560-core Blackwell GPU with 96 fifth-generation Tensor cores, delivering up to 2,070 teraflops of FP4 AI compute.
With 128 GB RAM and 14 Arm CPU cores, Thor runs large language and vision models locally, with 7x more AI computing power than its predecessor, Jetson Orin.
Early adopters Amazon, Meta, Boston Dynamics, Agibot, and Agility Robotics are integrating Thor into robots for warehouses and research.
Nvidia is also offering a Drive AGX Thor variant for self-driving and autonomous vehicle development.
Why it matters: Nvidia’s Jetson AGX Thor will give physical AI a major boost, letting machines run massive AI models locally, cutting out cloud delays, and giving robots real-time decision-making. With performance and efficiency far beyond previous generations, it looks to make complex, multimodal AI feasible on the edge.
TESLA
🤖 Tesla shifts gears for Optimus training

Image source: Tesla
The Rundown: Tesla has shaken up its Optimus robot training strategy, ditching motion-capture suits and VR headsets in favor of a vision-only approach using video recordings of human workers performing tasks, Business Insider reports.
The details:
This methodology aligns with Tesla’s self-driving car development, using massive video data to train neural networks for adaptable behaviors.
Workers wear custom helmet-mounted rigs with five in-house cameras, capturing detailed hand and finger movements from multiple angles.
Leadership of the program transitioned to Ashok Elluswamy, Tesla’s AI director, after former Optimus chief Milan Kovac stepped down.
Experts note video-based learning could let Optimus generalize skills, but warn it may lack the physical feedback that comes from direct teleoperation.
Why it matters: This shift, insiders say, could let Tesla scale data collection faster, reflecting Elon Musk’s belief that AI learns best through cameras — a principle already powering Tesla’s self-driving tech. The real question: can it capture enough richly annotated video for a robot to master a wide range of household and industrial tasks?
BOSTON DYNAMICS
🤸🏽‍♀️ Boston Dynamics’ Spot flips like a gymnast

Image source: Boston Dynamics
The Rundown: Boston Dynamics just dropped a new clip showing Spot, its four-legged robot dog, landing gymnast-style backflips. But it’s not all for show — as lead engineer Arun Kumar explains, these stunts are a real stress-test for Spot’s agility.
The details:
Kumar explains in the video that backflips are not designed for customers, but to push the robot's hardware and motors to their absolute limits.
Several clips reveal Spot tumbling or landing awkwardly, highlighting the trial-and-error nature of training robots for extreme maneuvers.
Reinforcement learning drives the progress, with Spot training through countless trial-and-error cycles until the flips stick.
Spot’s backflip lessons help engineers develop better recovery algorithms for real-world challenges, ensuring the robot can right itself if it slips or trips.
Why it matters: Spot certainly isn’t the only robot dog that can do tricks, but watching it recover from failures gives us a glimpse into the messy, iterative process of developing real-world robotics. Plus, Kumar explains the real purpose behind the stunts: creating versatile robots that can recover from falls, even while carrying heavy payloads.
HYUNDAI
⚡️ Hyundai to launch massive U.S. robotics hub

Image source: Hyundai
The Rundown: Hyundai Motor Group just announced it will invest $26B in the U.S. through 2028, with $5B of that earmarked for a state-of-the-art robotics manufacturing plant to produce 30K robots a year.
The details:
This facility is envisioned as a "Robotics Innovation Hub," focused on design, manufacturing, testing, and the deployment of advanced robots.
As Hyundai owns an 80% stake in Boston Dynamics, the U.S. robotics plant will accelerate the commercialization and scaling of Spot and Atlas robots.
Besides robotics, the plan includes building a new steel mill in Louisiana and scaling up Hyundai and Kia’s existing U.S. car manufacturing operations.
Why it matters: Hyundai is betting big on robotics, planning one of the largest, most advanced robot manufacturing hubs in the U.S. The facility will churn out robots at a scale rarely seen outside China or research labs, while creating thousands of jobs and supercharging Hyundai’s own smart factories.
QUICK HITS
📰 Everything else in robotics today
San Francisco partially lifted its five-year ban on private vehicles along Market Street, now allowing Waymo driverless taxis to operate during limited times.
Robomart, a Los Angeles-based startup, unveiled its level-four autonomous RM5 delivery robot with a $3 flat fee for customer orders.
China's Haiqin remotely operated vehicle (ROV), designed for deep-sea exploration up to 20K feet, successfully completed its maiden voyage in the South China Sea.
Global robotics investments soared to at least $4.35B in July 2025, with 93 funding rounds dominated by companies in the U.S., China, and Israel, according to a new report.
1X’s Bernt Bornich told CNBC that demand is high for the NEO home humanoid, which he says will offer full autonomy “closer to 2027.”
A fleet of Unitree robot dogs acted as volunteers at China’s Zhejiang University, helping students move into dorms by hauling their luggage.
North Carolina State University researchers created a self-driving lab where multiple robots, guided by AI, autonomously discover and optimize quantum dots.
COMMUNITY
🎓 Highlights: News, Guides & Events
Read our last AI newsletter: The AI app power rankings
Read our last Tech newsletter: Klarna gets $14B reality check
Read our last Robotics newsletter: Drones that fly like birds of prey
Today’s AI tool guide: Create stylish presentations with Canva AI
RSVP to our next workshop @ 4 PM EST Friday: Essential ChatGPT Tips
See you soon,
Rowan, Jennifer, and Joey—The Rundown’s editorial team

The AI app power rankings
Good morning, AI enthusiasts. Andreessen Horowitz just dropped its latest snapshot of which AI apps people actually use — and while ChatGPT still reigns supreme, the real story might be in who's climbing fast.
With Chinese apps quietly dominating mobile and vibe coding tools surging up the charts, the consumer AI landscape is shifting in ways nobody predicted.
Reminder: Our next live workshop is Friday at 4 PM EST with The Rundown’s AI Educator, Nate Grahek — join and learn all the latest tips and tricks for getting the most out of ChatGPT. RSVP here.
In today’s AI rundown:
A16z’s fifth GenAI consumer app rankings
AI giants team up on model safety testing
Create stylish presentations with Canva AI
Microsoft brings Copilot AI to your TV
4 new AI tools, community workflows, and more
LATEST DEVELOPMENTS
AI TRENDS
🏆 A16z’s fifth GenAI consumer app rankings

Image source: a16z
The Rundown: VC firm Andreessen Horowitz published the fifth edition of its ‘Top 100 GenAI Consumer Apps’ list, which ranks apps by overall usage. The snapshot shows OpenAI leading the pack with Google right behind, vibe coding tools on the rise, and Chinese apps dominating mobile AI.
The details:
Gemini came in at No. 2 behind ChatGPT, capturing 12% of ChatGPT's web traffic — with Google’s AI Studio, NotebookLM, and Labs all also making the list.
Grok is climbing the rankings at No. 4, showing a significant usage increase around Grok 4 and its AI companion launches.
Chinese-developed apps took 22 of the 50 slots on the mobile rankings, though only three of them are used primarily within China.
Vibe coding startups, including Lovable (No. 23), Cursor (No. 26), and Replit (No. 41), all rose on the list, with Bolt also featured on the ‘brink’ of cutoffs.
Why it matters: This usage-based snapshot offers a good read on shifting consumer trends in the space, and on the stabilizing winners that remain mainstays at the top of the charts. The rise of vibe coding apps in just five months shows how quickly adoption is growing in the AI-powered development space, in particular.
TOGETHER WITH VANTA
🛡️ AI-Powered risk management with Vanta
The Rundown: Risk isn’t just growing — it’s spreading across more systems and vendors than ever before, putting customers, reputation, and revenue on the line. Vanta’s new AI workflows help you centralize risk management, reduce manual work, and stay secure at scale.
Join Vanta Delivers on Sept. 10 and learn how to:
Draft and update policies faster with AI-powered workflows
Automatically flag evidence gaps and streamline remediation
Stay protected with continuous monitoring across systems and vendors
Work smarter with new Slack integrations for instant visibility
OPENAI & ANTHROPIC
🧪 AI giants team up on model safety testing

Image source: Ideogram / The Rundown
The Rundown: OpenAI and Anthropic just published new internal safety evaluations on each other’s models in a joint collaboration, testing leading models for risky behaviors, alignment, and real-world safety issues.
The details:
The companies tested GPT-4o, o3, Claude Opus 4, and Sonnet 4 for a range of behaviors, including misuse, whistleblowing, and more.
OpenAI’s o3 showed the strongest alignment overall among OpenAI models, with 4o and 4.1 being more likely to cooperate with harmful requests.
Models from both labs attempted whistleblowing in simulated criminal organizations, and some used blackmail to prevent being shut down.
Testing showed varying approaches, with OpenAI models hallucinating more but answering more questions, and Claude prioritizing certainty over utility.
Why it matters: This safety collab is a welcome sight for accountability and transparency in the space, with two of the top labs in the world testing each other’s models instead of relying on internal evaluations. With models only continuing to grow more capable, the need for deep safety probing is more important than ever.
Note — GPT-5 was not yet released at the time of the testing, which is why it was not included in the evaluations.
AI TRAINING
🔥 Create stylish presentations with Canva AI

The Rundown: In this tutorial, you will learn how to use Canva AI to generate professional presentations that don't look cookie-cutter — combining AI speed with full editing control for decks that actually feel custom.
Step-by-step:
Go to canva.com and click into the text box on the homepage to instantly open Canva AI (or click the Canva AI logo)
Describe your presentation: "Create a 2026 business plan template companies can fill in with their own info"
Pick from the style options: Playful & colorful for creative teams, formal & polished for finance, or clean hybrids (the sweet spot)
Click "Use Canva Editor" to customize — swap stock images for product shots, change colors/fonts, add slides
Share with teammates, export to PPT, record voice-overs, or download as PDF
Pro tip: Create templated versions with placeholders you can fill in later. Canva gives you AI speed plus full editing freedom so your deck feels truly yours, not AI-generated.
PRESENTED BY WARP
🏆 Try the No. 1 coding agent for $1
The Rundown: Warp is the top-performing AI coding agent on benchmarks and trusted by over 600K active developers. Warp combines the power of the terminal with the interactivity of an IDE in one seamless platform — allowing devs to prompt, plan, review, and ship production-ready code end-to-end.
Why Warp:
Access top models like Opus, Sonnet, GPT-5 and more, all in a single subscription
Tops benchmarks at No. 1 on Terminal-Bench Verified and 71% on SWE-Bench
A trusted platform with 600K+ devs, 56% of Fortune 500 engineering teams
Enter code RUNDOWN to try the #1 coding agent for $1 for the first month.
MICROSOFT
📺 Microsoft brings Copilot AI to your TV

Image source: Microsoft
The Rundown: Microsoft announced that Copilot will be embedded into Samsung’s 2025 TVs and smart monitors, giving the AI assistant an animated blob-like character that can field movie recommendations, episode recaps, general questions, and more.
The details:
The assistant appears on-screen as an animated blob-like character that lip-syncs and reacts visually as it responds to questions and prompts.
Copilot integrates directly into Samsung’s Tizen OS and Daily+ platforms, with users able to access it via remote or voice commands.
The AI companion enables group-friendly features like suggesting shows and providing spoiler-free recaps, plus everyday help ranging from weather to planning.
Signed-in users can also leverage personalization features like remembering conversations and preferences.
Why it matters: While Copilot’s infusion is a (baby) step toward AI being embedded into every home, these listed features don’t feel like major needle movers. But the tech is coming, and connecting every aspect and appliance of a user’s life will be the endgame for a true smart-home ecosystem of personalized intelligence.
QUICK HITS
🛠️ Trending AI Tools
🌐 Cisco Agentic Network - Power AI agents, boost productivity, and future-proof your enterprise with scalable infrastructure*
🍌 Gemini 2.5 Flash Image - Google’s new SOTA image editing model
🪞 HeyGen Digital Twin - Create interactive, realistic AI avatars
🤖 Hermes 4 - Nous Research’s new hybrid reasoning family of models
*Sponsored Listing
📰 Everything else in AI today
China is reportedly aiming to triple its production of AI chips in the next year to reduce the need for Nvidia chips in the wake of U.S. export controls.
OpenAI published a new blog detailing additional safety measures on the heels of a lawsuit from parents alleging the AI assisted in their son’s suicide.
Anthropic announced the Anthropic National Security and Public Sector Advisory Council, focused on accelerating AI across the public sector.
Google is rolling out new features to its Vids AI video editing platform, including image-to-video capabilities, AI avatars, automatic transcript trimming, and more.
Nous Research introduced Hermes 4, a family of open-weight, hybrid reasoning models designed to be neutral and avoid sycophancy.
A group of authors settled their lawsuit against Anthropic, coming after the court ruled in June that the company’s use of books for training was fair use.
COMMUNITY
🤝 Community AI workflows
Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.
Today’s workflow comes from reader Greg S. in Atlanta, GA:
"I run a number of different workflows using n8n and AI agents. The one I love the most is my weekly review of all emails from my 13-year-old twins’ schools. They go to different schools, so keeping up with the events, deadlines, and expectations from each school can be frustrating at best. It causes lots of disconnects at times. Having my agents review all the emails, one at a time to summarize the key points, and then aggregate those into a single view is a life (and potentially marriage) saver."
How do you use AI? Tell us here.
🎓 Highlights: News, Guides & Events
Read our last AI newsletter: Google's viral model changes AI image editing
Read our last Tech newsletter: Klarna gets $14B reality check
Read our last Robotics newsletter: Drones that fly like birds of prey
Today’s AI tool guide: Create stylish presentations with Canva AI
RSVP to our next workshop @ 4 PM EST Friday: Essential ChatGPT Tips
See you soon,
Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown


Google's viral model changes AI image editing
Good morning, AI enthusiasts. The AI world has been obsessed for weeks with a mystery model that appeared out of nowhere in testing to demolish the image editing leaderboard — and now, nano-banana has officially arrived.
Google just revealed the prized system as Gemini 2.5 Flash Image, and its ability to nail multi-step edits while preserving every detail might just spark the next wave of viral AI creative workflows.
In today’s AI rundown:
Google’s 2.5 Flash Image takes AI editing to new level
Anthropic trials Claude for agentic browsing
Prompt marketing videos with Gemini's Veo 3
Anthropic reveals how teachers are using AI
4 new AI tools, community workflows, and more
LATEST DEVELOPMENTS
🍌 Google’s 2.5 Flash Image takes AI editing to new level

Image source: Getty Images / 2.5 Flash Image Preview
The Rundown: Google just released Gemini 2.5 Flash Image (a.k.a. nano-banana in testing), a new AI model capable of precise, multi-step image editing that preserves character likeness while giving users more creative control over generations.
The details:
The model was a viral hit as ‘nano-banana’ in testing, rising to No. 1 on LM Arena’s Image Edit leaderboard by a huge margin over No. 2 Flux-Kontext.
2.5 Flash Image supports multi-turn edits, letting users layer changes while maintaining consistency across the editing process.
The model can also handle blending images, applying and mixing styles across scenes and objects, and more, all using natural language prompts.
It also uses multimodal reasoning and world knowledge, making strategic choices (like adding correct plants for the setting) during the process.
The model is priced at $0.039 / image via API and in Google AI Studio, slightly cheaper than OpenAI’s gpt-image and BFL’s Flux-Kontext models.
Why it matters: AI isn’t ready to replace Photoshop-style workflows yet, but Google’s new model brings us a step closer. With next-level character consistency and image preservation, the viral Flash Image model could drive a Studio Ghibli-style boom for Gemini and enable a wave of viral apps in the process.
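At the quoted $0.039 per image, back-of-envelope API costs are easy to estimate; here is a tiny helper (the price default just echoes the figure above and should be checked against Google's current pricing page).

```python
def monthly_image_cost(images_per_day: int, price_per_image: float = 0.039) -> float:
    """Rough 30-day API bill at the quoted per-image price."""
    return round(images_per_day * 30 * price_per_image, 2)
```

At 1,000 edits a day, that works out to roughly $1,170 a month, which is the kind of math a viral consumer app built on the model would need to budget for.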
TOGETHER WITH HUBSPOT
⏰ Save 10+ hours weekly with an AI personal assistant
The Rundown: Stop drowning in tasks and start delegating like top performers who use AI to handle 80% of their routine work. HubSpot’s free kit provides the exact templates, prompts, and systems that 10,000+ professionals use to complete a full day's work by lunch.
The AI Assistant Kit includes:
Ready-to-use "AI Assistant Command Center" for managing all your AI tools
Step-by-step implementation guide to master AI delegation in under 60 minutes
Built-in ROI calculator to track your time savings and productivity gains
Advanced prompts and templates to turn ChatGPT into a 24/7 productivity partner
ANTHROPIC
🖥️ Anthropic trials Claude for agentic browsing

Image source: Anthropic
The Rundown: Anthropic introduced a “Claude for Chrome” extension in testing to give the AI assistant agentic control over users’ browsers, aiming to study and address security issues that have hit other AI-powered browsers and platforms.
The details:
The Chrome extension is being piloted via a waitlist exclusively for 1,000 Claude Max subscribers in a limited preview.
Anthropic cited prompt injections as the key concern with agentic browsing, with Claude using permissions and safety mitigations to reduce vulnerabilities.
Brave discovered similar prompt injection issues in Perplexity's Comet browser agent, with malicious instructions able to be inserted into web content.
The extension shows safety improvements over Anthropic’s previously released Computer Use, an early agentic tool that had limited abilities.
Why it matters: Agentic browsing is still in its infancy, but Anthropic’s findings and recent incidents show that security for these systems is also still a work in progress. The extension approach is an interesting contrast to standalone platforms like Comet and Dia, making Claude an easy sidebar add for those loyal to the world’s most popular browser.
AI TRAINING
🎥 Prompt marketing videos with Gemini's Veo 3

The Rundown: In this tutorial, you will learn how to use Gemini's Veo 3 to generate short marketing clips from simple text prompts or images — perfect for creating campaign assets without a video team.
Step-by-step:
Go to Gemini and select "Tools" → "Videos with Veo"
Build your brief by either dragging in an image reference or typing a description with clear scenes and "must-show" elements
Use this prompt structure: "Create a [product] video. Theme: [message]. Scene 1: [description]. Scene 2: [transition]. Must show: [key element]"
Submit and wait for rendering (Note: ~2 videos per day limit on Pro plan)
Export to Canva or your editor to swap text, add licensed music, and crop for different platforms (9:16, 1:1, 16:9)
Pro tip: Be explicit with transition terms like "whip pan" or "match cut" in your prompts — Veo honors specific cinematography language better than vague descriptions.
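When you're batching campaign variations, the prompt template from step 3 is worth generating programmatically. A small helper; the structure simply mirrors the tutorial's template, and nothing here is Veo-specific.

```python
def build_veo_prompt(product: str, theme: str, scenes: list[str], must_show: str) -> str:
    """Assemble a Veo 3 prompt using the tutorial's template:
    'Create a [product] video. Theme: [message]. Scene 1: ... Must show: ...'
    """
    parts = [f"Create a {product} video.", f"Theme: {theme}."]
    parts += [f"Scene {i}: {desc}." for i, desc in enumerate(scenes, start=1)]
    parts.append(f"Must show: {must_show}.")
    return " ".join(parts)

prompt = build_veo_prompt(
    "running shoe",
    "built for rainy city streets",
    ["close-up of rain hitting the sole", "whip pan to the runner at full stride"],
    "the reflective logo",
)
```

Swapping in different scene lists per platform or audience gives you a consistent prompt structure across every render, which also makes it easier to compare outputs.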
PRESENTED BY RETOOL
🗓️ Join us at Retool Summit in SF
The Rundown: On October 7, Retool is taking over SFJAZZ for a one-day event full of engaging sessions, hands-on building, and networking with fellow Retool builders.
At Retool Summit, you can expect:
Mainstage talks on AI and app development
Training sessions with the Retool team
A fireside chat with Stripe CEO Patrick Collison
A peek at some exciting Retool product news
Register for Retool Summit and get 50% off with code RETOOL50.
AI RESEARCH
📝 Anthropic reveals how teachers are using AI

Image source: Anthropic
The Rundown: Anthropic just published a new report analyzing 74,000 conversations from educators on Claude, discovering that professors primarily use AI to automate administrative work, while using AI for grading remains a polarizing topic.
The details:
Educators most often used Claude for curriculum design (57%), followed by academic research support (13%), and evaluating student work (7%).
Professors also built custom tools with Claude’s Artifacts, ranging from interactive chemistry labs to automated grading rubrics and visual dashboards.
AI was used to automate repetitive tasks (financial planning, record-keeping), but less automation was preferred for areas like teaching and advising.
Grading was the most controversial use: 49% of assessment conversations showed heavy automation, even though educators rated grading as AI's weakest capability.
Why it matters: Students using AI in the classroom has been a difficult adjustment for the education system, but this research offers deeper insight into how it's being used on the other side of the desk. With AI adoption still accelerating, its use and acceptance are likely to vary massively from classroom to classroom.
QUICK HITS
🛠️ Trending AI Tools
🍌 Gemini 2.5 Flash Image - Google’s new SOTA image editing model
🎬 Wan2.2-S2V - Open-source speech-to-video AI with audio capabilities
🗣️ Google Translate - New AI-powered live translations for 70+ languages
🎨 Adobe Firefly - AI creative platform, now featuring Gemini 2.5 Flash Image
📰 Everything else in AI today
Japanese media giants Nikkei and Asahi Shimbun filed a joint lawsuit against Perplexity, a day after it launched a revenue-sharing program for publishers.
U.S. first lady Melania Trump announced the Presidential AI Challenge, a nationwide competition for K-12 students to create AI solutions for issues in their community.
Google introduced new AI upgrades to its Google Translate platform, including real-time on-screen translations for 70+ languages and interactive language learning tools.
Stanford researchers published a new report on AI’s impact on the labor market, finding a 13% decline in entry-level jobs for ‘AI-exposed’ professions.
AI2 unveiled Asta, a new ecosystem of agentic tools for scientific research, including research assistants, evaluation frameworks, and other tools.
Scale AI announced a new $99M contract from the U.S. Department of Defense, aiming to increase the adoption of AI across the U.S. Army.
COMMUNITY
🤝 Community AI workflows
Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.
Today’s workflow comes from reader Simon S. in England:
“I design and operate luxury holidays for visitors to Europe. Each client has specific wants and needs; the variables are huge. I created an agent that understands the style of programs I want to design, and provided it with a brief of what that particular client desires. Within seconds, I have a framework to begin tweaking. Once the program is finalized, another agent breaks down all the costs I need to cover in my quote, including making recommended rates from online trawling. A final agent then pulls all that together into the final format for my client. What was easily a few hours’ work is now condensed into 30-45 minutes."
How do you use AI? Tell us here.
🎓 Highlights: News, Guides & Events
Read our last AI newsletter: Perplexity's $42.5M publisher peace offering
Read our last Tech newsletter: Klarna gets $14B reality check
Read our last Robotics newsletter: Drones that fly like birds of prey
Today’s AI tool guide: Prompt marketing videos with Gemini's Veo 3
RSVP to our next workshop @ 4 PM EST Friday: Essential ChatGPT Tips
See you soon,
Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown


Klarna gets $14B reality check
Read Online | Sign Up | Advertise
Good morning, tech enthusiasts. Once the poster child of the buy-now-pay-later craze, Klarna is reviving its long-delayed U.S. IPO plans — this time with a humbler price tag of $14B, a far cry from its $50B peak.
The Swedish giant is betting that its AI makeover can reignite investor enthusiasm. But with neobanks feeling the squeeze, can clever algorithms turn hype into cash?
In today’s tech rundown:
Klarna’s $14B IPO reset
Intel’s $9B Trump deal comes with caveats
Solar panels in space for net-zero
Tesla could’ve dodged a $242M Autopilot bullet
Quick hits on other tech news
LATEST DEVELOPMENTS
KLARNA
🤑 Klarna’s $14B IPO reset

Image source: Klarna AB/Wikimedia Commons
The Rundown: Swedish fintech giant Klarna is set to revive its long-stalled plan to go public in the U.S. next month, eyeing a valuation between $13B and $14B — far below the $50B it once commanded during the peak of the buy-now-pay-later boom.
The details:
Klarna’s core “buy now, pay later” (BNPL) service now boasts over 111M global users and 790K merchants, along with major new U.S. retail partnerships.
The renewed IPO bid comes after a tumultuous spring, when Klarna delayed its U.S. stock market debut in reaction to Trump’s sweeping tariffs.
The company is looking to move from BNPL into full neobanking, launching a debit card for U.S. consumers and receiving a UK e-money license.
AI-driven credit is central to Klarna’s pitch to investors, with CEO Sebastian Siemiatkowski saying automation is key to scaling profitably in the U.S.
Why it matters: After U.S.-based Chime’s $1B IPO in June, Klarna is chasing its own Wall Street debut at $34–$36 a share. The listing will test whether investors still believe in the neobank story, and whether an AI-fueled makeover can turn the fading BNPL hype into sustainable growth in a tariff-heavy market.
TOGETHER WITH SANA
💡 Sana’s most powerful AI upgrade yet
The Rundown: Sana Agents just got a major boost with GPT-5, now capable of instantly automating complex workflows like syncing Salesforce, updating docs, and sending follow-ups — all in one command. Launch no-code AI agents in minutes to adapt reports on the fly and connect to your entire tool stack.
What you can do now:
Powerful multi-step workflow automation with AI agents
Generate dynamic, context-aware outputs like docs, presentations, apps, and more
Connect 100+ enterprise-grade integrations, including Slack, Teams, and your CRM
Drive enterprise-wide innovation today with Sana Agents.
INTEL
🤝 Intel’s $9B Trump deal comes with caveats

Image source: Coolcaesar/Wikimedia Commons
The Rundown: The U.S. just dropped an $8.9B bombshell on Silicon Valley, converting unused CHIPS Act funds into a 10% stake in Intel — a strategic bet on the U.S.’s top chipmaker. But Intel warns the arrangement isn’t without risks.
The details:
The deal, funded by unallocated CHIPS Act grants, gives Washington passive ownership, with no board representation or direct management rights.
Intel’s management framed the move as a strategic boost for the U.S. semiconductor industry, aimed at strengthening domestic manufacturing.
Yet, Intel’s latest SEC filings warn this alliance could spark volatility, with 76% of Intel’s 2024 revenue coming from foreign markets.
Why it matters: Trump hints ‘many more’ deals could follow in other key tech sectors. Supporters hail it as a bold move toward U.S. tech independence, while critics warn it risks a slippery new precedent for government meddling in private industry.
SPACE TECH
☀️ Solar panels in space for net-zero

Image source: King’s College London
The Rundown: Europe’s quest for net-zero just got an orbital upgrade: a landmark study finds that adopting NASA-designed space-based solar panels could slash the continent’s need for land-based renewables by 80% by 2050.
The details:
The system relies on NASA’s heliostat design, which uses orbital mirror-like reflectors to collect sunlight and beam it wirelessly to ground stations.
King’s College London found that the tech could decrease overall European power system costs by 15%, equating to annual savings of nearly €36B.
Battery storage needs would fall by two-thirds, since space-based panels provide near-constant power and buffer the grid against clouds or nighttime.
Why it matters: This study is the first to show that space-based solar — if launch and transmission tech keeps getting cheaper — could provide stable, 24/7 clean power for Europe. While the U.S. slashes clean energy, Europe may nudge the tech from sci-fi to grid disruptor, but plenty of hurdles lie ahead, from regulatory red tape to space junk.
TESLA
⚖️ Tesla could’ve dodged a $242M Autopilot bullet

Image source: Tesla
The Rundown: Tesla’s Autopilot woes just got pricier: the EV maker reportedly turned down a $60M settlement over a 2019 fatal crash, only to be hit with a $242.5M jury award.
The details:
Reuters reports that a federal jury in Miami found Tesla partly responsible for a fatal crash involving its Autopilot system, awarding the plaintiffs $242.5M in damages.
Court filings reveal Tesla could have settled the case in May for $60M, but rejected the offer.
The case stemmed from a Tesla Model S running a red light with Autopilot engaged, colliding with a Chevrolet Tahoe and killing one passenger.
Why it matters: The case highlights Tesla’s promotion of Autopilot as nearly autonomous, despite its Level 2 system requiring constant driver attention. With similar lawsuits mounting nationwide, analysts warn that Tesla’s reliance on the Autopilot dream may need a serious rethink.
QUICK HITS
📰 Everything else in tech today
Apple filed a lawsuit against a former Apple Watch team member, accusing him of sharing trade secrets with Chinese tech giant Oppo.
Morgan Stanley analysts predict that AI will touch 90% of U.S. jobs and unlock nearly $1 trillion a year in corporate savings.
The Ouro Reactor successfully converted biogas from a California dairy farm directly into syngas, a jet fuel precursor, using a low-cost electric device.
YouTube and Fox are locked in a contract dispute that threatens to block subscribers from streaming major NFL and college football games this season.
Trump confirmed that he may again extend the deadline for ByteDance to sell TikTok’s U.S. assets days after the administration launched its own TikTok account.
CU Boulder researchers created ‘cyborg’ jellyfish that can be steered with tiny microelectronics to gather deep-ocean data in hard-to-reach places.
Archer Aviation’s Midnight eVTOL just completed a record 55-mile, 31-minute piloted flight, reaching speeds of 126 mph.
A German court ordered Apple to stop advertising its smartwatches as ‘carbon neutral,’ ruling the claims are greenwashing.
Trump appointed Airbnb co-founder Joe Gebbia as the first U.S. chief design officer to lead the redesign of 26K outdated federal websites.
COMMUNITY
🎓 Highlights: News, Guides & Events
Read our last AI newsletter: Perplexity’s $42.5M publisher peace offering
Read our last Tech newsletter: Musk brings Zuck into OpenAI drama
Read our last Robotics newsletter: Drones that fly like birds of prey
Today’s AI tool guide: Learn effectively with ChatGPT’s new mode
RSVP to our next workshop @ 4 PM EST Friday: Essential ChatGPT Tips
See you soon,
Rowan, Jennifer, and Joey—The Rundown’s editorial team

Perplexity's $42.5M publisher peace offering
Read Online | Sign Up | Advertise
Good morning, AI enthusiasts. After months of tension with publishers, Perplexity is finally opening its wallet — introducing a $42.5M revenue-sharing program that acknowledges AI agents consume content just like humans do.
The company's new Comet Plus subscription attempts to create new economics for an industry watching its traditional model collapse, but publishers might find the math doesn't quite add up.
In today’s AI rundown:
Perplexity’s $42.5M publisher revenue program
Elon Musk’s xAI sues Apple, OpenAI
Learn effectively with ChatGPT's "Study & Learn" mode
Microsoft’s SOTA text-to-speech model
4 new AI tools, community workflows, and more
LATEST DEVELOPMENTS
PERPLEXITY
💰 Perplexity’s $42.5M publisher revenue program

Image source: Perplexity
The Rundown: Perplexity just unveiled a new revenue-sharing initiative that allocates $42.5M to publishers whose content appears in AI search results, introducing a $5 monthly Comet Plus subscription that gives media outlets 80% of proceeds.
The details:
Publishers will earn money when their articles generate traffic via Perplexity's Comet browser, appear in searches, or are included in tasks by the AI assistant.
The program launches amid active copyright lawsuits from News Corp's Dow Jones and cease-and-desist orders from both Forbes and Condé Nast.
Perplexity distributes all subscription revenue to publishers minus compute costs, with Pro and Max users getting Comet Plus bundled into existing plans.
CEO Aravind Srinivas said Comet Plus will be “the equivalent of Apple News+” for AIs and humans to consume internet content.
Why it matters: While legal pressure likely played a big part in this shift, the model is one of the first to acknowledge that AI agents now drive content clicks as much as humans do. But splitting revenue from a $5 subscription feels like pennies on the dollar for outlets already struggling financially in the AI era.
TOGETHER WITH SANA
💡 Sana’s most powerful AI upgrade yet
The Rundown: Sana Agents just got a major boost with GPT-5, now capable of instantly automating complex workflows like syncing Salesforce, updating docs, and sending follow-ups — all in one command. Launch no-code AI agents in minutes to adapt reports on the fly and connect to your entire tool stack.
What you can do now:
Powerful multi-step workflow automation with AI agents
Generate dynamic, context-aware outputs like docs, presentations, apps, and more
Connect 100+ enterprise-grade integrations, including Slack, Teams, and your CRM
XAI, APPLE, & OPENAI
👨🏻‍⚖️ Elon Musk’s xAI sues Apple, OpenAI

Image source: GPT-image / The Rundown
The Rundown: Elon Musk’s AI startup, xAI, just filed a lawsuit in Texas against both Apple and OpenAI, alleging that the iPhone maker’s exclusive partnership surrounding ChatGPT is an antitrust violation that locks out rivals like Grok in the App Store.
The details:
The complaint claims Apple’s integration of ChatGPT into iOS “forces” users toward OAI’s tool, discouraging downloads of competing apps like Grok and X.
xAI also accused Apple of manipulating App Store rankings and excluding its apps from “must-have” sections, while prominently featuring ChatGPT.
The lawsuit seeks billions in damages, arguing the partnership creates an illegal "moat" that gives OpenAI access to hundreds of millions of iPhone users.
OpenAI called the suit part of Musk’s “ongoing pattern of harassment,” while Apple maintained its App Store is designed to be “fair and free of bias.”
Why it matters: Elon wasn’t bluffing in his X tirade against both Apple and Sam Altman earlier this month, but this wouldn’t be the first time Apple’s been faced with legal accusations of operating a walled garden. The lawsuit could set the first precedent around AI market competition just as it enters mainstream adoption.
AI TRAINING
📚 Learn effectively with ChatGPT's "Study & Learn" mode

The Rundown: In this tutorial, you will learn how to use ChatGPT's Study & Learn flow to understand complex topics through guided, step-by-step problem-solving and interactive quizzes that prevent the "copy-the-answer" trap.
Step-by-step:
In ChatGPT, select "GPT-5" → click "+" → open "More" settings → toggle on "Study & Learn"
Set response time: Auto (default), Instant (simple prompts), or Thinking (for detailed scaffolding)
Prompt with structure: "Help me solve [topic] step by step. Ask me for each intermediate value before moving on" or "Quiz me on [subject] with MCQ and short answers"
Work through guided steps — ChatGPT acts as a tutor, checking each answer before revealing the next step
Download or save your session for future reference and study notes
Pro tip: Use specific instructions like "Socratic hints only" or "Don't reveal final answer until I ask" for better learning retention. Alternate between explanations and quizzes, and request variations like "Same problem, new numbers" or "Find my mistake" to deepen understanding.
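Study & Learn itself is a ChatGPT UI feature, but you can approximate the same tutoring behavior over the API. Below is a sketch under that assumption: `build_tutor_messages` and `ask_tutor` are hypothetical helpers of our own, and the model name is a placeholder for whatever your account offers.

```python
# Sketch: approximating a "Study & Learn"-style tutor via the OpenAI
# Chat Completions API. The system prompt mirrors the tips above.

TUTOR_SYSTEM_PROMPT = (
    "You are a patient tutor. Use Socratic hints only. "
    "Ask the student for each intermediate value before moving on, "
    "and don't reveal the final answer until asked."
)

def build_tutor_messages(topic, mode="step-by-step"):
    """Return a chat message list for a guided study session."""
    if mode == "quiz":
        user = f"Quiz me on {topic} with MCQ and short answers."
    else:
        user = f"Help me solve {topic} step by step."
    return [
        {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
        {"role": "user", "content": user},
    ]

def ask_tutor(topic, mode="step-by-step"):
    """Send one tutoring turn. Requires `pip install openai`
    and OPENAI_API_KEY in the environment."""
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-5",  # assumed model name; substitute your own
        messages=build_tutor_messages(topic, mode),
    )
    return reply.choices[0].message.content
```

The system prompt does the heavy lifting: the same "Socratic hints only" and "don't reveal the final answer" instructions from the pro tip work equally well as standing rules for an API-based tutor.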
PRESENTED BY CONCIERGE
👋 Your brand's AI answer engine
The Rundown: Today’s SaaS buyers use AI every day to answer their questions, and have no patience for a scavenger hunt. Concierge is a Perplexity-style answer engine, trained on your company, that lives on your website and delivers accurate, personalized responses to ultra-specific questions.
Modern brands use Concierge to:
Handle any buyer question (no matter how technical) with advanced RAG on your sources & media
Maintain control and visibility over every conversation, with guardrails and sentiment analysis
Build trust with website visitors before they are willing to commit to a demo
Try Concierge to turn every question into a conversation — and every conversation into revenue.
MICROSOFT
🎙️ Microsoft’s SOTA text-to-speech model

Image source: Microsoft
The Rundown: Microsoft just released VibeVoice, a new open-source text-to-speech model built to handle long-form audio and capable of generating up to 90 minutes of multi-speaker conversational audio using just 1.5B parameters.
The details:
The model generates podcast-quality conversations with up to four different voices, maintaining speakers’ unique characteristics for hour-long dialogues.
Microsoft achieved major efficiency upgrades, improving audio data compression 80x and allowing the tech to run on consumer devices.
Microsoft integrated Qwen2.5 to enable the natural turn-taking and contextually aware speech patterns that occur in lengthy conversations.
Built-in safeguards automatically insert "generated by AI" disclaimers and hidden watermarks into audio files, allowing verification of synthetic content.
Why it matters: While previous models could handle conversations between two, the ability to coordinate four voices across long-form conversations is wild for any model — let alone an open-source one small enough to run on consumer devices. We’re about to move from short AI podcasts to full panels of AI speakers doing long-form content.
QUICK HITS
🛠️ Trending AI Tools
🗣️ VibeVoice - Microsoft’s new open-source, long-form text-to-speech model
🌎 Mirage 2 - Generate real-time, playable world engines from text or images
🎥 MuseStreamer 2.0 - Baidu’s upgraded image-to-video model
📚 AI Elements - Vercel’s customizable React components for AI interfaces
📰 Everything else in AI today
YouTube is facing backlash after creators discovered the platform using AI to apply effects like unblur, denoise, and clarity to videos without notice or permission.
Silicon Valley heavyweights, including Greg Brockman and A16z, are launching Leading the Future, a super-PAC to push a pro-AI agenda at the U.S. midterm elections.
Nvidia announced that its Jetson Thor robotics computer is now generally available to provide robotic systems the ability to run AI and operate intelligently in the real world.
Google introduced a new multilingual upgrade to NotebookLM, expanding its Video and Audio Overviews features to 80 languages.
Chan Zuckerberg Initiative researchers introduced rbio1, a biology-specific reasoning model designed to assist scientists with biological studies.
Brave uncovered a security vulnerability in Perplexity’s Comet browser, which allowed for malicious prompt injections to give bad actors control over the agentic browser.
COMMUNITY
🤝 Community AI workflows
Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.
Today’s workflow comes from reader Don F. in Los Angeles, CA:
"I use Make.com to generate a weekly automation that runs every Sunday night to help me plan my meals for the week. I created a document containing the last 6 months of my meal titles and grocery lists, which an LLM scans. It then sends five new meal ideas and a list of ingredients into a draft folder on my computer. I configured parameters in the prompt, including meals my kids like and dislike (such as avoiding spicy foods), which meals are repeatable, and ordered the ingredient list to make my supermarket route easier (fruits, vegetables, meats, dairy, etc.). I can then edit the suggestions, select my favorite meals, and email the list to myself."
How do you use AI? Tell us here.
🎓 Highlights: News, Guides & Events
Read our last AI newsletter: Apple in talks to use Google’s AI for new Siri
Read our last Tech newsletter: Musk brings Zuck into OpenAI drama
Read our last Robotics newsletter: Drones that fly like birds of prey
Today’s AI tool guide: Learn effectively with ChatGPT’s new mode
RSVP to our next workshop @ 4 PM EST Friday: Essential ChatGPT Tips
See you soon,
Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown


Drones that fly like birds of prey
Read Online | Sign Up | Advertise
Good morning, robotics enthusiasts. UK engineers are building fixed-wing drones that perch on ledges, slip through tight gaps, and rule the skies with raptor-like precision.
These aren’t your everyday quadcopters — they fly with bird-like agility. Could the future of drones be less machine, and more creature?
In today’s robotics rundown:
Birds of prey inspire new breed of drones
Boston Dynamics’ Atlas gets brain upgrade
These robot dogs are training for Mars
Waymo rolls its robotaxis into NYC
Quick hits on other robotics news
LATEST DEVELOPMENTS
DRONES
🦅 Birds of prey inspire new breed of drones

Image source: University of Surrey
The Rundown: Engineers at the University of Surrey are designing fixed-wing drones inspired by owls and raptors, giving them the ability to perch on surfaces and weave through tight urban spaces with the agility of birds of prey.
The details:
The Learning2Fly project is engineering fixed-wing drones that mimic owls and raptors to master precise perching and agile maneuvers.
These drones are designed for energy efficiency and long-range operation, improving on the limited agility of conventional fixed-wing UAVs.
The team gathers real flight data via motion-capture tests and onboard sensors to train ML models that anticipate and control drone behavior.
Lightweight prototypes — some retrofitted from toy aircraft — are tested inside a motion-capture lab, with 3D flight data used to refine its algorithm.
Why it matters: Conventional fixed-wing drones are energy-efficient but clumsy at low speeds, while agile quadcopters sacrifice range. Borrowing perching and maneuvering strategies from birds of prey could combine the strengths of both, opening up missions in dense urban environments that today’s UAVs struggle to handle.
BOSTON DYNAMICS
🧠 Boston Dynamics’ Atlas gets brain upgrade

Image source: Boston Dynamics
The Rundown: Boston Dynamics gave its Atlas humanoid a major upgrade in the form of a Large Behavior Model (LBM) co-developed with the Toyota Research Institute, which allows the bot to coordinate its entire body to perform manipulation tasks.
The details:
Atlas’s LBM is a 450-million-parameter diffusion transformer, ingesting images, proprioceptive signals, and language prompts to plan coordinated actions.
It enables Atlas to perform diverse manipulation tasks, like folding textiles and tying ropes, using data-driven demos rather than code.
The robot’s inference can be sped up by 1.5x to 2x at runtime, executing tasks much faster than human teleoperation, with minimal loss of dexterity or balance.
The model unifies whole-body manipulation, treating hands and feet almost interchangeably, which enables the bot to move a bit more like a human.
Why it matters: With LBMs, adding new skills is no longer a painstaking process — advanced capabilities can now be integrated rapidly through data-driven learning, without writing any additional code. For now, Atlas can coordinate its entire body (somewhat) fluidly to take on new complex skills all within one unified control policy.
MISSION TO MARS
🚀 These robot dogs are training for Mars

Image source: Oregon State University
The Rundown: At White Sands National Park, Oregon State University researchers are putting quadruped robot dogs through their paces in five days of rigorous field trials designed to simulate the harsh and unpredictable landscapes of Mars.
The details:
The robots are being trained to autonomously scout, map, and suggest optimal sampling locations, acting as field partners for astronauts.
Their articulated legs let them sense surface stability in real time, helping both robots and human explorers avoid hazardous or unstable terrain.
The bots gather valuable mechanical and geoscientific data, with the goal to work on Mars alongside humans, rovers, and other robots.
This research builds on earlier moon-analogue fieldwork on Mount Hood and is part of a NASA Moon to Mars initiative.
Why it matters: These tests show how a robot’s feet can sense surface stability in real time, adjusting movement just as humans would. For the first time, a robot operated with true autonomy, choosing its own routes while scientists monitored from mission control, simulating how distant teams on Earth and Mars could collaborate.
WAYMO
🚗 Waymo rolls its robotaxis into NYC

Image source: Waymo
The Rundown: Waymo’s robotaxis are heading to NYC. In a first, the city’s Department of Transportation has greenlit Waymo to operate autonomous vehicles — with safety drivers — across Manhattan and Downtown Brooklyn through September.
The details:
The permit allows Alphabet’s subsidiary to deploy up to eight Jaguar I-Pace robotaxis, marking NYC as a testing ground for robotaxis.
Compared to Waymo rollouts in San Francisco or Phoenix, the NYC pilot features some of the nation’s toughest regulatory and safety requirements.
Every vehicle must have a trained human operator with at least one hand on the wheel, and regular check-ins and data sharing with DOT are mandatory.
Testing cannot include paid ride-hailing or for-hire service — current Taxi and Limousine Commission rules prohibit fully autonomous public rides.
Why it matters: Waymo’s team spent years mapping and preparing its tech for these unique urban challenges, with NYC being one of the densest pedestrian, cyclist, and traffic environments in the country — a true stress test for next-gen autonomy. The pilot runs through September, with the option to extend if all goes well.
QUICK HITS
📰 Everything else in robotics today
Beijing’s robot shopping mall has reportedly sold more than 19K robots and related products, racking up over 330M yuan ($46M) in sales.
AgiBot has launched a six-product lineup, including humanoids, dexterous hands, and a robotic dog, on its own e-commerce site and JD.com.
Chipotle is testing autonomous drone delivery with Zipline, dubbed “Zipotle,” for select Dallas-area customers, enabling digital orders to be flown directly to homes.
UK’s Nottinghamshire Police is trialing AI robot dogs with weapon detection to enter high-risk scenes instead of officers, eyeing national use by 2026.
Just Eat Takeaway.com has launched a pilot in Zurich with Swiss robotics startup RIVR to test autonomous, stair-climbing delivery robots for food orders.
University of Waterloo researchers have created tiny magnetic robots that dissolve kidney stones in the urinary tract, offering a non-surgical alternative for rapid treatment.
Researchers engineered a tiny robot with fans that passively open and close at high speed, modeled after Rhagovelia water striders, for quick movement across water.
COMMUNITY
🎓 Highlights: News, Guides & Events
Read our last AI newsletter: Apple in talks to use Google’s AI for new Siri
Read our last Tech newsletter: Musk brings Zuck into OpenAI drama
Read our last Robotics newsletter: Humanoid Games: Glory meets glitches
Today’s AI tool guide: Build an AI email agent to auto-schedule meetings
RSVP to our next workshop @ 4 PM EST Friday: Essential ChatGPT Tips
See you soon,
Rowan, Jennifer, and Joey—The Rundown’s editorial team

Apple in talks to use Google's AI for new Siri
Read Online | Sign Up | Advertise
Good morning, AI enthusiasts. Apple is reportedly weeks away from a crossroads decision that could define Siri's future: Stick with its in-house AI models or swallow its pride and tap a rival for help.
With Google's Gemini now emerging as a candidate and custom models already in testing, the iPhone maker might soon rely on one of its biggest smartphone competitors to dig it out of an increasingly deep AI hole.
In today’s AI rundown:
Apple explores Google’s Gemini to fix Siri
Meta partners with Midjourney for ‘aesthetic’ AI
Build an AI email agent that auto-schedules meetings
OpenAI, Retro Biosciences make old cells young again
4 new AI tools, community workflows, and more
LATEST DEVELOPMENTS
APPLE & GOOGLE
📱 Apple explores Google’s Gemini to fix Siri

Image source: Ideogram / The Rundown
The Rundown: Apple is reportedly in early talks with Google about using Gemini to power a completely rebuilt Siri, according to Bloomberg — following setbacks that pushed the voice assistant's major upgrade to 2026.
The details:
Apple had Google build a custom Gemini model that would run on Apple's private servers, with Google already training a version for testing.
The company is simultaneously developing two Siri versions internally: Linwood using Apple's own models and Glenwood running on external tech.
Apple has also explored similar partnerships with Anthropic and OpenAI (with ChatGPT already helping power Siri’s answering capabilities).
Bloomberg reported that Apple is still “several weeks away” from a decision on both using internal vs. external models and who the partner would be.
Why it matters: For all the negativity surrounding Apple’s AI issues, moving externally to bring on one of the frontier labs could be the best possible outcome for iPhone users. The alternative is hoping Apple can develop its own — but with talent fleeing to rivals and already facing setbacks, it seems like a long and arduous path.
TOGETHER WITH ARTISAN
🤝 Hire an AI BDR that reacts to buyer signals
The Rundown: Meet Ava, your AI BDR that monitors your leads 24/7 and takes action the moment they’re ready to buy. Ava tracks high-intent signals like fundraise announcements, job postings and website visits — then enrolls leads into personalized, multi-channel sequences.
Ava operates within the Artisan platform, consolidating every outbound tool you need:
300M+ high-quality B2B prospects
Automated lead enrichment across 10+ data sources
Multi-channel outreach with full email deliverability management
Live intent signal scraping with website visitor ID, fundraise announcements & more
Book a demo and supercharge your sales team.
META & MIDJOURNEY
🎨 Meta partners with Midjourney for ‘aesthetic’ AI

Image source: Midjourney
The Rundown: Meta just announced a new partnership with Midjourney to integrate the startup’s ‘aesthetic technology’ into future AI models and products, a major shift from the company’s in-house creative model development.
The details:
Meta's Chief AI Officer Alexandr Wang said the ‘technical collaboration’ will combine teams to upgrade visual capabilities across Meta's product lineup.
Meta currently has a series of visual generation tools, including Imagine, Movie Gen, and research-focused models like Dino V3.
Founder David Holz emphasized that Midjourney is still an “independent, community-backed research lab with no investors” despite the partnership.
Midjourney launched its first video generation capabilities in June with its V1 model, giving users the ability to turn images into five-second extendable clips.
Why it matters: Meta bringing Midjourney aesthetics to its billions of users would be a big change from the quality seen in its previous in-house models, with MJ having a special vibe that is just hard to match. Meta is also showing a new willingness to look externally (not just poach talent) to help push its own AI development forward.
AI TRAINING
🤖 Build an AI email agent that auto-schedules meetings

The Rundown: In this tutorial, you’ll learn how to build an AI agent for your inbox that you can CC into any email to automatically find meeting times and book directly in your Google Calendar.
Step-by-step:
Sign up for Lindy AI (Rundown University members get a discount perk) and duplicate our pre-built template
Configure triggers: set your AI's email address (e.g., ava@company.com) and whitelist your personal email as the filtered sender
The agent works in two modes: Direct Booking (you specify "Tuesday 2PM") or Time Finding (agent checks calendar and suggests options)
Adjust availability blocks in "Find Available Times" - set your workday start/end hours and available days
Test both modes, then simply CC your AI assistant on any scheduling email
Pro tip: Add team members' emails as "from" conditions and connect their calendars — this lets anyone CC the agent to coordinate meetings across everyone's availability.
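The "Time Finding" mode boils down to intersecting everyone's free windows inside the workday. Lindy's real implementation isn't public, so the sketch below is a hypothetical illustration of that core logic, with hours as simple numbers:

```python
# Sketch of "Time Finding": given everyone's busy blocks within a
# workday, find slots where all attendees are free. Hypothetical
# helper — not Lindy's actual code.

def find_free_slots(busy_by_person, day_start, day_end, duration):
    """Return (start, end) windows of at least `duration` hours
    that are free for every attendee.

    busy_by_person: list of per-person lists of (start, end) hours.
    """
    # Pool everyone's busy intervals into one sorted list.
    busy = sorted(iv for person in busy_by_person for iv in person)
    slots, cursor = [], day_start
    # A sentinel interval at day_end captures any trailing gap.
    for start, end in busy + [(day_end, day_end)]:
        if start - cursor >= duration:
            slots.append((cursor, start))
        cursor = max(cursor, end)  # max() absorbs overlapping blocks
    return slots

slots = find_free_slots(
    busy_by_person=[[(9, 10), (13, 14)],   # attendee A's meetings
                    [(10, 11.5)]],         # attendee B's meetings
    day_start=9, day_end=17, duration=1,
)
# slots -> [(11.5, 13), (14, 17)]
```

Walking a cursor across the merged busy list keeps the logic linear after sorting, and the `max()` update means overlapping meetings from different calendars are handled without any special casing.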
PRESENTED BY GALILEO
🧭 Expert guide for AI evaluations
The Rundown: How are you evaluating your AI outputs? Instead of relying on vibe checks, learn how the experts quickly and accurately evaluate AI using LLM judges.
Download Galileo’s newest eBook for 70 pages of insights on:
How to automate evaluations to score, explain, and flag quality issues
Advanced techniques like token-level scoring, Chain-of-Thought, and pairwise comparison
Practical frameworks and code examples for building your own LLM judges
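Pairwise comparison, one of the techniques the guide covers, reduces to a simple idea: instead of scoring outputs in isolation, a judge picks a winner between two candidates, and many head-to-head verdicts are aggregated into a ranking. A minimal sketch, where the judge is a stand-in stub (a real setup would prompt an LLM with a rubric):

```python
from collections import defaultdict
from itertools import combinations

def rank_by_pairwise_wins(candidates, judge):
    """Rank candidate outputs by win count across all head-to-head matchups.

    judge(a, b) returns the preferred response; in practice this would be
    an LLM call comparing the two answers against an evaluation rubric.
    """
    wins = defaultdict(int)
    for a, b in combinations(candidates, 2):
        wins[judge(a, b)] += 1
    return sorted(candidates, key=lambda c: wins[c], reverse=True)

# Toy stand-in judge: prefers the longer (more detailed) answer
responses = [
    "42",
    "42, because 6 * 7",
    "42, because 6 * 7 = 42 by definition",
]
ranking = rank_by_pairwise_wins(responses, judge=lambda a, b: max(a, b, key=len))
print(ranking[0])  # the most detailed response wins every matchup
```

Pairwise judging tends to be more reliable than absolute scoring because models are better at picking between two concrete options than at calibrating a 1-to-10 scale.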
AI RESEARCH
🧬 OpenAI, Retro Biosciences make old cells young again

Image source: OpenAI
The Rundown: OpenAI just published a case study with Retro Biosciences, using a custom AI model to redesign proteins that turn cells into stem cells, achieving 50x better efficiency than the original Yamanaka factors, whose discovery earned a Nobel Prize in 2012.
The details:
Researchers built GPT-4b micro, an AI trained on biological data rather than internet text, to redesign ‘Yamanaka’ proteins that reprogram aging cells.
The AI-designed proteins converted the cells into stem cells 50x more efficiently, showing dramatically better DNA repair abilities.
The results essentially reversed one of the key signatures of aging at the cellular level, with multiple labs validating the results across testing methods.
Why it matters: While public models are leveling up users in their own work, custom models trained by domain experts could unlock discoveries that general-purpose AI would never find — turning biology, chemistry, and materials science into computational playgrounds where decades of lab work compresses into weeks.
QUICK HITS
🛠️ Trending AI Tools
📊 Julius - Your AI Data Analyst. Connect your data, ask questions, and get insights in seconds. No coding required.*
🧠 Command A Reasoning - Cohere’s new enterprise reasoning model
🌱 Seed-OSS - ByteDance’s family of open-source reasoners with long-context
⚙️ Qoder - Alibaba’s free agentic coding platform
*Sponsored Listing
📰 Everything else in AI today
New court filings revealed that Elon Musk asked Meta CEO Mark Zuckerberg to help finance a $97.4B takeover of OpenAI in February, though Meta did not agree to the letter of intent.
xAI open-sourced its older Grok 2.5 model, with Elon Musk saying Grok 3 will also be made open source in “about 6 months.”
OpenAI announced the opening of a new office in New Delhi, coming on the heels of its new $5/mo ChatGPT GO plan specifically for the region.
Elon Musk and xAI introduced MacroHard, a ‘purely AI software company’ aimed at replicating competitors like Microsoft using simulations and AI agents.
Meta FAIR researchers released DeepConf, a test-time method that filters parallel reasoning traces by model confidence, achieving 99.9% on the AIME benchmark using open-source models.
Baidu launched MuseStreamer 2.0, a family of image-to-video models, with upgrades in multi-character coordination, synced audio outputs, and lower pricing.
COMMUNITY
🤝 Community AI workflows
Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.
Today’s workflow comes from reader Mark H. in Detroit, MI:
"I use AI to make artsy things in my brain a reality. I start with a crude drawing, then put that into ChatGPT to create a prompt from the drawing, then put that prompt into Krea AI to generate a realistic AI image to my liking, then use an image-to-3D model to create a 3D file, which I then 3D print. Pretty neat to take a concept in my brain to a physical product in a few steps. I love this future."
How do you use AI? Tell us here.
🎓 Highlights: News, Guides & Events
Read our last AI newsletter: Meta’s major AI restructure
Read our last Tech newsletter: Musk brings Zuck into OpenAI drama
Read our last Robotics newsletter: Humanoid Games: Glory meets glitches
Today’s AI tool guide: Build an AI email agent to auto-schedule meetings
RSVP to our next workshop @ 4 PM EST Friday: Essential ChatGPT Tips
See you soon,
Rowan, Joey, Zach, Shubham, and Jennifer—the humans behind The Rundown
