Get the latest AI news, understand why it matters, and learn how to apply it in your work — all in just 5 minutes a day. Join over 2,000,000 subscribers.

AI

Exclusive: Microsoft AI launches Copilot Vision

Rowan Cheung • 7 minutes

Welcome, AI enthusiasts.
We have an exclusive for you today.

On Thursday, Microsoft launched Copilot Vision in its Edge browser — a new AI that can see your screen and talk with you in real-time as you navigate the internet.

Long story short: it’s one of the most insane products we’ve tried this year.

So we partnered up with Microsoft and Mustafa Suleyman (CEO of Microsoft AI) to chat about his unique insights, infinite memory, AI companions, AI agents, and more.


In today’s AI rundown:

  • Copilot Vision: A new era of human-computer interaction

  • How Microsoft AI is differentiating from OpenAI

  • User data privacy with Copilot Vision

  • Living amongst a co-intelligence in 10+ years

  • Memory, learning, gaming, and AI agents

EXCLUSIVE Q&A MUSTAFA SULEYMAN

MICROSOFT COPILOT

👀 Copilot Vision: A new era of human-computer interaction

Image credits: Kiki Wu / The Rundown

The Rundown: Microsoft just launched its next-generation AI assistant, Copilot Vision, which can see everything on your screen and speak back to you in real time in its Edge browser, marking a fundamental shift in how we interact with computers.

Cheung: "Can you give us a quick rundown of everything released and why this is an important moment for AI?"

Suleyman: “We’re launching Vision… and it’s really a magical experience that is quite different than any kind of AI or even general kind of computer interaction experiences that we’ve seen before.”

Suleyman added: "We are on a mission to create a true AI companion. And to me, an AI companion is one that can hear what you hear and see what you see and live life essentially alongside you."

Cheung: “When is Copilot Vision being rolled out?"

Suleyman: "It’s going to be available in Copilot Labs to paying Copilot subscribers, who will get special access to trial it, experiment with it, and give us feedback… Sometime in the early part of next year is when it will go into GA [general availability].”

Why it matters: Copilot Vision isn't just another AI feature; it's Microsoft's attempt to fundamentally transform how we interact with computers. By replacing traditional clicking and typing with voice and real-time screen understanding, Microsoft is betting that the future of AI will feel more like talking to a friend than operating a machine.

COPILOT VS CHATGPT

📎 How Microsoft AI is differentiating from OpenAI

Image credits: Kiki Wu / The Rundown

The Rundown: Microsoft is emphasizing its focus on creating a true AI companion that feels more personal and interactive, with Copilot Vision featuring emotional intelligence, Edge browser integration, and the ability to push back like a real friend.

Cheung: "Microsoft is a major investor in OpenAI, and ChatGPT has a yet-to-be-released version of a vision-like product. How is Microsoft AI differentiating itself from other competitors?"

Suleyman: "The main thing is that we're really leaning into the idea of it being a proper companion. So just the fluency of our voice and how smooth it is, how fast it is, it's very interruptible, very easy to talk to."

Suleyman added: "Putting vision inside of the browser is the next step. Edge having it [Vision], and being there with you all the time able to watch, learn, and talk to you is a really big differentiator."

Cheung: "Something that really stood out to me as well, talking with Copilot Vision, was how personable it really was. It even gave me like some sass at some points."

Suleyman: "When it occasionally pushes back on you, that's a profound moment because a true friend would do that. No one wants a sycophantic AI that just always mirrors you and always obeys you. That's not going to be interesting for very long."

Suleyman added: “If you’re really dour and sad and you slow down the pace of your words, it will bring an appropriate vibe for that. But if you’re super fast and excited and enthusiastic, it will mirror that energy.”

Why it matters: Microsoft's approach focuses on creating an AI that feels more like a friend than a tool. By building an AI that can match your energy and emotion, push back with sass occasionally, and live inside your browser, Microsoft isn't just creating another chatbot—it's reimagining AI as a digital companion that truly understands you.

PRIVACY

🔒 User data privacy with Copilot Vision

Image credits: Kiki Wu / The Rundown

The Rundown: Microsoft is addressing privacy concerns around Copilot Vision by implementing session-based data deletion, with plans to develop more sophisticated privacy infrastructure as the technology continues to evolve.

Cheung: "With any sort of powerful AI application, such as Copilot Vision, it needs copious amounts of data to kind of really be accurate and helpful. But with this amount of personal data, there's always a new set of privacy concerns for users. How is Microsoft tackling this right now? How do users know that their data is safe?"

Suleyman: "We're keeping a very open mind on this. Some users will want to keep their ephemeral session. So at the moment, Copilot Vision throws away the contents of what it has seen at the end of the session."

Suleyman added: "If we are to [add memory], it's going to need a new privacy and security infrastructure to be able to store that kind of content because it's going to be very rich. It's going to describe in immense detail, not just moments in time, but strings of activity over hours and days."

Why it matters: Microsoft is taking a privacy-first approach with Copilot Vision—defaulting to session-based data deletion after every chat. This strategy lets Microsoft test Vision’s capabilities with users now, while building the secure infrastructure needed for persistent memory features in early 2025.

AI COMPANIONS

🤖 Living amongst a co-intelligence in 10+ years

Image credits: Kiki Wu / The Rundown

The Rundown: Suleyman predicts a future where AI companions become deeply integrated into our daily lives, understanding our emotions, preferences, and daily needs — potentially becoming "a new digital species."

Cheung: "Looking forward 10 years from now, what role do you think these personal AI assistants will have in our lives?"

Suleyman: "I think of it as outsourcing a lot of the mental processing to a very reliable, highly accurate, completely interactive thought partner and companion that is going to help make me much smarter, more productive, feel more supported... it's very, very different to just using a computer in the way that we do today."

Suleyman added: "Your computer, or your AI, your Copilot, is clearly going to understand everything that you're bringing to the table—your emotional state, your intellectual state, what you need to get done that day, your interests, your hobbies, your personal knowledge graph, your family, your dislikes."

Suleyman added: “It’s going to feel…like a new digital species. It is going to feel like a member of the family.”

Why it matters: Suleyman sees AI evolving from basic tools that boost productivity into digital "family members"—understanding emotions, remembering preferences, and living alongside us. In his vision, AI companions won't just assist us with tasks, they'll become a new form of intelligence that experiences life with us.

WHAT’S NEXT

📈 Memory, learning, gaming, and AI agents

Image credits: Kiki Wu / The Rundown

The Rundown: Suleyman revealed Microsoft AI’s ambitious roadmap for Copilot—expanding beyond today's screen understanding into memory features, learning assistance, gaming integration, and agentic capabilities.

Cheung: "How deeply does Copilot Vision understand you as a user?"

Suleyman: "Memory is the key thing that is coming soon... it's important that it remembers your preferences and is able to reason over them to give you advice based on knowing you."

Cheung: "Something I'm excited for is when AI can guide me through learning new apps by controlling my screen—like when learning Photoshop."

Suleyman: "The future of Copilot Vision is definitely Copilot help, step by step, taking you through troubleshooting when you're trying to fix your computer or you're trying to learn a new piece of software."

Suleyman added: "Imagine having your Copilot talk to you about the worlds that you're building in Minecraft or hang out with you in Call of Duty... It's probably going to feel like an ever-present companion in whatever setting you're in."

Cheung: "Are there any plans for Copilot to become agentic and have the ability to take control of your computer and do regular tasks just like a human could?"

Suleyman: "We're working hard on how it navigates the browser, fills in forms, calls APIs... [with online shopping] it will populate your basket in advance. It will ask you if you want this or that, or it will find a bunch of prices and be like, 'You know, there's a better opportunity.'"

Why it matters: Microsoft's long-term goals for Copilot extend far beyond the screen and browser understanding launched this week. Suleyman hints at a roadmap where Copilot evolves into a full-on autonomous companion—learning your preferences, guiding you through software, and even gaming alongside you.

GO DEEPER

INTERVIEW

🎥 Watch the full interview live

In the full interview, Mustafa Suleyman and Rowan Cheung chat about:

  • Microsoft's vision for personalized AI companions

  • How Copilot adapts to your emotions in real-time

  • Why traditional computer interfaces will "get washed away"

  • Predictions for a future with billions of personalized AI agents

  • …and much more

Listen on YouTube, Twitter/X, Spotify, or Apple Music.

AI

OpenAI's o1 goes pro

Rowan Cheung • 6 minutes

Sign Up | Advertise | Podcast | AI University

Welcome, AI enthusiasts.

OpenAI just dropped day one of its holiday surprise series – delivering a big start with a souped-up o1 model alongside an elite (but pricey) Pro tier.

Will more powerful reasoning and unlimited access justify a hefty $200/m price tag? Let’s get into it…


In today’s AI rundown:

  • OpenAI launches full o1, new Pro mode

  • Microsoft launches Copilot Vision feature

  • Transform any lecture into a comprehensive study guide

  • Clone debuts realistic humanoid with synthetic organs

  • 5 new AI tools & 5 new AI jobs

  • More AI & tech news

Read time: 4 minutes

LATEST DEVELOPMENTS

OPENAI

🚀 OpenAI launches full o1, new Pro mode

Image source: OpenAI

The Rundown: OpenAI just released its o1 model out of preview during the first day of its ‘12 days of OpenAI’ event, alongside a new $200/m ChatGPT Pro subscription tier that includes enhanced access to the reasoning model’s most powerful features.

The details:

  • The full o1 now handles image analysis and produces faster, more accurate responses than the preview version, with 34% fewer errors on complex queries.

  • OpenAI’s new $200/m Pro plan includes unlimited access to o1, GPT-4o, Advanced Voice, and future compute-intensive features.

  • Pro subscribers also get exclusive access to 'o1 pro mode,' which features a 128k context window and stronger reasoning on difficult problems.

  • OpenAI’s livestream showcased o1 pro mode tackling complicated thermodynamics and chemistry problems after minutes of thinking.

  • The full o1 strangely appears to perform worse than the preview version on several benchmarks, though both vastly surpassed the 4o model.

  • o1 is now available to Plus and Team users immediately, with Enterprise and Education access rolling out next week.

Why it matters: OpenAI is coming out hot with its first reveal of the holiday event — with the long-awaited full o1 and Pro mode providing a nice starting point to get the hype flowing. While the new $200 tier is a steep climb from previous plans and rivals, power users will likely be more than happy to scale up for more intensive tasks.

TOGETHER WITH NORTHERN DATA GROUP

📶 Elevate AI workloads with NVIDIA’s H200

The Rundown: Northern Data Group’s Taiga Cloud now offers instant access to NVIDIA’s H200 Tensor Core GPUs — delivering enhanced performance for LLMs, big data, scientific research and more.

With the H200 GPU, you'll experience:

  • Up to 2x better LLM inference performance

  • 110x faster time to results, speeding up projects

  • 50x reduction in energy use and TCO, optimizing costs and sustainability

  • Nearly double the memory capacity compared to the H100

Pre-register today to access next-level AI compute hardware for your business.

MICROSOFT

👀 Microsoft launches Copilot Vision feature

Image source: Microsoft

The Rundown: Microsoft just launched its Copilot Vision feature, which allows its assistant to see and interact with web pages a user is browsing in Edge in real time, now available in preview to a limited number of Pro users.

The details:

  • Vision integrates directly into Edge's browser interface, allowing Copilot to analyze text and images on approved websites when enabled by users.

  • The feature can assist with tasks like shopping comparisons, recipe interpretation, and game strategy while browsing supported sites.

  • Microsoft previously revealed the feature in October alongside other Copilot upgrades, including voice and reasoning capabilities.

  • Microsoft emphasized privacy with Vision, making it opt-in only — along with automatic deletion of voice and context data after the end of a session.

Why it matters: This was one of the most insane products we’ve tried all year. The addition of real-time context and the ability for AI to ‘see‘ everything in your browser makes for a wild new form of AI that we’re likely to start seeing a lot more of in 2025.

In case you missed it, Rowan (founder of The Rundown) sat down with Microsoft AI CEO Mustafa Suleyman to discuss how Copilot Vision works, infinite memory, AI companions, agents, and more.

Watch the full interview here.

AI TRAINING

📚 Transform any lecture into a comprehensive study guide

The Rundown: Gemini’s Video Input feature allows you to convert your lecture recordings into detailed study materials with structured notes, key insights, and practice questions.

Step-by-step:

  1. Access Google AI Studio, select Gemini 1.5 Pro, and upload your lecture recording.

  2. Generate structured notes with main topics and key concepts.

  3. Create practice questions and real-world applications.

  4. Build a complete review system for effective studying.

Pro tip: Combine AI-generated study guides with class participation notes for the ultimate exam prep strategy.

We also wrote detailed prompts for premium Rundown University members that you can copy and paste from here.

PRESENTED BY VANTA

 Your checklist for AI compliance

The Rundown: Vanta’s trust management platform helps AI companies become ISO 42001 compliant — so you can build customer trust, secure more business, and differentiate from competitors.

Read Vanta's ISO 42001 compliance checklist to:

  • Understand the requirements for certification

  • Set expectations for your compliance journey

  • Streamline the process with Vanta's automated solutions

Download the checklist and take the first step toward AI compliance.

CLONE ROBOTICS

🤖 Clone debuts realistic humanoid with synthetic organs

Image source: Clone Robotics

The Rundown: Clone Robotics introduced Clone Alpha, an (extremely) humanoid robot featuring synthetic organs and water-powered artificial muscles, with 279 officially available for preorder in 2025.

The details:

  • The robot uses water-pressured "Myofiber" muscles instead of motors to move, mirroring natural movement patterns with synthetic bones and joints.

  • The company is taking orders for its first production run of 279 robots, though it has yet to publicly show a complete working version.

  • Alpha’s skills include making drinks and sandwiches, doing laundry, and vacuuming, and it can also learn new tasks through a ‘Telekinesis’ training platform.

  • The system runs on "Cybernet," Clone's visuomotor model, with four depth cameras for environmental awareness.

Why it matters: Clone Alpha is definitely a unique build (to say the least) compared to the other top humanoid robots on the market — with a more human-inspired approach allowing for more natural movement and dexterity. But until we see demos and more info released, a wait-and-see approach is probably smart before rushing to pre-order.

NEW TOOLS & JOBS

Trending AI Tools

  • 🗣️ Conversational AI by ElevenLabs - Build AI agents that speak for your website, app, or call center

  • 🫵 Pointer - AI editing co-pilot for Google Docs offering efficient, polished, real-time edits

  • 📊 Tables - Instantly transform unstructured data into actionable tables

  • 🤝 SDRx - An AI SDR that builds targeted lists, conducts account research, crafts personalized emails, and more

  • 🤖 Athina - An AI development platform to build, test, and monitor AI apps and agents

New AI Job Opportunities

  • 🏦 Hebbia - Engagement Manager, Equities

  • 💼 Kumo - Account Executive

  • 🤖 Deepmind - Software Engineer, Strategic Initiatives

  • 📋 Abridge - Senior Implementation Manager

  • 🧪 Mistral AI - QA Engineer

QUICK HITS

OpenAI’s ongoing 12-day event will include the launch of its Sora video generation model, according to a report from The Verge.

Google launched PaliGemma 2, the next-gen version of its vision-language model, which features enhanced capabilities across multiple model sizes, improved image captioning, and specialized task performance.

Elon Musk’s xAI officially secured $6B in new funding, set to help fund a reported massive expansion of its Colossus supercomputer to over 1M GPUs.

Humane introduced CosmOS, an AI operating system designed to work across multiple devices like TVs, cars, and speakers, following the negative reception of the startup’s AI pin device.

LA Times newspaper owner Soon-Shiong announced plans to implement an AI-powered 'bias meter' on news articles amid editorial board restructuring and staff protests.

Google also rolled out new Gemini 1.5 updates across Android, adding AI-powered photo descriptions in the Lookout app, Spotify integration for Gemini Assistant, and expanded phone controls and communications features.

THAT’S A WRAP

See you soon,

Rowan, Joey, Zach, and Alvaro—aka The Rundown Team

AI

OpenAI's holiday surprise

Rowan Cheung • 6 minutes


Welcome, AI enthusiasts.

The holidays are approaching, and OpenAI’s Sam Altman is prepping some shiny new AI presents for under the tree.

With a ‘12 Days of OpenAI’ event and new insights on AGI, Elon Musk, and more from Altman, the AI leader could be ready to finish 2024 with a bang. Let’s get into it…


In today’s AI rundown:

  • Altman’s DealBook insights, 12 Days of OpenAI

  • DeepMind’s Genie 2 turns images into playable worlds

  • Create professional thumbnails with Recraft

  • AI forecasting model crushes traditional weather systems

  • 5 new AI tools & 5 new AI jobs

  • More AI & tech news

Read time: 4 minutes

LATEST DEVELOPMENTS

OPENAI

🎄 Altman’s DealBook insights, 12 Days of OpenAI

Image source: NYT DealBook Summit

The Rundown: In an interview at the NYT DealBook Summit, Sam Altman touched on topics including Elon Musk, AGI, and Microsoft tension — with the OpenAI CEO also announcing a ‘12 Days of OpenAI’ stream of launches starting tomorrow.

The details:

  • Altman provided new numbers on ChatGPT’s adoption, including 300M weekly active users, 1B daily messages, and 1.3M U.S. developers on the platform.

  • The CEO also believes that AGI will arrive ‘a lot sooner than anyone expects,’ with the potential first glimpses coming in 2025.

  • While AGI may arrive sooner, Altman said the immediate impact will be subtle, but the long-term changes and the transition to superintelligence will be more intense.

  • Altman also admitted to some tension between OpenAI and Microsoft but said the companies are aligned overall on priorities.

  • He called the situation with Elon Musk “tremendously sad” but doesn’t believe Musk will use his new political power to harm AI competitors.

  • Altman revealed that OpenAI will be live-streaming new launches and demos over the next 12 days, including some ‘big ones’ and some ‘stocking stuffers.’

Why it matters: While it feels like AI still has plenty of room to run on the adoption curve, those are some staggering numbers, and Altman interviews are always a must-watch (full interview here). With ‘12 Days of OpenAI’, the AI leader could be set to end an already wild year of AI progress with fireworks.

TOGETHER WITH ARTISAN

Automate your outbound with an AI BDR

The Rundown: Artisan unifies your outbound sales tools into one platform, featuring Ava — the AI Business Development Rep who manages it all.

With Artisan, you’ll benefit from:

  • Access to 400M+ high-quality B2B prospects, including Local Business Data & E-Commerce

  • Lead enrichment using 10+ intent-driven sources, including website visitor tracking

  • Advanced personalization via LinkedIn, Twitter, and web scraping

  • Automated outbound sequences across email and LinkedIn

  • Comprehensive email and LinkedIn deliverability management tools

Book a demo today to see Artisan in action.

GOOGLE DEEPMIND

🎮 DeepMind’s Genie 2 turns images into playable worlds

Image source: Google DeepMind

The Rundown: Google DeepMind just introduced Genie 2, a large-scale foundation world model that converts single images into interactive, playable 3D environments with real-time physics, lighting effects, and player controls.

The details:

  • The model creates playable 3D environments from simple image prompts, complete with physics, lighting, and character controls that last up to a minute.

  • Genie 2 maintains spatial memory, remembering areas players have visited even when they're off-screen.

  • The system works with keyboard and mouse inputs, supporting first and third-person perspectives with 720p resolution output.

  • In testing, DeepMind's SIMA AI agent successfully navigated these generated environments, following natural language commands like "go to the red door."

  • The model can generate worlds from various image types, such as concept art and real-world photos, potentially accelerating game design prototyping.

Why it matters: Just days after World Labs’ release, DeepMind joins the world-generating party. Genie 2 offers the potential for unlimited, diverse training environments, a crucial step for developing more capable embodied AI agents, not to mention the massive implications for game prototyping and creative enhancements.

AI TRAINING

🎨 Create professional thumbnails with Recraft

The Rundown: Recraft lets you transform your basic layouts into professional thumbnails by combining images, text, and AI-generated elements all in one place.

Step-by-step:

  1. Create a free account at Recraft and click "Frame" in the top bar.

  2. Select your aspect ratio and draw your frame.

  3. Add your image and text elements exactly where you want them to appear.

  4. Select the entire frame to include all elements in the generation.

  5. Write your style prompt and click "Recraft" to transform your layout.

Pro tip: For better results, keep your text concise (3-5 words) and use specific style descriptions in your prompt.

PRESENTED BY INNOVATING WITH AI

💼 Start your career as an AI Consultant

The Rundown: Innovating with AI’s new program, AI Consultancy Project, equips AI enthusiasts with all the resources to capitalize on the rapidly growing AI consulting market – which is set to 8x to $54.7B by 2032.

The program offers:

  • Tools and framework to find clients and deliver top-notch services

  • A 6-month roadmap to build a 6-figure AI consulting business

  • Students landing their first AI client in as little as 3 days

Click here to request early access to The AI Consultancy Project.

GOOGLE DEEPMIND

🌦️ AI forecasting model crushes traditional weather systems

Image source: Ideogram / The Rundown

The Rundown: DeepMind just unveiled GenCast, an AI weather forecasting system that surpasses the accuracy of the world's leading forecasting model, producing reliable predictions for 15-day forecasts in minutes rather than hours.

The details:

  • GenCast outperformed the European Centre for Medium-Range Weather Forecasts model (ENS) on 97% of evaluation metrics for 15-day forecasts.

  • GenCast processes forecasts in just 8 minutes using a single AI chip, compared to the hours required by traditional supercomputers.

  • The model also accurately predicted extreme weather events, including tropical cyclones, heat waves, and wind conditions.

  • The system was trained on 40 years of historical weather data (1979-2018), and DeepMind open-sourced its full code for non-commercial research use.

Why it matters: AI’s prediction and data-crunching powers are being set loose on the weather — and the result is an absolute leap in how scientists will forecast both global weather and extreme events going forward. Like medicine and other data-heavy sectors, weather forecasting feels perfectly suited to be revolutionized in the AI era.

NEW TOOLS & JOBS

Trending AI Tools

  • 🗣️ Coval - Build reliable voice and chat agents faster with seamless simulation and evals

  • 🎥 Pollo AI - Create videos from text prompts, images, or videos with high resolution and quality

  • 📊 Plot - Unlock AI-powered consumer insights from social media videos

  • 📓 Pearl - AI-powered journal that visualizes and synthesizes your life to help you reflect

  • 🧠 Focu - Transform your relationship with work through AI-powered guidance, meaningful conversations, and periodic check-ins

New AI Job Opportunities

  • 📜 TaskUs - Senior Manager/Director, AI Data Operations

  • 🌊 Snorkel - Account Development Representative

  • 🚗 Waymo - Software Engineer, TaaS Infrastructure

  • 💹 DeepL - Lead Revenue Manager

  • 🤖 OpenAI - Product Manager, API Enterprise

QUICK HITS

Free event: Scott Galloway’s 2025 Predictions. Join 20,000+ for Scott’s highly anticipated unfiltered take on AI, business, and tech. RSVP for free.*

Amazon and Anthropic unveiled plans for Project Rainier, a massive AI supercomputer powered by hundreds of thousands of Trainium2 chips (5x larger than the cluster used to train Anthropic’s current top model) that promises to be the world's largest AI system.

OpenAI announced that it’s hiring three prominent Google DeepMind computer vision experts, with Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai set to work on multimodal AI and build out the company’s new Zurich, Switzerland office.

Luma AI revealed Ray 2, a next-gen video model capable of producing minute-long videos in just seconds, which was announced during Amazon’s AWS event alongside a new partnership to bring the model to the Amazon Bedrock platform.

Defense tech firm Anduril announced a new strategic partnership with OpenAI to develop AI-powered aerial defense systems to protect U.S. and allied forces from drone threats.

Spotify launched its annual viral ‘Wrapped’ user listening recap, this year adding an AI-powered podcast built with Google’s NotebookLM tool, featuring music commentary from two AI hosts.

EvolutionaryScale introduced ESM Cambrian, a powerful new protein language model family trained on Earth's protein sequences, achieving breakthrough performance in protein structure prediction.

*Sponsored listing

THAT’S A WRAP

See you soon,

Rowan, Joey, Zach, and Alvaro—aka The Rundown Team

AI

Amazon's new AI arsenal

Rowan Cheung • 6 minutes


Welcome, AI enthusiasts.

Amazon just unveiled its new Nova family of AI models spanning text, image, and video with capabilities that put them in the conversation with industry leaders.

With a massive customer base, unlimited resources, and now competitive new models, the retail giant may be ready to climb the AI ladder. Let’s get into it…


In today’s AI rundown:

  • Amazon releases Nova AI model family

  • Tencent unveils powerful open-source video AI

  • Build web apps without code using AI

  • Exa introduces AI database-style web search

  • 5 new AI tools & 5 new AI jobs

  • More AI & tech news

Read time: 4 minutes

Important: The Rundown is slowly being sent from a new email address. To ensure you get our newsletter, please add news@daily.therundown.ai to your contact list.

LATEST DEVELOPMENTS

AMAZON

🚀 Amazon releases Nova AI model family

Image source: Amazon

The Rundown: Amazon just announced Nova, a new family of AI models with text, image, and video generation capabilities, marking the retail giant’s biggest push into the consumer GenAI space.

The details:

  • The Nova lineup includes four text models of varying capabilities (Micro, Lite, Pro, and Premier), plus Canvas (image) and Reel (video) models.

  • Nova Pro is competitive with top frontier models on benchmarks, edging out rivals like GPT-4o, Mistral Large 2, and Llama 3 in testing.

  • The text models feature support across 200+ languages and context windows reaching up to 300,000 tokens — with plans to expand to over 2M in 2025.

  • Amazon’s Reel model can generate six-second videos from text or image prompts, and in the months ahead, the length will expand to up to two minutes.

  • Amazon also revealed that speech-to-speech and “any-to-any” modality models will be added to the Nova lineup in 2025.

Why it matters: Amazon got what feels like a later start in the AI race, but this release is the company’s biggest play yet. With a massive customer base, a near-unlimited war chest, and now highly competitive models, the retail giant could be a dark horse contender to quickly climb the AI power ladder.

TOGETHER WITH DYNAMIQ

🦾 Build AI agents in hours

The Rundown: Dynamiq is a secure, end-to-end platform that simplifies building, testing, and deploying AI agents – reducing development time to just hours.

With Dynamiq, you can:

  • Easily build custom AI agents with a low-code interface in under 1 hour

  • Orchestrate single and multi-agent applications for autonomous task execution

  • Deploy agentic applications on-premise, in the cloud, or in hybrid setups

Try Dynamiq free for 30 days with code RUNDOWN30.

TENCENT

🤖 Tencent unveils powerful open-source video AI

Image source: Tencent

The Rundown: Tencent just released HunyuanVideo, a new open-source, open-weights, 13B parameter AI video generation model that beats top closed rivals in testing — with the release also making it the largest publicly available model of its kind.

The details:

  • HunyuanVideo ranked above commercial competitors like Runway Gen-3 and Luma 1.6 in testing, particularly in motion quality and scene consistency.

  • In addition to text-to-video outputs, the model can also handle image-to-video, create animated avatars, and generate synchronized audio for video content.

  • The architecture combines text understanding, visual processing, and advanced motion to maintain coherent action sequences and scene transitions.

  • Tencent released HunyuanVideo’s open weights and code, making the model readily available for both researchers and commercial uses.

Why it matters: An open-source, open-weights video model is now as good (or better) than the top closed options, providing a wildly impressive foundation to build on. AI video is having a moment, and it’s hard to imagine how good these models will be in 2025, given the acceleration we are already seeing.

AI TRAINING

🚀 Build web apps without code using AI

The Rundown: Windsurf lets you create web applications with AI assistance, eliminating the need for manual coding.

Step-by-step:

  1. Create a new project using ‘npx create-next-app@latest your-app-name’.

  2. Open Cascade AI assistant in your Windsurf app project sidebar.

  3. Describe your desired features in natural language.

  4. Preview and refine through conversation with AI.

Pro tip: Be specific in your feature requests - the clearer your description, the better the result. For a more detailed workflow of Windsurf, you can access a full workshop here.

PRESENTED BY IBM

💻 IBM’s Most Compact AI Models Target Enterprises

The Rundown: Meet the third generation of IBM Granite: new open, compact, and efficient 2B and 8B language models.

Designed to give enterprises more ways to embed and scale AI in their businesses, these new 2B and 8B compact models are:

  • Trained with carefully curated data

  • Cost-efficient

  • Designed to run high-performance solutions

Learn more about how these models can help transform enterprise AI adoption. 

EXA

🔍 Exa introduces AI database-style web search

Image source: Exa

The Rundown: Search startup Exa just launched Websets, a new search engine that uses embedding technology from large language models to transform the chaotic web into a structured database, aiming to create the ‘perfect web search.’

The details:

  • Unlike traditional keyword-based search engines, Exa encodes webpage content into embeddings that capture meaning rather than just matching terms.

  • The company has processed about 1B web pages, prioritizing depth of understanding over Google's trillion-page breadth.

  • Searches can take several minutes to process but return highly specific results lists spanning hundreds or thousands of entries.

  • The platform excels at complex searches, such as finding specific types of companies, people, or datasets that traditional search engines struggle with.

  • Websets is Exa’s first consumer-facing product, with the company also providing backend search services to enterprises.
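
The embedding idea in the first bullet can be sketched in a few lines of Python: documents and queries become vectors, and “matching” is geometric closeness rather than keyword overlap. The tiny hand-made vectors below stand in for real model embeddings; this is an illustrative sketch, not Exa’s implementation.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Tiny hand-made vectors standing in for real model embeddings.
docs = {
    "seed-stage robotics startups": [0.9, 0.1, 0.0, 0.2],
    "best pizza recipes":           [0.0, 0.8, 0.5, 0.1],
    "industrial automation firms":  [0.8, 0.2, 0.1, 0.3],
}
query = [0.85, 0.15, 0.05, 0.25]  # e.g. the query "robotics companies"

# Rank documents by semantic closeness rather than keyword overlap.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
```

With real embeddings, the same ranking loop surfaces semantically related pages even when they share no keywords with the query.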

Why it matters: While others race to weave AI models into classic search engines, Exa is rethinking search from the ground up. Though currently slower than normal search, this database-style approach could revolutionize how we find and organize web info — especially for surfacing deeper, specific patterns across the internet.

NEW TOOLS & JOBS

Trending AI Tools

  • 👨‍💻 Supabase - A global AI assistant with developer capabilities like Postgres schema design, data queries, charting, and error debugging

  • 🕒 Realtime AI - Keep users updated with real-time task progress and the ability to stream LLM responses directly to a frontend

  • 📣 Superads - AI-powered analytics for marketers and creative teams

  • 🤝 Roster - Hiring platform for content creators with AI-powered matchmaking

  • ❤️ Hypelist - Discover AI-personalized recommendations of places, movies, books, and everything you love

New AI Job Opportunities

  • 🖥️ xAI - AI Coding Tutor

  • 📦 Shield AI - Global Shipping & Receiving Specialist

  • 🔬 OpenAI - Sourcer, AI Research

  • 📊 Tempus - Junior Data Abstractor

  • 🌟 Notable - Head of Customer Success

QUICK HITS

ElevenLabs unveiled Conversational AI, a new tool that allows users to seamlessly add voice capabilities in 31 languages to AI agents with features like ultra-low latency, LLM flexibility, advanced turn-taking, and more.

Google announced that its VEO video generation model is now available in private preview on the company’s Vertex AI platform with its Imagen 3 text-to-image model launching to all users in the next week.

OpenAI appointed former Coinbase CMO Kate Rouch as its first Chief Marketing Officer to lead marketing efforts for both consumer and enterprise products.

Hailuo AI introduced I2V-01-Live, a new AI image-to-video model that brings 2D illustrations to life with smooth motion.

Amazon released Automated Reasoning checks on its Bedrock platform to combat AI hallucinations by validating responses against customer-provided data, alongside new Model Distillation and multi-agent collaboration features.

Meta posted a new blog detailing the company’s 2024 election integrity efforts, revealing that less than 1% of fact-checked misinformation involved AI content.

European defense tech startup Helsing unveiled its HX-2 attack drone, featuring AI-enabled autonomous capabilities — with plans to mass produce tens of thousands annually at lower costs than existing systems.

THAT’S A WRAP

See you soon,

Rowan, Joey, Zach, and Alvaro—aka The Rundown Team

AI

Step inside any image with AI

Rowan Cheung • 5 minutes

Sign Up | Advertise | Podcast | AI University

Welcome, AI enthusiasts.

The barrier between images and reality just got thinner with World Labs revealing tech that transforms static images into explorable 3D spaces.

Creating immersive digital environments is about to become as easy as taking a photo, and it could change how we think about virtual spaces forever. Let’s get into it…


In today’s AI rundown:

  • World Labs unveils explorable AI-generated worlds

  • OpenAI weighs ChatGPT advertising push

  • Bring characters to life with AI videos

  • Hume releases new AI voice customization tool

  • 5 new AI tools & 5 new AI jobs

  • More AI & tech news

Read time: 4 minutes

LATEST DEVELOPMENTS

WORLD LABS

🌎 World Labs unveils explorable AI-generated worlds

Image source: World Labs

The Rundown: ‘Godmother of AI’ Fei-Fei Li’s startup World Labs just revealed its first major project — an AI system that can transform any image into an explorable, interactive 3D environment that users can navigate in real-time through a web browser.

The details:

  • The system generates complete 3D environments beyond what's visible in the original image, maintaining consistency as users explore.

  • Users can freely move and look around a small area of the generated spaces using standard keyboard and mouse controls.

  • The tech also features real-time camera effects like depth-of-field and dolly zoom, plus interactive lighting and animation sliders to manipulate scenes.

  • The system works with photos and AI-generated images, allowing creators to combine it with everything from text-to-image tools to famous works of art.

Why it matters: World Labs' approach of generating actual explorable 3D environments opens up entirely new possibilities for areas like games, films, virtual experiences, and creative workflows. In the very near future, creating sophisticated worlds will be as accessible as generating images is today.

TOGETHER WITH HUBSPOT

💰 Turn AI into your income engine

The Rundown: HubSpot’s new “200+ AI-Powered Income Ideas” free guide offers actionable strategies to turn artificial intelligence into your own personal revenue generator — unlocking a gateway to financial innovation in the digital age.

With this guide, you can:

  • Explore hundreds of revenue-generating ideas across industries with real-world applications

  • Follow simple, step-by-step instructions that make AI accessible to everyone

  • Adopt cutting-edge strategies to keep you ahead in today's fast-paced market

Access your free guide here.

OPENAI

📢 OpenAI weighs ChatGPT advertising push

Image source: Grok

The Rundown: OpenAI is reportedly exploring the introduction of advertising into its AI products as it seeks new revenue streams, with CFO Sarah Friar confirming the company is evaluating an ads model despite previous hesitation from leadership.

The details:

  • OpenAI has quietly hired key execs from Meta and Google for an advertising team — including former Google search ads leader Shivakumar Venkataraman.

  • While bringing in $4B annually from subscriptions and API access, OpenAI faces over $5B in yearly costs from developing and running its AI models.

  • OpenAI executives are reportedly divided on whether to implement ads, with Sam Altman previously speaking out against them and calling it a ‘last resort.’

  • Despite her initial comments about weighing ad implementation, Friar clarified there are "no active plans to pursue advertising" yet.

Why it matters: While ad integration may offset some massive AI development costs, it can also be a slippery slope (like Google’s over-saturation of promoted results). Depending on the implementation, having ads within models could also change the relationship between the user and AI and the ‘trust’ of the outputs.

AI TRAINING

🎥 Bring characters to life with AI videos

The Rundown: Rendernet now lets you create life-like videos with character consistency, perfect for storytelling and product marketing.

Step-by-step:

  1. Visit RenderNet AI and create a free account.

  2. Select ‘Create New’ in the main dashboard and follow the steps to create your character.

  3. Choose the character you created and type a prompt describing the scene and the character’s motion.

  4. Click ‘Generate Video’ to create your character video.

Pro tip: To have more control over the video, you can also generate an image of your character and convert it to video using the Video Anyone feature in Studio.

PRESENTED BY SECTION

🔮 2025 decoded: Scott Galloway's annual forecast

The Rundown: Section invites you to join NYU professor Scott Galloway's most anticipated event of the year — delivering his annual unfiltered takes on what's in store for AI, business, and tech in 2025.

Join 20k+ others on Dec. 12 to hear:

  • Scott’s raw, unfiltered predictions for the New Year

  • The trends he predicted accurately (and missed) in 2024

  • Key trends to navigate as we head into 2025

RSVP for free and secure your spot today. 

HUME

🗣️ Hume releases new AI voice customization tool

Image source: Hume

The Rundown: Hume AI just launched Voice Control, a new feature allowing developers to create consistent, custom AI voices by adjusting 10 intuitive sliders.

The details:

  • The system features 10 adjustable dimensions, including gender, assertiveness, confidence, and enthusiasm, that can be modified through a slider interface.

  • Rather than selecting from preset options, creators can make precise, continuous adjustments that remain consistent across different use cases.

  • Voice Control also isolates each voice characteristic, allowing users to adjust individual traits without impacting other qualities.
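
Conceptually, a voice under this model is just a point in a small continuous space, with one axis per trait. The sketch below uses a handful of the dimension names mentioned above with an assumed [-1, 1] slider range; it is an illustration of the slider idea, not Hume’s actual API.

```python
# Illustrative slider-based voice spec. Dimension names come from the
# article; the [-1, 1] range and this interface are assumptions.
DIMENSIONS = ("gender", "assertiveness", "confidence", "enthusiasm")

def make_voice(**sliders):
    """Build a voice spec; each slider is a continuous value in [-1, 1]."""
    unknown = set(sliders) - set(DIMENSIONS)
    if unknown:
        raise ValueError(f"unknown dimensions: {unknown}")
    # Unset dimensions default to 0 (neutral); values are clamped, and each
    # trait is stored independently, so adjusting one leaves the rest alone.
    return {d: max(-1.0, min(1.0, sliders.get(d, 0.0))) for d in DIMENSIONS}

voice = make_voice(assertiveness=0.8, enthusiasm=1.5)
```

Because the values are continuous rather than preset picks, the same spec can be reused verbatim across different generations to keep a voice consistent.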

Why it matters: The future of AI speech isn't cloning—it's personalization. Creating custom voices will be as easy as creating a character in a video game, and tools like this could revolutionize how we think about AI speech development for use cases such as brand voices, NPCs in games, audiobook narration, and more.

NEW TOOLS & JOBS

Trending AI Tools

  • 💰Vela OS - Invest in startups with AI agents and an AI-native OS

  • 🎤 ACE Studio - AI workstation to generate studio-quality singing vocals

  • 🗣️ Voiser AI - Transcribe, summarize, and translate videos and recordings

  • ⚙️ Boost.Space 4.0 - Buy and sell AI-powered workflows and connect seamlessly with 2,000+ tools

  • 🤖 AgentPlace - Create AI-driven websites and apps through simple text instructions

New AI Job Opportunities

QUICK HITS

Cohere released Rerank 3.5, a new AI search model featuring enhanced reasoning capabilities, support for 100+ languages, and improved accuracy for enterprise data searching across documents, code, and more.

The Browser Company teased Dia, a new AI-integrated smart web browser with demos that included agentic actions, natural language prompting in the command bar, and built-in AI writing and search tools.

The U.S. Commerce Department unveiled new chip restrictions targeting China’s AI advances, blacklisting 140 entities and expanding controls on high-bandwidth memory chips.

AI chip startup Tenstorrent secured a $700M funding round led by Samsung and backed by Jeff Bezos, valuing the Nvidia competitor at $2.6B.

Nous Research launched a new distributed AI training effort, pre-training a 15B parameter language model over the internet and live-streaming the process.

Amazon Web Services announced major data center upgrades to support next-gen AI chips and genAI workloads, including new liquid cooling systems and improved electrical efficiency.

THAT’S A WRAP

See you soon,

Rowan, Joey, Zach, and Alvaro—aka The Rundown Team

AI

Musk battles OpenAI's for-profit push

Rowan Cheung • 5 minutes

Sign Up | Advertise | Podcast | AI University

Welcome, AI enthusiasts.

The Elon Musk and OpenAI legal drama is back for Round 4 — and this time, the billionaire is adding Microsoft and former board members to the mix.

Will these latest allegations put a roadblock in the path of OpenAI’s transformation into a for-profit powerhouse? Let’s get into it…


In today’s AI rundown:

  • Musk seeks to block OpenAI’s for-profit transition

  • DeepMind proposes ‘Socratic learning’ for AI self-improvement

  • How to connect Claude to the Internet

  • Adobe unveils AI-powered sound generation system

  • 5 new AI tools & 5 new AI jobs

  • More AI & tech news

Read time: 4 minutes

LATEST DEVELOPMENTS

ELON MUSK & OPENAI

🙅🏻‍♂️ Musk seeks to block OpenAI’s for-profit transition

Image source: Midjourney / The Rundown

The Rundown: Elon Musk just filed a preliminary injunction to stop OpenAI’s planned transition to a fully for-profit business structure, escalating the ongoing legal battle and marking the fourth legal action from the former co-founder and AI rival.

The details:

  • The injunction seeks to prevent OpenAI from converting its structure and transferring assets to preserve the company’s original ‘non-profit character.’

  • Multiple parties are targeted, including OpenAI, Sam Altman, Microsoft, and former board members — citing improper sharing of competitive information.

  • The action also points to OpenAI’s ‘self-dealing,’ such as using Stripe as its payment processor, in which Altman has ‘material financial investments.’

  • Musk also alleges that OpenAI has discouraged investors from backing its competitors like xAI through restrictive investment terms.

  • OpenAI called Musk’s fourth legal action a “recycling of the same baseless complaints” and “without merit.”

Why it matters: The Musk and OpenAI saga continues. While Elon’s actions have been viewed as vindictive in the past, OpenAI’s convoluted structure and intertwined dealings have drawn plenty of scrutiny of their own. With the AI leader rushing to secure its $150B+ valuation and close investments, the injunction could throw a wrench in its plans.

TOGETHER WITH PROJECT IDX

🚀 Boost your development with Cloud + AI

The Rundown: Project IDX offers a complete cloud development environment integrated with Google's Gemini AI — all directly within your browser.

With Project IDX, you get:

  • A Linux-based virtual machine that you can access anytime, anywhere

  • Native apps in browser to build and test Flutter/React Native apps with a built-in Android Emulator

  • Pre-loaded environments to begin coding instantly with popular frameworks

Get up to 5 workspaces and accelerate your development today.

GOOGLE DEEPMIND

🧠 DeepMind’s ‘Socratic learning’ for AI self-improvement

Image source: Midjourney / The Rundown

The Rundown: Google DeepMind researchers just introduced a framework called ‘Boundless Socratic Learning’ that could enable AI systems to improve themselves through language-based interactions without requiring external data or human feedback.

The details:

  • The approach relies on ‘language games,’ structured interactions between AI agents that provide learning opportunities and built-in feedback mechanisms.

  • The system generates its own training scenarios and evaluates its performance through game-based metrics and rewards.

  • The researchers outline three levels of AI self-improvement: basic input/output learning, game selection, and potential code self-modification.

  • This framework could enable open-ended improvement beyond an AI's initial training, limited only by time and compute resources.
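
The ‘language game’ loop described above can be caricatured in a toy example: an agent proposes edits to its own best answer, and the game’s built-in metric, not external data or human feedback, decides what to keep. The number-guessing game below is purely illustrative and is not DeepMind’s implementation.

```python
from itertools import cycle

def score(guess, target=42):
    # The game's built-in feedback mechanism: closer to the target is better.
    return -abs(guess - target)

def socratic_loop(rounds=200):
    # Toy "language game": the agent proposes edits to its own best answer
    # and keeps a candidate only when the game's metric improves. No external
    # data or human feedback is consulted at any point.
    moves = cycle([-3, -1, 1, 3])
    best = 0
    for _ in range(rounds):
        candidate = best + next(moves)
        if score(candidate) > score(best):
            best = candidate
    return best

result = socratic_loop()
```

The interesting property is the one the framework banks on: improvement is bounded only by how many rounds you run, not by how much labeled data you have.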

Why it matters: The top AI labs all talk about models eventually training themselves — and this framework outlines a blueprint for how systems can continue improving without human intervention even after initial training. The challenge will be maintaining alignment with human goals as models begin handling their own self-improvement.

AI TRAINING

🌐 How to connect Claude to the Internet

The Rundown: Anthropic’s new Model Context Protocol (MCP) lets you connect Claude to the internet using the Brave Search API, enabling real-time information access and up-to-date responses.

Step-by-step:

  1. Download the latest Claude desktop app and create an account.

  2. Get your free Brave Search API key (2,000 monthly requests).

  3. Configure Claude's config_file with your API key (you can copy/paste the code from here).

  4. Restart and test with a current events query.
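
For step 3, the Brave Search server entry generally takes the shape below, added to Claude Desktop’s claude_desktop_config.json. The file name and the @modelcontextprotocol/server-brave-search package follow Anthropic’s public MCP examples at the time of writing; treat the exact keys as assumptions and prefer the snippet linked in step 3 if they differ.

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "YOUR_BRAVE_API_KEY"
      }
    }
  }
}
```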

Pro tip: Always append "search the internet" to queries requiring real-time information for best results!

PRESENTED BY ASSEMBLY AI

🗣️ Tackle complex conversations with ease

The Rundown: Universal-2 takes on the toughest challenges in conversational data, delivering unmatched accuracy and actionable insights.

Build AI applications that can:

  • Capture and act on real-time insights from every conversation

  • Automate workflows and generate structured, actionable data

  • Transcribe technical phrases like “Q4 revenue target $3M” with precision

Start building smarter AI-driven applications today with Universal-2 Speech AI. 

ADOBE

🧪 Adobe unveils AI-powered sound generation system

Image source: Adobe

The Rundown: Adobe researchers just revealed MultiFoley, an AI system that automatically generates synchronized post-production sound effects for videos through text prompts, reference audio, or existing sound clips.

The details:

  • The system produces high-quality 48kHz audio that syncs with on-screen action to within 0.8 seconds.

  • MultiFoley was trained on a combined dataset of both internet videos and professional sound effect libraries to enable full-bandwidth audio generation.

  • Users can transform sounds creatively — for example, turning a cat's meow into a lion's roar — while still maintaining timing with the video.

  • MultiFoley achieves higher synchronization accuracy levels than previous models and rates significantly higher across categories in a user study.

Why it matters: While the quirky videos of Foley artists using all sorts of items to craft custom audio are a wild part of video production, AI’s time in professional sound design is here. Creating custom, synced soundtracks and effects is about to be as easy as typing to a chatbot — opening entirely new possibilities for creative workflows.

NEW TOOLS & JOBS

Trending AI Tools

  • 🎁 Confetti - AI-powered gifting to close more deals effortlessly*

  • 📣 Muku AI - AI influencer agency that creates UGC video ads with AI avatars

  • ⚙️ DataFuel - Turn websites into LLM-ready data and scrape entire knowledge bases in a single query

  • 💼 Elastyc - Match talent with job roles in seconds with AI

  • 🗣️ Kroto - Record and translate video guides in 60+ languages with AI

New AI Job Opportunities

  • 🏢 Meta - Business Development Manager, AI Partnerships

  • 🧠 OpenAI - Research Engineer, ChatGPT RLHF

  • 💼 Superannotate - Vice President of Sales

  • 📊 Waymo - Finance Operations Manager, Procure to Pay

  • 📈 Dataiku - Senior Data Scientist

    *Sponsored listing

QUICK HITS

AI image startup Black Forest Labs is reportedly in discussions to raise a $200M funding round led by A16z, which would value the four-month-old company at over $1B.

A group of Canadian media giants filed a joint lawsuit against OpenAI, alleging copyright infringement of news content used to train the company’s AI models.

Meta reportedly plans to build a $10B subsea cable system spanning 40,000+ kilometers to support growing internet traffic and AI initiatives.

OpenAI Policy Frontiers lead Rosie Campbell announced her departure from the company, citing ‘unsettling shifts’ and culture losses within the startup.

A new study from WIRED found that over half of longer English posts on LinkedIn are now AI-generated, aligning with the platform’s increasing embrace of AI writing tools.

A new AI-powered Death Clock app predicts individual death dates using data from longevity studies with 53M participants, combined with information about a user’s diet, exercise, and stress levels.

THAT’S A WRAP

See you soon,

Rowan, Joey, Zach, and Alvaro—aka The Rundown Team

AI

Amazon's new AI model, codenamed 'Olympus'

Rowan Cheung • 4 minutes

Sign Up | Advertise | Podcast | AI University

Welcome, AI enthusiasts.

Amazon is reportedly preparing to launch ‘Olympus’, its AI model that aims to compete with other models by focusing on advanced video analysis.

After staying quiet in the AI race and investing $8B in Anthropic, is the tech giant finally ready to make some noise? Let’s get into it…


In today’s AI rundown:

  • Amazon develops AI model codenamed Olympus

  • Tesla's Optimus gets major hand upgrade

  • Create an AI agent with Internet access

  • 5 new AI tools & 4 new AI jobs

  • More AI & tech news

Read time: 4 minutes

Friendly reminder: We’re moving to a new email sending address. Make sure to add news@daily.therundown.ai to your contacts to ensure you keep receiving our emails.

LATEST DEVELOPMENTS

AMAZON

⚡️ Amazon develops AI model codenamed Olympus

Image source: DALL-E 3 / The Rundown

The Rundown: Amazon has reportedly developed a new AI model codenamed Olympus, focusing on advanced video and image processing capabilities — with a potential release slated as early as next week.

The details:

  • The model reportedly excels at detailed video analysis, able to track specific elements like a basketball's trajectory or underwater drilling equipment issues.

  • While reportedly less sophisticated than OpenAI’s and Anthropic’s models in text generation, Olympus aims to compete through specialized video processing and competitive pricing.

  • This development comes despite Amazon's recent $8 billion investment in Anthropic, suggesting a dual strategy of partnership and in-house AI development.

  • Amazon’s Olympus model was first spotted by The Rundown over a year ago, marking a long development cycle.

Why it matters: Amazon has been suspiciously quiet in the AI race — but it looks like they’re finally preparing to make some serious noise. By focusing on video analysis capabilities, Amazon is targeting a relatively untapped market segment that could appeal to sports analytics, media companies, and more.

TOGETHER WITH INNOVATING WITH AI

💼 Start your career as an AI Consultant

The Rundown: Innovating with AI’s new program, AI Consultancy Project, equips AI enthusiasts with all the resources to capitalize on the rapidly growing AI consulting market – which is set to 8x to $54.7B by 2032.

The program offers:

  • Tools and frameworks to find clients and deliver top-notch services

  • A 6-month roadmap to build a 6-figure AI consulting business

  • Students landing their first AI clients in as little as 3 days

Click here to request early access to The AI Consultancy Project.

TESLA

🤖 Tesla's Optimus gets major hand upgrade

The Rundown: Tesla just showcased its humanoid robot Optimus with an upgraded hand, demonstrating new capabilities like catching balls in real-time — a significant leap in robotic dexterity.

The details:

  • The new hand-forearm system includes 22 degrees of freedom in the hand and 3 in the wrist/forearm, doubling previous capabilities.

  • All actuation mechanisms have been moved to the forearm, though this has also increased its weight.

  • The Tesla Optimus team is working on integrating extended tactile sensing, fine tendon controls, and reducing forearm weight by year-end.

  • While the demo was tele-operated (remote controlled), achieving smooth and accurate tendon control represents a complex engineering achievement.

Why it matters: More robotic dexterity brings us closer to humanoid robots that can perform human-like tasks with precision. Catching a ball might seem simple, but achieving this level of hand coordination in a robot requires an incredible amount of hardware engineering.

AI TRAINING

✍️ Create an AI agent with Internet access

The Rundown: Using Mistral AI's free agent feature (the ChatGPT competitor from France), you can now create a custom writing assistant with Internet access that matches your unique writing style, tone, and structure.

Step-by-step:

  1. Visit Mistral AI and create a free account.

  2. Create a new agent and select "Pixtral Large" model.

  3. Input your writing style instructions and examples. We wrote detailed instructions that premium members can copy and paste from here.

  4. Deploy to Le Chat and start using your agent with a simple @mention.

Pro tip: Keep refining your assistant's instructions based on the outputs you receive. The more specific your examples and guidelines, the better your results will be.

NEW TOOLS & JOBS

Trending AI Tools

  • 🎙️ Hume + Anthropic Computer Use - Allows developers to create apps to control a computer with just your voice

  • 🎯 snappy retro - Create a retro board in seconds, share the URL, and collaborate in real-time

  • ⚡️ ElevenLabs GenFM - Generate personal podcasts from PDFs, articles, eBooks, links or text in 32 languages with the ElevenReader iOS app

  • 👋 Replicate consistent-character - Create images of any given character in different poses

  • 💻 aisuite - An open-source Python package by Andrew Ng that makes it easy for developers to use LLMs from multiple providers

New AI Job Opportunities

QUICK HITS

TikTok owner ByteDance is reportedly suing a former intern for $1.1 million, alleging deliberate sabotage of its AI language model training infrastructure through code manipulation.

Databricks is reportedly raising at least $5 billion at a whopping $55 billion valuation, with the funding round aimed at helping employees cash out and potentially delaying IPO plans.

Google Labs launched GenChess, a new web experiment using Gemini Imagen 3 that lets users create custom chess pieces with AI image generation.

OpenAI filed to trademark its o1 'reasoning' models, revealing a strange earlier application in Jamaica months before the model's announcement.

Mistral AI announced its Mistralship Startup Program, offering selected startups 30K platform credits, dedicated support, and early access to new models over a 6-month period.

Meta's Chief AI Scientist Yann LeCun stated that human-level AI could be possible within 5-10 years, after recent statements that AGI would take at least a decade — aligning with similar timelines suggested by Sam Altman and Demis Hassabis.

THAT’S A WRAP

See you soon,

Rowan, Joey, Zach, and Alvaro—aka The Rundown Team

AI

Alibaba's o1 reasoning rival

Rowan Cheung • 5 minutes

Sign Up | Advertise | Podcast | AI University

Welcome, AI enthusiasts.

Chinese tech giant Alibaba just entered the reasoning race in a big way — with a new open o1 rival that matches the industry leader’s capabilities.

Open-source AI is officially competing with Silicon Valley’s finest, and OpenAI’s model moat is looking thinner by the day. Let’s get into it…


In today’s AI rundown:

  • Alibaba challenges o1 with open-source reasoning model

  • AI2 launches fully open Llama competitor

  • Create live web prototypes with Qwen Artifacts

  • AI outperforms experts at predicting scientific results

  • 5 new AI tools & 5 new AI jobs

  • More AI & tech news

Read time: 4 minutes

Important: The Rundown is slowly being sent from a new email address. To ensure you get our newsletter, please add news@daily.therundown.ai to your contact list.

LATEST DEVELOPMENTS

ALIBABA

🧠 Alibaba challenges o1 with open-source reasoning model

Image source: Alibaba

The Rundown: Alibaba's Qwen team just released QwQ-32B-Preview, a powerful new open-source AI reasoning model that can reason step-by-step through challenging problems and directly competes with OpenAI's o1 series across benchmarks.

The details:

  • QwQ features a 32K context window, outperforming o1-mini and competing with o1-preview on key math and reasoning benchmarks.

  • The model was tested across several of the most challenging math and programming benchmarks, showing major advances in deep reasoning.

  • QwQ demonstrates ‘deep introspection,’ talking through problems step-by-step and questioning and examining its own answers to reason to a solution.

  • The Qwen team noted several issues in the Preview model, including getting stuck in reasoning loops, struggling with common sense, and language mixing.

Why it matters: Between QwQ and DeepSeek, open-source reasoning models are here — and Chinese firms are absolutely cooking with new models that nearly match the current top closed leaders. Has OpenAI’s moat dried up, or does the AI leader have something special up its sleeve before the end of the year?

TOGETHER WITH EIGHT SLEEP

🧠 Sleep with AI-powered precision

The Rundown: Eight Sleep's Pod 4 Ultra is redefining sleep by combining AI, biometrics, and personalized climate control for the ultimate night’s rest — bringing lab-grade sleep optimization to your bedroom.

The Pod 4 Ultra offers:

  • AI-driven temperature adjustments throughout the night

  • Detailed sleep analytics and daily sleep fitness scores

  • Advanced snore detection with automatic bed adjustments

Use code RUNDOWN at eightsleep.com/rundown for up to $600 off bundled purchases through December 14th.

AI2

🚀 AI2 launches fully open Llama competitor

Image source: AI2

The Rundown: Research institute AI2 just released OLMo 2, a new family of fully open-source language models that matches the performance of similar-sized competitors like Meta’s Llama.

The details:

  • The 7B and 13B models were trained on a 5T token dataset of high-quality academic content, filtered web data, and specialized instruction sources.

  • The OLMo models achieved similar or better results while using less computing power than competitors and being smaller in size.

  • The models are fully open, with AI2 providing access to source code, training data, and a dev package with training recipes and evaluation frameworks.

  • The release also includes instruction-tuned variants, which achieve competitive results against leading open models like Qwen 2.5.

Why it matters: While other open-source models release weights but remain heavily guarded, OLMo 2 proves that cutting-edge AI can be developed and released completely in the open — potentially setting a powerful new standard for how future systems are built and shared.

AI TRAINING

⚙️ Create live web prototypes with Qwen Artifacts

The Rundown: Qwen2.5-Coder’s new Artifact feature instantly transforms your web ideas into live, interactive prototypes.

Step-by-step:

  1. Visit Hugging Face and locate the Qwen2.5-Coder-Artifacts space.

  2. Enter your prototype description with specific design requirements.

  3. Click "Send" to generate and preview your prototype instantly.

  4. Refine the design and export the code for your project.

Pro tip: Start with basic layouts and gradually add features to build complex prototypes efficiently.

AI RESEARCH

🧪 AI outperforms experts at predicting scientific results

Image source: Ideogram

The Rundown: A new study from University College London just revealed that AI systems can predict scientific outcomes significantly better than expert neuroscientists — also uncovering ‘hidden’ patterns in research that could help better guide future studies.

The details:

  • A ‘BrainBench’ tool was used to test 15 AI models and 171 neuroscience experts’ ability to distinguish real vs. fake outcomes in research abstracts.

  • The AI models achieved 81% accuracy, compared to 63% for the experts — with a ‘BrainGPT’ trained on neuroscience papers scoring even higher at 86%.

  • The success suggests scientific research follows more discoverable patterns than previously thought, which AI can leverage to guide future experiments.

  • The researchers are developing tools to help scientists validate experimental designs before conducting studies, potentially saving time and resources.

Why it matters: While AI's pattern recognition capabilities aren't surprising, its ability to predict scientific outcomes could completely change how research is conducted. Using AI to validate experiments before spending any time in the lab could lead to faster research cycles, fewer dead ends, and accelerated scientific breakthroughs.

NEW TOOLS & JOBS

Trending AI Tools

  • 🎥 Magic Roll - Create viral shorts in one click with B-roll, motion graphics, and AI-powered captions

  • 🤝 OfferGenie - AI-powered career copilot with real-time guidance to ace every interview

  • 📸 Runway Frames - A new foundation model for image generation with style precision and visual world-building

  • ⚙️ Foundry - Build, evaluate, and improve AI agents that can automate key parts of your business

  • 💬 Llms.txt Generator - Generate an llms.txt file for your website to provide information to help LLMs use your website at inference time

New AI Job Opportunities

  • 🚀 The Rundown - Head of Growth

  • 🛠️ Shield AI - Manufacturing Engineer

  • 💼 Cresta - Sales Development Representative, New York

  • 🧠 Writer - Director, AI Research

  • 🔬 DeepMind - Research Engineer, Materials Science

QUICK HITS

OpenAI temporarily suspended beta testers’ access to Sora following Tuesday’s leak, in which a group of artists published an unauthorized public interface to the AI video tool.

xAI reportedly plans to release a standalone app to compete with OpenAI’s ChatGPT as early as December, marking the company’s first product outside of the X platform.

H Company showcased new demos of its Runner H agent performing advanced web tasks, including real-time data extraction, complex interface navigation, and precision web scraping across multiple platforms.

ElevenLabs introduced GenFM podcasts, a new feature that allows users to generate AI-hosted conversations in 32 languages about uploaded PDFs, articles, eBooks, and more.

Elon Musk posted on X that he plans to start an AI game studio with xAI, saying he wants to “make games great again.”

Chinese self-driving startup Pony AI raised $260M at a $4.5B valuation as the autonomous taxi company’s U.S. IPO begins trading this week.

THAT’S A WRAP

See you soon,

Rowan, Joey, Zach, and Alvaro—aka The Rundown Team

