Get the latest AI news, understand why it matters, and learn how to apply it in your work — all in just 5 minutes a day. Join over 2,000,000 subscribers.
ChatGPT levels up with 'Canvas'
Sign Up | Advertise | Podcast | AI University
Welcome, AI enthusiasts.
OpenAI just painted a new picture of AI collaboration with its ‘Canvas’ feature — and ChatGPT may be about to level up in a major way.
Is this new feature a glimpse at the next stage for AI assistants? Let’s get into it…
In today’s AI rundown:
ChatGPT gets a collab boost with Canvas
Google rolls out ads in AI Overviews
Automate video analysis with Gemini AI
Black Forest Labs unveils Flux 1.1 Pro
5 new AI tools & 4 new AI jobs
More AI & tech news
Read time: 4 minutes
LATEST DEVELOPMENTS
OPENAI
🔥 ChatGPT gets a collab boost with Canvas

Image source: OpenAI
The Rundown: OpenAI just launched Canvas, a new ChatGPT interface that enables more collaborative writing and coding projects beyond simple chat interactions, with new editing features, shortcuts, and added contextual knowledge.
The details:
Canvas opens in a separate window alongside the chat, allowing users to directly edit and refine specific aspects of an output.
New features include inline feedback, targeted editing, and shortcuts for tasks like adjusting text length, changing reading levels, or debugging code.
In tests, using GPT-4o with Canvas led to a 30% boost in accuracy and a 16% boost in quality compared to using the model without the interface.
Canvas is rolling out in beta to Plus and Team users, with a broader release expected later.
Why it matters: ChatGPT’s first major UI change takes a leap towards more nuanced, moldable interactions — while also inheriting easy-to-use, novice-friendly shortcuts seen in rival products. The simple chatbox was a good first step for human-AI interactions, but more power and capabilities require new collaborative processes.
TOGETHER WITH DECIDR
💼 Automate 80% of your business with AI
The Rundown: Decidr boosts conversions and efficiency by automating 80% of processes in finance, marketing, sales, HR, and more, streamlining operations for businesses across industries.
Attend Decidr’s Product Launch Day on Oct. 23 to learn how to:
Implement AI quickly with fast deployment options
Optimize costs by streamlining operations with AI
Apply AI across every aspect of your business
Add real-world value through proven success stories across industries
Register now to start transforming your business with AI.
🔎 Google rolls out ads in AI Overviews

Image source: Google
The Rundown: Google just announced the introduction of ads to its AI Overview search summaries and the launch of several new AI-powered search capabilities, such as video understanding and voice input.
The details:
Ads will now appear within and alongside AI Overviews for ‘relevant queries’ on searches in the United States.
The redesigned AI Overview format will now add more prominent in-text links to the source websites behind the curated information.
New AI-organized search results pages are rolling out that surface relevant, more diverse content — starting with recipe and meal inspiration queries.
Google Lens is getting video understanding capabilities and voice input options for visual searches.
The Android ‘Circle to Search’ feature also lets users identify songs playing in videos or streaming content.
Why it matters: Google’s first AI Overview experience didn’t exactly go as planned. However, with heavy competition from Perplexity and chatbot rivals, Google’s search future clearly has AI at its core, regardless of the bumps along the way. But infusing paid ads into AI Overviews could be a slippery slope – will Gemini be next?
AI TRAINING
🎥 Automate video analysis with Gemini AI

The Rundown: Google Gemini on AI Studio can analyze videos and provide transcripts, tags, subtitles, and translations to simplify and speed up your content creation workflow.
Step-by-step:
Access Google Gemini on AI Studio and select "Gemini 1.5 Pro 002" from the Models menu.
Upload your video and use this prompt: "Analyze this video and provide the transcript, 5 title ideas, and categorized tags."
Follow up for improvements: "Suggest 5 content improvements, 3 promo clip ideas with timestamps, reach expansion tips."
Implement insights to optimize SEO, create promo clips, and expand your audience reach through translation.
Pro tip: Regularly analyze your video content with Gemini to track improvements and identify trends in your content over time.
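For heavier use, the same workflow can be scripted against the Gemini API. A minimal sketch using the google-generativeai Python SDK (the helper names here are our own, and running the upload step requires a valid API key):

```python
# Sketch of the video-analysis workflow above via the Gemini API.
# build_video_prompt and analyze_video are illustrative helper names,
# not part of the SDK.

def build_video_prompt(n_titles: int = 5) -> str:
    """Compose the analysis prompt from the steps above."""
    return (
        "Analyze this video and provide the transcript, "
        f"{n_titles} title ideas, and categorized tags."
    )

def analyze_video(path: str, api_key: str) -> str:
    """Upload a video file and run the analysis prompt against 1.5 Pro."""
    import google.generativeai as genai  # pip install google-generativeai
    genai.configure(api_key=api_key)
    video = genai.upload_file(path)  # for long videos, poll until processing finishes
    model = genai.GenerativeModel("gemini-1.5-pro-002")
    return model.generate_content([video, build_video_prompt()]).text
```

Batching your back catalog through a script like this makes the "analyze regularly" tip practical for channels with dozens of videos.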
PRESENTED BY POSTMAN
🔓 Unlock AI's API potential
The Rundown: Postman is hosting a free webinar on Oct. 24th to help you navigate the explosive growth of APIs and the crucial role they will play in shaping the AI revolution.
In this session, you'll learn to:
Understand the critical role APIs play in the AI landscape
Build high-quality APIs at scale
Maximize the success of your API products
BLACK FOREST LABS
🫐 Black Forest Labs unveils Flux 1.1 Pro

Image source: Black Forest Labs
The Rundown: Black Forest Labs just released Flux 1.1 Pro, a significantly upgraded version of the startup’s text-to-image AI model, and a new API for developers.
The details:
Flux 1.1 Pro generates images six times faster than Flux 1 Pro while improving quality and prompt adherence.
The model tops the Artificial Analysis image arena leaderboard against rivals like Midjourney, Ideogram, and DALL-E, tested under the codename ‘blueberry.’
1.1 Pro will be a paid model available through partners like Together AI, Replicate, FAL AI, and Freepik, unlike the open-source Flux 1 that powers xAI’s Grok.
BFL’s API allows third parties to integrate the model into their apps, with the 1.1 Pro model priced at $0.05 per image.
Why it matters: From OpenAI’s strawberry to BFL’s blueberry, fruit codenames are having a moment! 1.1 Pro looks to raise the already incredibly high text-to-image bar, continuing to push the boundaries of realism and image generation quality — now equipped with a turbocharged speed increase as well.
NEW TOOLS & JOBS
Trending AI Tools
🐝 Buzzabout - AI-driven insights from billions of discussions on social media
🤖 Base AI - Build serverless, autonomous AI agents with memory
💸 CostGPT - Estimate costs and time for your software project in less than 5 minutes
👀 Lookie AI - Consume, organize, and manage knowledge from YouTube
⏱️ Tackle AI - Automatic time tracking to align everyday actions with key priorities
New AI Job Opportunities
✍️ Writer - Senior Technical Sourcer
🏛️ Palantir Technologies - Account Executive
💼 Captions - Sales
🔗 Notable - Product Integrations Lead
QUICK HITS
OpenAI’s Sora research lead Tim Brooks announced on X that he is leaving the company to join Google DeepMind, where he will work on ‘video generation and world simulators.’
Google released Gemini 1.5 Flash 8B, a lightweight, cost-effective variation with a 50% cost reduction and 2x higher rate limits than 1.5 Flash.
Fourier launched GR-2, the company’s second-generation humanoid robot, which features improvements to battery life, hand dexterity, mobility, and a new developer kit.
The U.S. Commerce Department unveiled a plan to award $100M for AI semiconductor research, hoping to spur the development of more sustainable materials.
OpenAI secured a new $4B credit facility from major banks, boosting its total liquidity to over $10B to fuel future growth and innovation.
AI Coding startup Poolside announced a $500M Series B funding round to accelerate progress towards AGI, bringing the company’s valuation to $3B.
THAT’S A WRAP

A record-breaking AI funding round

Welcome, AI enthusiasts.
Despite the constant drama, leadership churn, and stiff competition, investors are still betting BIG on OpenAI as the golden goose of the AI boom.
With a $6.6B funding round at an eye-popping $157B valuation, the AI leader just got a record-breaking boost to fuel its reign at the industry's top. Let’s get into it…
In today’s AI rundown:
- OpenAI secures record-breaking $6.6B in funding
- Google developing reasoning AI to rival OpenAI
- Turn YouTube videos into AI-powered podcasts
- MIT’s ‘Future You’ taps AI to speak with older self
- 6 new AI tools & 4 new AI jobs
- More AI & tech news
Read time: 4 minutes
LATEST DEVELOPMENTS
OPENAI
💰 OpenAI secures record-breaking $6.6B in funding

Image source: Midjourney
The Rundown: OpenAI just closed a massive $6.6B funding round, valuing the company at an unprecedented $157B and solidifying its position as the most well-funded AI startup in the world.
The details:
- Thrive Capital led the round, which included participation from Microsoft, Nvidia, SoftBank, MGX, and others.
- OpenAI announced that it plans to use the funds to expand research, increase computing capacity, and develop new tools.
- OpenAI expects revenue to increase to $25B by 2026 and $100B by 2029, according to investor documents.
- The company reportedly asked investors for exclusive arrangements, discouraging them from backing rivals like Anthropic and xAI.
- The move comes amid a corporate restructure to a for-profit entity, which, according to the NYT, will not happen until ‘sometime next year’.
Why it matters: The long-rumored funding round is finally official, and the numbers are staggering. Despite the drama, leadership churn, and heavy competition, the AI giant’s sky-high valuation shows that investors still see OpenAI as the golden goose of the AI boom — regardless of the noise.
TOGETHER WITH ARTISAN
⚡Automate your outbound with an AI BDR

The Rundown: Artisan unifies your outbound sales tools into one platform, featuring Ava — the AI Business Development Rep who manages it all.
With Artisan, you’ll benefit from:
- Access to 300M+ high-quality B2B prospects
- Automated lead enrichment using 10+ data sources
- Advanced personalization via LinkedIn, Twitter, and web scraping
- Comprehensive email deliverability management tools
Book a demo today to see Artisan in action.
🤔 Google developing reasoning AI to rival OpenAI

Image source: Midjourney
The Rundown: Google is reportedly making significant strides in developing AI models with advanced reasoning capabilities similar to OpenAI’s o1 system, intensifying the rivalry between the two AI giants.
The details:
- Multiple teams at Google are working on AI that can solve complex, multi-step problems, according to Bloomberg.
- The AI uses chain-of-thought prompting, a technique created by Google, to tackle complex math and programming problems by ‘thinking’ before responding.
- Google is taking a more cautious approach to its releases than OpenAI but has already debuted math-focused reasoning models like AlphaProof and AlphaGeometry 2.
- Microsoft also infused reasoning capabilities into its Copilot assistant this week, leveraging OpenAI’s o1 model.
Why it matters: Human-like reasoning and agentic capabilities are clearly the two major developments on every AI firm’s roadmap, and the release of o1 may have signaled a new phase in the LLM race. The question is — will OpenAI’s speed keep it a step ahead, or is the competition for top-tier models about to get a whole lot tougher?
AI TRAINING
🎧 Turn YouTube videos into AI-powered podcasts

The Rundown: NotebookLM's latest update allows users to transform lengthy YouTube videos into concise AI-generated podcasts, saving time and enhancing study efficiency.
Step-by-step:
- Visit NotebookLM and create a new notebook.
- Click on "Link" in the source selection area, choose "YouTube" and paste your desired YouTube video URL.
- Select "Generate" in the Audio Overview section to create your AI podcast.
- Interact with your podcast by playing it, asking questions via chat, or generating additional study materials.
Pro tip: Use the chat feature to ask specific questions about the content, turning your AI podcast into an interactive study session!
PRESENTED BY GALILEO
⚙️ Master the art of RAG

The Rundown: Galileo's free 'Mastering RAG' eBook provides 200 pages of in-depth, expert insights into building powerful RAG systems for enterprise use.
In this guide, you'll learn how to:
- Minimize hallucinations and employ advanced chunking
- Choose optimal embedding and reranking models
- Navigate common challenges in RAG system development
- Optimize for production to enhance performance
Download your free copy today and take your AI projects to the next level.
AI RESEARCH
👴🏻 MIT’s ‘Future You’ taps AI to speak with older self

Image source: MIT
The Rundown: Researchers at MIT have developed an AI system called "Future You" that allows users to interact with and ask questions to a simulated version of their older selves.
The details:
- The system uses personal information provided by users to create a realistic future self-simulation, including generating an age-progressed photo.
- Users engage in text-based conversation with an AI-generated 60-year-old version of themselves, capable of answering questions and offering insights.
- In a study of 344 participants, those who used Future You reported decreased negative emotions and anxiety.
Why it matters: While aging simulation apps are constantly going viral, the implications of AI-driven psychological support are massive. With AI’s ability to create and simulate highly personalized, empathetic experiences, studies like Future You are only scratching the surface of the future of therapy and psychology.
NEW TOOLS & JOBS
Trending AI Tools
- 🎥 Pika 1.5 - AI video update with longer clips, cinematic outputs and new Pikaffects
- ⏱️ Semblian 2.0 - Outsource your time-consuming tasks to AI
- 🧠 Hedy AI - Real-time insights in meetings and classes
- 🏠 Vox - An AI voice agent built for the mortgage industry
- 🔎 Tilores - Customer data search, unification, and retrieval for LLMs
New AI Job Opportunities
- 👥 Waymo - HR Business Partner
- 🏢 UiPath - People Operations Specialist
- 📈 Meta - Growth Marketing Manager
- 🤝 Character AI - Head of Partnerships
QUICK HITS
Free event: The Executive Guide to Building AI Apps. Learn how to build AI apps that have a bottom-line moving impact within your org. RSVP.*
Microsoft announced a $4.8B investment into AI and cloud infrastructure in Italy, with plans to expand its data center in the region to become one of Europe’s largest cloud hubs.
Character AI is reportedly shifting its focus away from building AI models in the wake of its $2.7B deal with Google and prioritizing its consumer chatbot service.
Elon Musk posted ‘OpenAI is evil’ on X in response to reports that the AI giant asked investors to avoid funding competing AI firms like Anthropic and Musk’s xAI.
Accenture announced a new partnership with NVIDIA to accelerate enterprise AI adoption, launching a business group and AI Refinery platform to scale agentic AI systems across industries.
The Cancer AI Alliance formed a $40M collaboration between major medical institutions and tech giants like Microsoft, AWS, Nvidia, and Deloitte to advance AI-driven cancer care.
*Sponsored listing
THAT’S A WRAP
See you soon,
Rowan, Joey, Zach, and Alvaro—aka The Rundown Team

OpenAI's DevDay updates revealed
Welcome, AI enthusiasts.
OpenAI's DevDay may have skipped the spectacle this time with no live stream — but we caught the event live and secured exclusive details on new releases.
With four new major developer-focused announcements, and a private Rundown Q&A with OpenAI’s Head of Product, we’ve got a big one today. Let’s get into it…
In today’s AI rundown:
OpenAI makes 4 major announcements at DevDay
Microsoft Copilot gets voice, vision upgrade
Exclusive DevDay Q&A with OpenAI’s Olivier Godement
Extend images for free with HuggingFace
5 new AI tools & 4 new AI jobs
More AI & tech news
Read time: 4 minutes
LATEST DEVELOPMENTS
OPENAI
⚙️ OpenAI makes 4 major announcements at DevDay

Image source: Rowan Cheung @ Dev Day
The Rundown: OpenAI just held its DevDay 2024 event, unveiling a suite of new API features and improvements designed to make its AI systems more accessible, efficient, and cost-effective for developers to build with.
The details:
Realtime API enables speech-to-speech application building using the same model that powers Advanced Voice, with the ability to choose from six voices.
Model Distillation simplifies fine-tuning smaller models using outputs from larger ones, making training more accessible to developers.
Prompt Caching reduces costs by nearly 50% across models and speeds up responses by up to 80% when reusing recent input tokens in API calls.
New Vision Fine-Tuning allows models to be trained with both images and text, allowing developers to optimize tasks like image recognition and analysis.
Why it matters: While this year’s DevDay may have lacked the traditional hype of a typical OpenAI event, the releases are still set to have a tremendous impact. These API updates not only enable the creation of entirely new, exciting experiences but also lower the barrier to entry for builders across OpenAI’s platform.
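One practical note on the Prompt Caching update: caching is applied automatically when consecutive API calls share a long identical prefix, so the main lever a developer controls is message ordering. A minimal sketch with the openai Python SDK (SYSTEM_DOC and the helper names here are placeholders, not part of the SDK):

```python
# Structuring Chat Completions calls so automatic prompt caching can reuse
# the shared prefix. Caching kicks in for long prompts (roughly 1,024+ tokens)
# with identical leading content; no special flag is needed.

SYSTEM_DOC = "...long, stable instructions and reference material..."  # placeholder

def build_messages(question: str) -> list:
    """Keep static content first so consecutive calls share a cacheable prefix."""
    return [
        {"role": "system", "content": SYSTEM_DOC},  # identical across calls
        {"role": "user", "content": question},      # the varying part goes last
    ]

def ask(client, question: str):
    """client is an openai.OpenAI instance; requires an API key to run."""
    return client.chat.completions.create(
        model="gpt-4o",
        messages=build_messages(question),
    )
```

Putting the variable user question last means every call after the first reuses the cached system prefix, which is where the quoted cost and latency savings come from.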
TOGETHER WITH SYNTHFLOW
🗣️ AI phone calls that sound human
The Rundown: Synthflow’s AI-powered phone calls enable interactions that are indistinguishable from human conversations — revolutionizing the way businesses handle customer service.
With Synthflow, you can:
Create lifelike AI voices that speak naturally in multiple languages
Design custom conversation flows to handle various scenarios
Integrate seamlessly with your existing systems for efficient call handling
Scale your customer service without compromising on quality
Try Synthflow today and experience the future of customer communication.
MICROSOFT
🚀 Microsoft Copilot gets voice, vision upgrade

Image source: Microsoft
The Rundown: Microsoft just announced a slew of AI upgrades coming to its Copilot assistant for Windows PCs, including new vision and voice capabilities, personalization enhancements, a re-release of the controversial Recall feature, and more.
The details:
Copilot Voice allows users to interact with natural speech, adding conversational and intuitive communication similar to OpenAI’s Voice Mode.
Copilot Vision enables the AI to understand and interact with web content a user is viewing, offering context-aware help within the Microsoft Edge browser.
‘Think Deeper’ gives Copilot new enhanced reasoning capabilities using chain-of-thought reasoning powered by OpenAI’s o1 model.
Microsoft’s ‘Recall’ feature is set to return, requiring an opt-in with upgraded privacy and security measures.
Microsoft AI CEO Mustafa Suleyman highlighted Copilot’s ability to ultimately ‘act on your behalf’ and adapt to users’ personal preferences and needs.
Why it matters: Microsoft is bringing the heat with these major Copilot upgrades, levelling up the assistant to align with the latest cutting-edge AI features across the industry — while bringing users one step closer to a truly agentic experience.
OPENAI DEVDAY
🎤 Exclusive DevDay Q&A with OpenAI’s Olivier Godement

Image source: Rowan Cheung / The Rundown
The Rundown: We caught up with OpenAI Head of Product Olivier Godement after he led the main keynote at Tuesday’s DevDay event for some exclusive insights on the new Realtime API (Godement’s responses are summarized for brevity).
On the Realtime API: Godement says that “Until right now, voice has been a second activity,” and that the Realtime API is going to make AI significantly more accessible because many people in the real world prefer to speak over reading or texting.
On real-world use cases: Godement believes the Realtime API will have a “no-brainer” impact on customer support, education, and coaching. He also believes there will be many ‘non-obvious’ use cases that are hard to predict now.
On pricing: In per-minute terms, audio input costs ~6 cents per minute, and output ~24 cents per minute. While currently high, Godement confirmed that there are “huge pricing decreases on the roadmap.”
On the Twitter misinterpretation: Godement also addressed a misinterpretation of pricing after the announcement—when users estimated hourly costs, they multiplied the rates as if input and output were a constant stream. However, whenever humans talk, there is silence—it’s not a constant flow. The model won’t charge you for silence.
On future modalities: For now, Realtime API only supports text and audio. However, Godement believes that image and video are the next milestones on the road to agents that can perceive the world just like a human. He also mentioned that image and video understanding specifically will “turbocharge customer support” when the model has the ability to understand pixels on a screen in real time.
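The per-minute rates Godement quoted translate into a quick session estimate. A back-of-envelope sketch (the helper name is ours; per his clarification, only minutes of actual speech are billed, so silence is excluded from the inputs):

```python
# Back-of-envelope Realtime API cost estimate using the quoted rates:
# ~$0.06 per minute of audio input and ~$0.24 per minute of audio output.
# Silence is not billed, so only minutes of actual speech should be counted.

AUDIO_IN_PER_MIN = 0.06   # user speech sent to the model, USD
AUDIO_OUT_PER_MIN = 0.24  # model speech returned, USD

def realtime_cost(input_minutes: float, output_minutes: float) -> float:
    """Estimated USD cost for the billed (non-silent) audio minutes."""
    return round(input_minutes * AUDIO_IN_PER_MIN
                 + output_minutes * AUDIO_OUT_PER_MIN, 2)

# A 20-minute support call where each side actually speaks ~8 minutes:
print(realtime_cost(8, 8))  # 8 * 0.06 + 8 * 0.24 = 2.4 USD
```

This is also why the naive per-hour multiplications circulating on Twitter overstated the cost: a real conversation bills far fewer speech minutes than its wall-clock length.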
PRESENTED BY INNOVATING WITH AI
💼 Start your career as an AI Consultant
The Rundown: Innovating with AI’s new program, AI Consultancy Project, equips AI enthusiasts with all the resources to capitalize on the rapidly growing AI consulting market – which is set to 8x to $54.7B by 2032.
The program offers:
Tools and framework to find clients and deliver top-notch services
A 6-month roadmap to build a 6-figure AI consulting business
Students landing their first AI client in as little as 3 days
Click here to request early access to The AI Consultancy Project.
AI TRAINING
🖼️ Extend images for free with HuggingFace

The Rundown: Hugging Face's free AI image outpainting tool allows users to extend their images with custom aspect ratios for various use cases, such as optimizing images for any social media platform.
Step-by-step:
Visit the "diffusers-image-outpaint" Hugging Face space.
Upload your image to expand.
Set your desired aspect ratio and alignment (e.g., 1:1, middle).
Adjust advanced settings like output size and input image resize.
Click "Generate" and watch AI expand your image!
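Since the Space is a Gradio app, it can also be driven from Python with gradio_client. The space path and both helpers below are illustrative assumptions; inspecting the endpoint signature first avoids guessing parameter names:

```python
# Sketch of using the outpainting Space programmatically. The space path and
# helper names are illustrative; check view_api() for the real endpoint
# signature before calling it.

def target_size(width: int, height: int, ratio_w: int, ratio_h: int) -> tuple:
    """Smallest canvas with the requested aspect ratio that contains the image."""
    scale = max(width / ratio_w, height / ratio_h)
    return (round(ratio_w * scale), round(ratio_h * scale))

def inspect_outpaint_space():
    """Connect to the Space and print its callable API (requires network)."""
    from gradio_client import Client  # pip install gradio_client
    client = Client("fffiloni/diffusers-image-outpaint")  # path may differ
    print(client.view_api())  # lists endpoints and their parameters
```

The target_size helper mirrors the aspect-ratio step in the UI: for an 800x600 photo and a 1:1 ratio, the smallest containing canvas is 800x800, and the tool fills in the missing strip.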
NEW TOOLS & JOBS
Trending AI Tools
🎥 Video SDK 3.0 - Build and integrate real-time multimodal AI characters
📭 Inbox Zero - An open-source, AI personal assistant for email
👩🏻‍💻 Graphite - Your AI code review companion
📚 Ello - An AI reading companion for children offering personalized support
🗣️ VivaChat - FaceTime video chat with realistic AI personas
New AI Job Opportunities
💼 Palantir Technologies - Mobility Tax Manager
📈 Databricks - Business Development Representative
🤖 C3 AI - Pre-Sales AI Director
🚀 Notable - Solution Delivery Manager
QUICK HITS
OpenAI founding member Durk Kingma announced that he is joining Anthropic, reuniting with several former OpenAI employees and highlighting the company’s mission of responsible AI development in his X post.
Pika Labs unveiled Pika 1.5, a new video generation model upgrade featuring enhanced effects, realistic movement, longer clip creation, and cinematic capabilities.
Anyscale unveiled major upgrades to its AI platform at Ray Summit 2024, including a GPU-native Ray architecture, RayTurbo for enhanced performance, Ray Data for unstructured data processing, and more.
U.S. AI chipmaker Cerebras officially filed for an IPO, with the Sam Altman-backed Nvidia competitor expected to be valued at between $7B and $8B.
Meta released the open-source code and developer suite for its Segment Anything Model (SAM) 2.1, an upgraded version of its image and video segmentation tool.
Nvidia introduced NVLM 1.0, an open-source family of multimodal models that achieve SOTA performance on vision-language and text tasks.
Pinterest launched Performance+, a suite of new AI tools for advertisers that includes the ability to create background images for products and automation features for ad campaigns.
THAT’S A WRAP

California blocks AI safety bill
Welcome, AI enthusiasts.
The tug-of-war between AI acceleration and safety just took a new turn — with California vetoing a controversial AI bill that was set to shake up the tech landscape.
Is this a decisive victory for Silicon Valley and Big Tech, or is the AI regulatory battle just getting started? Let’s get into it…
In today’s AI rundown:
California’s controversial AI safety bill vetoed
OpenAI secures SoftBank funding as Apple exits raise
Unlock multiple ChatGPT tools in one chat
Liquid AI unveils efficient new LFM models
5 new AI tools & 4 new AI jobs
More AI & tech news
Read time: 4 minutes
LATEST DEVELOPMENTS
AI REGULATION
❌ California’s controversial AI safety bill vetoed

Image source: Associated Press
The Rundown: California Governor Gavin Newsom just vetoed S.B. 1047, a groundbreaking AI safety bill that would have imposed stricter regulations on Silicon Valley AI firms and the release of new models in the state.
The details:
The bill would have required safety testing for AI models before their public release and held AI companies liable for any ‘severe harm’ (over $500M in damages) caused.
Tech giants including OpenAI and Google, VCs, and politicians like Nancy Pelosi lobbied heavily against the bill, arguing it would stifle innovation.
The bill had notable support from Elon Musk, Anthropic, the ‘Godfather of AI’ Geoffrey Hinton, and over 120 Hollywood actors, directors, and workers.
Newsom said the bill was ‘well-intentioned’ but flawed, vowing to consult with AI experts to craft guardrails for future legislation efforts.
Why it matters: As the U.S. federal government continues to lag in AI regulation, states are stepping up to fill the void. While S.B. 1047 is shelved for now, the debate over AI governance is far from settled—and will likely continue to pit AI safety advocates against those pushing for rapid development throughout Silicon Valley.
TOGETHER WITH INNOVATING WITH AI
💼 Start your career as an AI Consultant
The Rundown: Innovating with AI’s new program, AI Consultancy Project, equips AI enthusiasts with all the resources to capitalize on the rapidly growing AI consulting market – which is set to 8x to $54.7B by 2032.
The program offers:
Tools and framework to find clients and deliver top-notch services
A 6-month roadmap to build a 6-figure AI consulting business
Students landing their first AI client in as little as 3 days
Click here to request early access to The AI Consultancy Project.
OPENAI
💰 OpenAI secures SoftBank funding as Apple exits raise

Image source: Midjourney
The Rundown: Despite Apple reportedly no longer participating in OpenAI’s upcoming funding round, the AI giant has secured billions of dollars from Japanese investment firm SoftBank, Microsoft, and Thrive Capital.
The details:
OpenAI is rumored to be raising up to $6.5B via convertible notes, at an eye-popping $150B valuation.
Microsoft plans to participate with an additional $1B, adding to its previous $13B investment in the AI giant.
Investment firm Thrive Capital is also investing $1B, with a reported option to add an additional $1B the following year based on revenue goals.
The Wall Street Journal reported that Apple is no longer involved in the funding round, despite partnerships with OpenAI and its inclusion in Apple Intelligence.
The raise comes amid OpenAI’s controversial restructuring to a for-profit entity, with Sam Altman denying rumors that he will receive equity in the move.
Why it matters: OpenAI’s latest raise and for-profit turn is another saga in its convoluted and controversial business structure. Despite the recent high-profile departures and continued drama, the ChatGPT maker is still clearly seen as a top horse to bet on in the AI boom—and there is no shortage of major players who want in.
AI TRAINING
🧰 Unlock multiple ChatGPT tools in one chat

The Rundown: ChatGPT's new shortcut feature lets you instantly switch between image generation, web search, and advanced reasoning tools directly in one chat—avoiding the need to reset chats.
Step-by-step:
Start a new chat in ChatGPT and type "/" in the input field.
Choose from three options: Picture (DALL-E), Search (web), or Reason (o1).
For images, use "/picture [description]" (e.g., "/picture quantum computer").
For web searches, use "/search [query]" (e.g., "/search quantum computer").
For complex reasoning, use "/reason [task]" (e.g., "/reason Explain quantum computing").
Pro tip: When using the /search command, try adding "latest" or a specific year to your prompt.
PRESENTED BY SECTION
🏆 Build winning AI applications
The Rundown: Join Section and Ed Ortega of Machine + Partners on Oct. 29 for a free event tailored to leaders looking to build AI applications.
In this session, you’ll learn how to:
Prioritize which AI projects to tackle first
Avoid AI “traps” and build winning AI products
Get beyond the “hype” and get real ROI with AI
RSVP for free today and start making AI work for your business.
LIQUID AI
💧 Liquid AI unveils efficient new LFM models

Image source: Liquid AI
The Rundown: Liquid AI just introduced a new series of AI models called Liquid Foundation Models (LFMs), challenging the traditional transformer architecture while achieving state-of-the-art performance and enhanced memory efficiency at smaller model sizes.
The details:
The company released its LFMs in 1.3B, 3B, and 40B parameter sizes, based on a new architecture utilizing computational units rooted in dynamical systems rather than traditional transformers.
The models surpass transformer-based counterparts like Meta's Llama 3.2 and Microsoft's Phi-3.5 on major benchmarks like MMLU.
LFMs require significantly less memory for inference, particularly with long-context tasks — supporting up to 32k tokens while maintaining memory efficiency.
The models are not open-source and are currently available only via the company’s Lambda (Chat UI and API) and on Perplexity AI.
Why it matters: Liquid AI's LFMs are a significant shakeup from the transformer architecture standard that has dominated models since 2017. The benchmarks show that there is more than one formula for achieving state-of-the-art AI performance—and could open new possibilities for more efficient and accessible AI systems.
NEW TOOLS & JOBS
Trending AI Tools
🎤 Udio Lyric Editor - Create and refine song lyrics based on melody
📷 Expression Editor - Easily edit facial expressions
🚀 PandaETL - Automate document processes with AI and data
🤖 Gaia - Train and deploy neural machine translation models
🔍 Lumona - AI search engine leveraging social media insights
New AI Job Opportunities
👷‍♂️ Waymo - Principal Engineer
🤖 Weights & Biases - AI Engineer
⚙️ Sanctuary AI - Controls Software Engineer
💼 DeepL - Enterprise Sales Manager
QUICK HITS
Google agreed to invest $1B into Thailand to expand AI and cloud infrastructure in Southeast Asia, aiming to build new data centers amid increasing regional competition.
TikTok parent company ByteDance is reportedly planning to develop a new AI model primarily using Huawei chips, diversifying from U.S. suppliers like Nvidia to counteract export restrictions.
Artisan AI secured $7.3M in seed funding for its sales-focused AI virtual employees, with its first AI assistant Ava already assisting over 120 companies on the platform.
Luma Labs upgraded its Dream Machine AI video model speed, allowing for full-quality generations in under 20 seconds.
Qodo announced a $40M funding round for its AI-powered code testing software, with plans to expand services and target larger enterprise clients.
AI reading coach startup Ello launched ‘Storytime’, a new feature allowing kids to create personalized stories using AI.
THAT’S A WRAP

An exclusive look into Google's new AI models
Welcome, AI enthusiasts.
We have an exclusive for you today.
In case you missed it, last week Google released two new upgraded Gemini 1.5 models—achieving new, state-of-the-art performance across math benchmarks.
We partnered with Google to help explain what makes these new models so special for developers, real-world use cases, AI agents, and more. Let’s get into it…
In today’s AI rundown:
Google’s two new Gemini 1.5 models
Gemini 1.5 compared to other AI models
The age of the AI-first developer
Real-world use cases of Gemini 1.5
Proactive AI agent systems
– Rowan Cheung, founder
EXCLUSIVE Q&A WITH LOGAN KILPATRICK
GEMINI
✨ Google rolls out two new Gemini 1.5 models

Image credits: Kiki Wu / The Rundown
The Rundown: Google just released two upgraded versions of Gemini 1.5 across the Gemini API: 1.5-pro-002, which achieves state-of-the-art performance across math benchmarks, and 1.5-flash-002, which makes big gains in instruction following.
Cheung: “Can you give us the rundown on everything being released and why it actually matters?”
Kilpatrick: “Today, we're rolling out two new production-ready Gemini models and also improving rate limits, pricing for 1.5 Pro, and some of the filter settings enabled by default. Really, all these are focused on enabling developers to go in and build more of the stuff that they're excited about.”
Cheung: “What exactly makes the new models so unique?“
Kilpatrick: “Math, the ability for the models to code, which is obviously super important for people who care about developer stuff. It's been a lot of listening and sort of iterating on the feedback that we've been getting from the ecosystem.“
Kilpatrick added: “The linear amount of progress that we've seen with, and in some cases, exponential in different benchmarks with this iteration of Gemini models… has been incredibly exciting.”
Why it matters: Google’s new Gemini 1.5-pro-002 model achieves state-of-the-art performance across challenging math benchmarks like AMC + AIME 24 and MATH. This means the model can solve advanced mathematical problems and tasks that require deep domain expertise, a major hurdle for most previous AI models.
You can try AI Studio and the new Gemini 1.5 models for free here.
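For developers who want to try the new models programmatically, the request shape is simple. Here is a minimal sketch of the Gemini REST `generateContent` call — the endpoint path follows Google's public API docs, but the API key is a placeholder you must replace, and you should verify details against the official reference before relying on them:

```python
import json

# Base of the Gemini REST API (v1beta at the time of writing).
BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_generate_request(model: str, prompt: str, api_key: str):
    """Return (url, body) for a generateContent request to the given model."""
    url = f"{BASE}/models/{model}:generateContent?key={api_key}"
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, body

# Target the new math-focused model by name.
url, body = build_generate_request(
    "gemini-1.5-pro-002",
    "Prove that the sum of two odd integers is even.",
    "YOUR_API_KEY",  # placeholder — substitute your own key
)
print(url)
print(json.dumps(body))
```

POST the JSON body to the URL with any HTTP client to get a response; swapping the model string to "gemini-1.5-flash-002" targets the faster variant.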
HEAD-TO-HEAD
💎 Gemini 1.5 compared to other AI models

Image credits: Kiki Wu / The Rundown
The Rundown: Google also announced major accessibility improvements for developers building with Gemini models: a 50% price reduction on 1.5 Pro, 2x higher rate limits on 1.5 Flash and 3x higher on 1.5 Pro, 2x faster output, and 3x lower latency.
Cheung: “In addition to the new updates, higher rate limits, expanded feature access, and high context windows, what other capabilities does Gemini 1.5 offer that developers should be really excited about?”
Kilpatrick: "Part of my perspective is the financial burden to build with AI is one of the rate limiters of this technology being accessible… our strategy to combat this is we have the most generous free tier of any language model that exists in the world”
Kilpatrick added: "One of the big differentiators is you can come to AI Studio, fine-tune Gemini 1.5 Flash for free, and then ultimately put that model into production and pay the same extremely competitive, per million token cost. There's no incremental cost to use a fine-tuned model, which is super differentiated in the ecosystem.”
Why it matters: Google's latest Gemini updates significantly lower the financial barrier for AI development while boosting performance, especially in math. With these updates, Gemini now tops the LLM leaderboard in terms of performance-to-price ratio, context windows, video understanding, and other LLM benchmarks.
The pace of innovation: Google’s Gemini project is only around a year old. Google was the first to ship 1M context windows (and 2M) and context caching, and they’ve been making rapid progress ever since.
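To see what a 50% price cut means in practice, here is a back-of-envelope sketch. The per-token rates below are hypothetical placeholders for illustration, not Google's actual price sheet:

```python
# Back-of-envelope effect of the announced 50% price cut on 1.5 Pro.
# NOTE: the rates here are HYPOTHETICAL, not real published pricing.
def monthly_cost(tokens_millions: float, price_per_million: float) -> float:
    """Cost of a monthly token volume at a given $/1M-token rate."""
    return tokens_millions * price_per_million

old_rate = 3.50            # hypothetical $/1M input tokens before the cut
new_rate = old_rate * 0.5  # the announced 50% reduction

old_cost = monthly_cost(200, old_rate)  # e.g. 200M tokens per month
new_cost = monthly_cost(200, new_rate)
print(f"200M tokens/month: ${old_cost:.2f} -> ${new_cost:.2f}")
```

Whatever the real base rate, the cut halves the bill for eligible prompts, which compounds quickly at production volumes.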
THE AI ERA
🚀 The age of the AI-first developer

Image credits: Kiki Wu / The Rundown
The Rundown: AI is helping developers tackle significantly harder problems faster while simultaneously lowering the entry barrier for non-developers to contribute to new innovation and even build their own AI apps.
Cheung: “I think what's really cool with the age of AI is seeing anyone, even people who are not technical, being able to build their own AI apps. If someone were to start from zero, is there a tool stack, documentation, courses, videos, or maybe tutorials from Google that you would recommend?”
Kilpatrick: "To your point…As someone who was formerly a software engineer, I really can go and tackle 10x more difficult problems now.”
Kilpatrick added: “For the person who's never coded before, they're now able to tackle like any problem with code because they have this co-pilot in their hands.”
Kilpatrick added: "[For beginners] ai.google.dev is our default landing page that also links out to the Gemini API documentation. On GitHub, we have a Quickstart repo where you can literally run four commands and have a local version of AI Studio and Gemini running on your computer to play around with the models.”
Why it matters: With AI as an assistant, some developers are tackling 10x more challenging software problems—which also means 10x the speed of improvements and 10x the innovation, for those who use the tech wisely. Google also has great resources to help even complete beginners get started in less than 5 minutes.
USE CASES
🌎 Real-world use cases of Gemini 1.5

The Rundown: Gemini 1.5's multimodal capabilities allow a host of real-world applications that other models can't match, such as processing and analyzing hour-long videos or entire books—thanks to its impressive 2M token context window.
Cheung: “Can you share an example or some use cases of how customers are using these experimental models of Gemini in the real world?”
Kilpatrick: “Taking in video, I think, is one of the coolest things… Being able to go into an AI studio and just drop an hour-long video in there and ask a bunch of questions is such a mind-blowing experience. And to be able to try it for free.”
Kilpatrick added: "The intent was to build a multimodal model from the ground up…the order of magnitude of important use cases for the world, for developers and for people who want to build with this technology, so many of them are multimodal."
Why it matters: Gemini 1.5's 2M context window allows it to process and analyze long-form content like long videos, entire books, and lengthy podcasts, opening new possibilities for content analysis and interaction. For a full look at its potential, check out Google's list of 185 real-world gen AI use cases from leading organizations.
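Under the hood, long-video prompts like the ones described above pair a file reference with a text question. Here is a hedged sketch of that part structure, following the Gemini API's `file_data` convention — the file URI below is a made-up placeholder standing in for the URI returned by the File API upload step:

```python
# Sketch of a multimodal request body: a video gets uploaded via the File
# API first, then its returned URI is referenced as a file_data part
# alongside the text prompt. The URI here is a MADE-UP placeholder.
def build_video_parts(file_uri: str, mime_type: str, question: str):
    """Return the 'parts' list pairing a video reference with a question."""
    return [
        {"file_data": {"file_uri": file_uri, "mime_type": mime_type}},
        {"text": question},
    ]

parts = build_video_parts(
    "https://generativelanguage.googleapis.com/v1beta/files/example-id",
    "video/mp4",
    "Summarize the lecture and list its three main claims.",
)
print(parts[1]["text"])
```

The same shape works for audio and PDFs by swapping the MIME type, which is what makes the 2M-token context window usable across media types.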
AI AGENTS
📈 Proactive AI agent systems

Image credits: Kiki Wu / The Rundown
The Rundown: The future of AI is likely to shift from reactive to proactive systems, with AI agents capable of initiating actions and asking for clarification or permission, much like human assistants do today.
Cheung: “What do you think the most surprising way AI will change our daily lives in the future?”
Kilpatrick: "With most AI systems today, it's one way. Sort of, I prompt the system and then it gives me a response back or I tell it to do something and it sort of does what I might instruct it to do.”
Kilpatrick added: “I think the future is, in the medium term, the system actually asking me for permission or clarification on things that I might want it to go do and really solving those problems.”
Kilpatrick added: “It's actually very interesting to me that very few AI systems, if any today, ask me how they can help in an actual, not surface-level way that ends up being meaningful.“
Why it matters: By shifting from purely reactive to proactive systems, AI could become more like a true “Her”-like assistant, anticipating needs and offering solutions before being prompted. Today, no AI systems do this effectively, but as projects like Astra advance, this is likely the next stage for AI.
GO DEEPER
INTERVIEW
🎥 Watch the full interview live
In the full interview with Logan Kilpatrick & Rowan Cheung:
Dive deep into state-of-the-art math achievements of the new models
Talk about real-world use cases of Gemini 1.5, and exciting possibilities
Go in-depth on how to succeed and thrive in the new age of AI
Nerd out on the final form factors of AI and proactive AI agents
Listen on Twitter/X, Spotify, Apple Music, or YouTube.
Google's AI designs its own chips
Welcome, AI enthusiasts.
Google’s new breakthrough AlphaChip method creates a powerful feedback loop: AI designs chips, which train more advanced AI, which designs even better chips.
With this virtuous cycle, 10M context windows (in research), and upgraded Gemini models, is Google quietly pulling ahead in the AI race? Let’s get into it…
In today’s AI rundown:
Google revolutionizes chip design
YouTube support added to NotebookLM
Find viral clips with Canva’s ‘Highlights’
Archaeologists make big discovery using AI
5 new AI tools & 4 new AI jobs
More AI & tech news
Read time: 4 minutes
LATEST DEVELOPMENTS
GOOGLE DEEPMIND
🤖 Google revolutionizes chip design

Image source: Google
The Rundown: Google DeepMind just unveiled AlphaChip, an AI system that designs computer chips by using reinforcement learning to create superhuman chip layouts in hours rather than months.
The details:
AlphaChip has been used to design layouts for the last three generations of Google’s Tensor Processing Units (TPUs), improving performance and accelerating the design cycle.
The AI system uses a novel “edge-based” graph neural network to learn relationships between chip components and generalize across different chip designs.
Google is releasing a pre-trained checkpoint of AlphaChip, sharing the model weights to encourage further research and development in AI-assisted chip design.
AlphaChip’s impact extends beyond Google, with companies like MediaTek adopting the technology for their most advanced chips used in smartphones and other devices.
Why it matters: AlphaChip creates a powerful feedback loop: better AI models design better chips, which enable the training of even more advanced AI models, which design even better chips, and so on. This self-reinforcing cycle could dramatically accelerate AI progress, at a pace any human will struggle to match.
TOGETHER WITH TELY.AI
🚀 AI-driven organic SEO traffic in 2 weeks
The Rundown: Tely AI revolutionizes niche B2B content creation with autonomous AI agents that research, write, and publish high-converting articles, delivering organic visitors without human intervention in a time- and cost-efficient package.
For $500, Tely AI:
Generates 100 high-quality, industry-specific articles monthly
Decreases marketing team spend to $0
Implements a hands-off SEO strategy with Google indexing in just 2 weeks
Request a demo now and start getting organic leads today.
YOUTUBE
▶️ YouTube support added to NotebookLM

Image source: Google
The Rundown: Google just upgraded its NotebookLM tool, adding support for YouTube videos and audio files, along with easier sharing of Audio Overviews—its latest viral AI hit that turns notes, PDFs, Google Docs, and more into AI-generated podcasts.
The details:
NotebookLM now supports public YouTube URLs and audio files, allowing users to analyze videos, lectures, and audio alongside existing text sources.
The tool leverages Gemini 1.5’s multimodal capabilities to summarize key concepts from videos and transcribe audio.
A new sharing feature allows users to generate public links for Audio Overviews, making collaboration even easier.
These updates aim to streamline tasks such as creating study guides, analyzing multiple perspectives on issues, and extracting important information from video, audio, and text.
Why it matters: It’s a big day for Google. The company’s viral hit with NotebookLM is now even more impressive with access to YouTube videos and audio files. YouTube is an endless treasure chest of how-to guides, lectures, documentaries, and entertainment—and now, anyone can consume hours worth of videos in minutes with AI.
AI TRAINING
🎥 Find viral clips with Canva’s ‘Highlights’

The Rundown: Canva's ‘Highlights’ feature uses AI to automatically select the best moments from your long-form videos, saving you tedious time spent on editing and increasing the quality of your content.
Step-by-step:
Open your video in Canva's editor (requires a paid subscription).
Click "Edit video" in the top menu, then select "Highlights" from the Tools section.
Let the AI analyze your long video and generate highlight clips.
Review the suggested highlights and select the viral short clips you want to use for Instagram, TikTok, and YouTube Shorts!
PRESENTED BY BRILLIANT
🧠 Accelerate your AI learning today
The Rundown: Brilliant breaks down complex AI concepts into bite-sized, easily digestible lessons that fit into your busy schedule, offering expert-designed courses to enhance your AI skills in just minutes a day.
With Brilliant, you can:
Master complex AI principles with engaging micro-lessons
Sharpen your abilities through hands-on exercises in math, coding, and data analysis
Fast-track your learning with customized paths for in-demand skills
Start your 30-day free trial and join over 10 million learners worldwide.
AI RESEARCH
🪨 Archaeologists make big discovery using AI

Image source: University of Yamagata
The Rundown: Archaeologists from Japan’s Yamagata University, in collaboration with IBM Research, used AI to uncover 303 previously unknown geoglyphs near Peru’s famous Nazca Lines, nearly doubling the number of known figures at the site.
The details:
The newly discovered geoglyphs, dating back to 200 BC, depict various animals and humans, including parrots, cats, monkeys, killer whales, and even decapitated heads.
AI combined with low-flying drones dramatically accelerated the discovery process, accomplishing nearly a century’s worth of work in six months.
These smaller geoglyphs (10-25 feet across) provide new insights into the transition from the Paracas culture to the Nazca culture.
The findings, published in the Proceedings of the National Academy of Sciences, demonstrate AI’s ability to help greatly improve archaeological research.
Why it matters: Is there anything AI can’t help us accomplish? The amount of time saved using low-flying drones and artificial intelligence is worth repeating: 100 years worth of work in six months. The ways in which AI is going to impact our lives are still vast and largely unknown, as this discovery proves.
NEW TOOLS & JOBS
Trending AI Tools
🔎 AI Search Grader - Quickly analyze + improve your brand’s visibility and perception on AI search engines (free tool)*
📅 BeforeSunset AI 2.0 - Customizes your schedule with intelligent planning
🏡 Neolocus - AI renders for interior design
🪄 Clarity - AI image upscaler and enhancer
💻 Helicone - Open-source platform for monitoring and debugging AI projects
New AI Job Opportunities
💻 Palantir Technologies - Software Engineer
⚖️ Weights & Biases - Commercial Counsel
📈 Superannotate - Vice President of Marketing
📊 Shield AI - Product Manager
*Sponsored listing
QUICK HITS
Free workshop: 1 hour to AI proficiency. Get confident with prompting, see AI demos in action, and leave with use cases to apply immediately. Sign up.*
AstraZeneca partnered with Immunai, paying $18 million to use the biotech firm’s AI model of the immune system to enhance cancer drug trial efficiency.
Visa agreed to acquire AI-driven payments protection firm Featurespace to enhance its financial crime and fraud detection capabilities—the acquisition price was not disclosed.
Runway launched The Hundred Film Fund to provide grants of $5,000 to $1 million for filmmakers using AI in their projects.
Microsoft announced a $1.3 billion investment in Mexico to enhance AI infrastructure and skills training over the next three years.
Blackstone confirmed a $13.3 billion investment to build an AI data center in northeast England, creating 4,000 jobs including 1,200 in construction.
Hugging Face reached 1 million free public AI models on its platform, highlighting the trend towards specialized models for diverse use cases rather than a single dominant model.
*Sponsored listing
THAT’S A WRAP
SPONSOR US
Get your product in front of over 650k+ AI enthusiasts
Our newsletter is read by thousands of tech executives, investors, engineers, managers, and business owners around the world. Get in touch today.
FEEDBACK
If you have specific feedback or anything interesting you’d like to share, please let us know by replying to this email.
Meta reveals 'Orion' glasses!
Welcome, AI enthusiasts.
Meta just unveiled the world’s most advanced AR x AI glasses alongside major AI updates, including Llama 3.2 and a new Voice Mode for its 500M Meta AI users.
Is Zuck’s vision of an AI-powered metaverse finally becoming a reality? Let’s get into it…
In today’s AI rundown:
Meta unveils AR x AI glasses, new models, and more
OpenAI CTO exits amid rumors of non-profit removal
Analyze images with AI in seconds
AI breakthrough in treating rare diseases
5 new AI tools & 4 new AI jobs
More AI & tech news
Read time: 4 minutes
LATEST DEVELOPMENTS
META
👓 Meta unveils AR x AI glasses, new models, and more

Image source: Reality Labs
The Rundown: At its Connect 2024 conference, Meta revealed a host of new AI announcements, including its new Orion AR x AI glasses, Llama 3.2, AI features for Reels, and major updates to Meta AI—including a new Voice mode.
The details:
The Orion AR glasses prototype, which took Meta over 10 years to build, boasts a sub-100g weight, wide field-of-view displays, and advanced features like voice control and hand tracking.
Meta introduced Llama 3.2, its first major vision model capable of understanding both images and text, with 11B and 90B parameter versions.
New super-small 1B and 3B parameter Llama models were also announced, optimized for on-device use in smartphones and potentially future glasses.
New AI features are coming to Instagram, including automatic video dubbing and lip-syncing for creators in any language, plus AI-generated ‘Imagined for you‘ content in Feeds.
Meta also announced Voice Mode, similar to ChatGPT’s recent Advanced Voice Mode, which lets users talk with Meta AI by voice on Messenger, Facebook, WhatsApp, and Instagram DMs.
Why it matters: It’s difficult to overstate the significance of Meta Connect 2024. With new open-source models, the most advanced AR glasses ever made, and nearly 500 million monthly active Meta AI users now getting AI Voice chat directly onto their favorite platforms—the tech giant is showing, once again, never bet against Zuck.
TOGETHER WITH DEFINED.AI
🌟 Supercharge your AI with Premium Speech Data
The Rundown: Defined.ai’s new Data Access Plan (DAP) offers high-quality AI speech training data at significantly reduced cost, slashing rates by up to 55% compared to standard pricing.
The Data Access Plan offers you:
Responsible, high-quality speech data across numerous languages and locales for all your fine-tuning and training needs
Customizable data plans that scale as your business grows
Instant access to off-the-shelf datasets curated by a Defined.ai expert
Join Defined.ai’s webinar on October 10th to learn more.
OPENAI
😱 OpenAI CTO exits amid rumors of non-profit removal

Image source: Fortune
The Rundown: Mira Murati, OpenAI’s now-former CTO, just announced her decision to leave the company after six and a half years, amid rumors that the company is removing its non-profit control and giving CEO Sam Altman equity.
The details:
Murati is apparently leaving to create time and space for her own exploration, while focusing on ensuring a smooth transition.
This transition aligns with a recent report that OpenAI will restructure its core business into a for-profit benefit corporation, with Sam Altman receiving equity and the non-profit arm retaining a minority stake.
Murati expressed deep gratitude for her time at OpenAI, highlighting recent achievements like speech-to-speech technology and OpenAI o1.
CEO Sam Altman responded with appreciation for Murati’s contributions, saying the company will ‘soon’ announce transition plans.
Why it matters: This is another big shakeup for OpenAI after losing Andrej Karpathy in February, Ilya Sutskever in May, and Greg Brockman (who’s on sabbatical) in August. With Sam Altman as the last high-profile leader left, the world has many questions, but so far the only answer has been the classic OpenAI-style ‘soon‘.
AI TRAINING
🖼️ Analyze images with AI in seconds

The Rundown: GroqCloud's new LLaVA v1.5 7B model allows users to analyze images and get AI-powered insights with near-instantaneous results.
Step-by-step:
Visit the GroqCloud Developer Console.
In the top-right corner, open the models dropdown menu and select "llava-v1.5-7b-4096-preview".
Upload your image by dropping it into the designated area or using the file browser.
Enter a prompt or question about the image in the "MESSAGE" field below.
Click "Submit" to generate AI analysis of your image.
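The console steps above can also be scripted. Groq exposes an OpenAI-compatible chat endpoint, and vision requests follow the OpenAI-style `image_url` message convention with a base64 data URL — treat the exact payload details here as assumptions and check Groq's documentation before building on them:

```python
import base64
import json

# Sketch of an OpenAI-style vision chat payload targeting Groq's LLaVA
# model. The image bytes below are a trivial placeholder, not a real PNG.
def build_image_chat(model: str, image_bytes: bytes, question: str):
    """Return a chat-completions payload pairing a question with an image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

payload = build_image_chat(
    "llava-v1.5-7b-4096-preview",
    b"\x89PNG placeholder bytes",
    "What objects are in this image?",
)
print(json.dumps(payload)[:80])
```

POSTing this body to Groq's chat completions endpoint with your API key returns the model's analysis of the image.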
PRESENTED BY DIGITALOCEAN
🔓 Unlock AI’s full potential with DigitalOcean GPUs
The Rundown: DigitalOcean’s cutting-edge platform empowers developers with scalable GPU solutions and user-friendly tools to help you bring your AI/ML apps to life quickly and affordably.
DigitalOcean offers:
Accessible GPU solutions to deploy, test, and improve AI applications
A comprehensive cloud computing platform tailored for seamless AI/ML development
Upcoming GPU Droplets powered by NVIDIA H100 GPUs for advanced AI training and deep learning
AI RESEARCH
🧪 AI breakthrough in treating rare diseases

Image source: Midjourney
The Rundown: Harvard Medical School researchers recently developed an AI model called TxGNN that can identify existing drugs for repurposing to treat rare and neglected diseases.
The details:
TxGNN identified drug candidates from nearly 8,000 existing medicines for over 17,000 diseases, many without current treatments.
The model outperformed leading AI drug repurposing tools by nearly 50% in identifying candidates and was 35% more accurate in predicting contraindications, specific situations in which a medicine should not be used.
TxGNN uses a novel approach that identifies shared features across multiple diseases, allowing it to extrapolate from well-understood conditions to poorly understood ones.
The researchers have made the tool freely available to encourage its use by clinician-scientists in the search for new therapies, especially for rare and untreated conditions.
Why it matters: Another week, another insane medical breakthrough for AI. While we still need years of clinical validation and approvals before widespread use, TxGNN has the potential to save thousands of lives and improve the lives of people who likely thought a treatment for their specific disease would never come.
NEW TOOLS & JOBS
Trending AI Tools
🚀 Notion AI - Search and chat with documents across Notion, Slack, and Google Drive
📊 Rows AI Analyst 3.0 - An AI data analyst that visualizes and formats data
🖼️ Magnific Mystic V2 - Advanced AI generator that can output up to 4k resolution images
💡 Magic Patterns - Generate product design and React code
🎵 OpenMusic - Create custom tunes from text descriptions
New AI Job Opportunities
🖥️ Anyscale - Software Engineer
⚡ Waymo - Charging Infrastructure Program Manager
🧪 Mistral AI - Research Engineer
🩺 Curai - Physician (Telemedicine)
QUICK HITS
OpenAI is reportedly developing an improved version of its Sora AI video generation model, aiming for higher quality and longer clips than previously demonstrated.
Meta announced it will not immediately join the European Union’s voluntary AI Pact, instead focusing on compliance with the upcoming AI Act regulations.
Analysts predicted Nvidia will produce around 450,000 Blackwell AI GPUs in Q4 2024, potentially generating over $10 billion in revenue despite initial production challenges.
Nebius Group revealed plans to invest over $1 billion in AI infrastructure across Europe by mid-2025, including GPU clusters and data centers.
The Federal Trade Commission announced enforcement actions against multiple companies for deceptive or unfair use of artificial intelligence in their practices.
OpenAI CEO Sam Altman said the Advanced Voice Mode rollout for ChatGPT has been completed early, except in jurisdictions requiring additional external review.
OpenAI's Voice Mode is finally here!
Welcome, AI enthusiasts.
After a multi-month wait, OpenAI finally announced that Advanced Voice Mode is rolling out to all ChatGPT Plus and Team subscribers this week (outside of the EU).
But with no current o1 integration, image upload option, or live video, will it live up to the hype? Let’s get into it…
In today’s AI rundown:
OpenAI rolls out Advanced Voice Mode
Google releases production-ready models
Customize images fast with PuLID-FLUX
James Cameron joins Stability AI’s board
6 new AI tools & 4 new AI jobs
More AI & tech news
Read time: 4 minutes
LATEST DEVELOPMENTS
OPENAI
🗣️ OpenAI rolls out Advanced Voice Mode

Image source: OpenAI
The Rundown: OpenAI is finally rolling out an enhanced Advanced Voice Mode (AVM) to all ChatGPT Plus and Team subscribers this week, featuring new voices and improved functionality to make AI interactions feel more natural and personalized.
The details:
The initial rollout for OpenAI’s new Advanced Voice Mode started in July, but it only ever reached a select few ChatGPT users.
During the delay, OpenAI updated its AVM to integrate Custom Instructions and Memory, allowing for more personalized interactions and conversation recall.
OpenAI also improved AVM’s ability to understand accents and claims smoother, faster conversations, while adding five new nature-inspired voices (and removing the “Sky” voice that sounded like Scarlett Johansson).
AVM will not yet be available in several regions, including the EU, the UK, Switzerland, Iceland, Norway, and Liechtenstein.
Why it matters: With OpenAI CEO Sam Altman writing about AI agents and superintelligence, ChatGPT Advanced Voice Mode feels more relevant than ever. If we’re going to interact with AI every day—it has to sound and feel human—which is exactly what AVM is attempting to accomplish.
Editor’s note: If you still don’t have access to Advanced Voice Mode in your ChatGPT app, try uninstalling and reinstalling the app.
TOGETHER WITH HUBSPOT
🚀 AI-Powered Productivity: Transform Your Day
The Rundown: Maximize your workplace potential with HubSpot’s ChatGPT guide – your roadmap to AI-driven success for curious minds.
With this guide, you’ll learn to:
Streamline daily tasks with AI, from inbox management to product planning
Apply ChatGPT effectively and ethically in our work environment
Stay ahead of the curve by integrating AI tools into your skill set
Download your free guide now and start transforming your workday with ChatGPT.
✨ Google releases production-ready models

Image source: Google
The Rundown: Google just announced significant updates to its Gemini AI models, including performance improvements, cost reductions, and increased accessibility for developers.
The details:
Two new production-ready models came out today: Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002, offering improved quality across various tasks, including a 20% boost in math-related benchmarks.
Pricing for Gemini 1.5 Pro has been reduced by over 50% for both input and output on prompts under 128K tokens, while rate limits have been increased significantly.
The models boast 2x faster output and 3x lower latency compared to previous versions, with improvements in long context understanding and vision capabilities.
Google also updated its default filter settings, giving developers more control over model configuration for their specific use cases.
Why it matters: Google is iterating quickly and pushing the boundaries of affordability for developers building with AI. While this isn’t Gemini 2 — it is a significant upgrade over the experimental models and will help builders create faster, smarter, cheaper applications.
AI TRAINING
🖼️ Customize images fast with PuLID-FLUX

The Rundown: Hugging Face’s new PuLID-Flux space offers a tuning-free solution for quick image customization with your own likeness using just one reference photo.
Step-by-step:
Visit the PuLID-Flux Hugging Face space (also available on Replicate)
Upload your reference image in the "ID Image" section
Play with the different parameters it offers and write a descriptive prompt for your desired output
Click "Generate" and refine as needed
Pro tip: Adjust the "timestep to start inserting ID" parameter to balance fidelity and creativity.
PRESENTED BY SECTION
🗣️ Free Event: Section’s inaugural AI – ROI Conference
The Rundown: Section is hosting a free, virtual, full-day conference focused on getting and proving AI ROI.
Listen into:
Candid sessions led by AI leaders, experts, and ethicists
Real AI success stories and case studies
AI ROI from every perspective, from CFO to VCs
RSVP now for this free event on November 14, 2024.
STABILITY AI
🎥 James Cameron joins Stability AI’s board
Image source: Midjourney
The Rundown: James Cameron, the acclaimed director of Titanic, Avatar, and The Terminator, recently joined the board of directors at Stability AI, the company behind the popular Stable Diffusion text-to-image AI model.
The details:
Cameron, known for pushing technological boundaries in filmmaking, sees the convergence of generative AI and CGI as “the next wave” in visual media creation.
Stability AI’s CEO, Prem Akkaraju, formerly led visual effects company WETA Digital, highlighting the firm’s focus on creative applications of AI.
The move comes as Hollywood grapples with AI’s potential, with some studios embracing the technology while others express concerns over content rights.
Why it matters: Just days after Lionsgate teamed up with AI startup Runway to create a custom video generation model, this move by one of Hollywood’s biggest directors could signal a significant shift in how influential filmmakers are thinking about navigating AI.
NEW TOOLS & JOBS
Trending AI Tools
🎨 Adobe GenStudio - Helps marketing teams measure on-brand content
🔎 FactBot by Snopes - Fact-checking for urban legends and misinformation
💸 JustPaid - Automate invoice follow-ups and payment tracking
💻 ell - A lightweight prompt engineering framework for language models
🧪 Pathway - Helps product teams test UX solutions and gather insights
🎥 Tubit AI - AI that summarizes YouTube videos for a deeper understanding
New AI Job Opportunities
QUICK HITS
Warner Bros. Discovery adopted Google Cloud’s AI for caption generation, aiming to cut production time and costs for unscripted programming.
Intel launched Xeon 6 processors and Gaudi 3 AI accelerators, doubling performance for AI workloads and offering improved price and performance compared to Nvidia’s H100.
OpenAI increased API access for o1 models, adding tier 4 to the list of authorized users at 100 requests per minute and upping tier 5 users to 1000 requests per minute.
Suno AI announced a new cropping feature available to AI-generated songs, allowing Pro and Premier users to adjust the start and end of their creations.
Duolingo introduced AI-powered Adventures mini-games and a Video Call feature to enhance language learning through immersive, practical experiences for its users.
Apple unveiled its plan to roll out Siri’s major AI-powered updates gradually, with the most significant enhancements expected in iOS 18.3, likely launching in January 2025.
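Per-minute quotas like the o1 tier limits mentioned above (100 or 1,000 requests per minute) are easy to respect client-side. Here is a minimal sliding-window limiter, purely illustrative and not part of any OpenAI SDK:

```python
from collections import deque

# A minimal client-side sliding-window limiter for per-minute API quotas.
# Illustration only — production code should also handle 429 retry headers.
class MinuteLimiter:
    def __init__(self, max_per_minute: int):
        self.max = max_per_minute
        self.stamps = deque()  # timestamps of requests in the last minute

    def allow(self, now: float) -> bool:
        """Record and allow a request at time `now` if under the quota."""
        # Evict timestamps older than the 60-second window.
        while self.stamps and now - self.stamps[0] >= 60.0:
            self.stamps.popleft()
        if len(self.stamps) < self.max:
            self.stamps.append(now)
            return True
        return False

limiter = MinuteLimiter(100)
# Simulate 150 requests fired over 15 seconds.
allowed = sum(limiter.allow(t * 0.1) for t in range(150))
print(allowed)  # only 100 get through; the rest must wait for the window
```

Wrapping API calls in a check like this keeps a burst of work from tripping the server-side limit in the first place.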