
Build an Automated Ad Generator With This New Tool (Eleven Labs Flows)
The Rundown

In this guide, you will learn how to turn a product photo into a finished video ad using ElevenLabs Flows, a new node-based canvas that bundles image, video, voice, and music models in one place. Build the canvas once, upload references of your real product, and every ad after that is a quick swap instead of a full rebuild.

Who This Is Useful For

- Ecommerce founders who want ads that show their actual product, not a generic AI stand-in
- In-house marketers shipping paid social creative who are tired of bouncing between Midjourney, Runway, and Suno in four separate tabs
- Agency owners and content creators who want one repeatable canvas they can clone across clients and product lines

What You Will Build

A reusable Flow that takes 1-3 reference photos of your real product, drops the product into a generated scene, animates it, and exports a finished video ad with an audio jingle. Once it works for one product, duplicate the canvas, swap the references, and the next ad is done in minutes.

What You Need to Get Started

- An ElevenLabs paid plan with Flows enabled
- 1-3 clean photos of your real product from different angles (front, 3/4, top down)
- A short idea for the scene you want your product placed in

Step 1: Open Flows and Name the Canvas

Log into ElevenLabs, click ElevenCreative in the sidebar, then Flows. Click + New Flow and name it something reusable like [your product line] ad template. You are building a template, not a one-shot render, so the name matters.

Pro tip: We tested this on the $5 Starter plan and it works. Video generation with Flows is paid only, but you do not need Creator or Pro to run the image and video nodes in this guide. You can test image generation for free.

Step 2: Add an Image Node With Reference Photos

Click the plus on the canvas and add an Image Generation node. Click Reference Images and upload a photo of your real product. Repeat with different reference photos to help the AI lock in on the product's design.
Now write a prompt that describes the scene, not the product. The references handle the product for you.

The product (see references) on a clean seamless studio backdrop, soft diffused lighting from above, subtle shadow underneath, shallow depth of field, photorealistic product shot.

Run the node. If the scene is off, tweak the prompt and rerun. Only this node updates, so you aren't paying for the rest of the pipeline while you iterate on the look.

Pro tip: Most people try to describe the product in the prompt. Don't. Let the references do it. That separation is what keeps your real product consistent across every variation.

Step 3: Connect a Video Node to the Image

Add a Video Generation node to the canvas. Drag a line from the image node's output into the video node's start frame input. Drop in a short motion prompt and run it.

Slow cinematic push-in on the product, subtle camera drift, soft light shifting across the surface, shallow depth of field.

Because the start frame came from your reference-based image, your real product carries through into the clip without ever being described in the motion prompt.

Pro tip: The most fun part of this workflow is letting the prompts vary between runs. Click the text button on any node to have AI write a different prompt each time.

Step 4: Swap Models, Export, and Clone

Every node in Flows has a model picker in its settings. On the image node, switch between the image models bundled into Flows to see which one handles your product references best. Do the same on the video node for motion, and on the voice node when you add audio later. This is the hidden value of Flows: you can test one model against another without rebuilding the canvas or leaving the app.

Remember that clicking Run fires only that single node. If you open the Run dropdown, you can click Run till here to re-fire all previous nodes too.

When the output looks right, click Export on the video node to save the MP4. Then use the canvas menu to Duplicate Flow.
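The "describe the scene, not the product" rule from Step 2 can be sketched in a few lines of code: the product enters only as reference image files, never as prompt text, so swapping products means swapping files while the scene prompt stays untouched. This is an illustrative model only; the function and field names below are hypothetical and not part of any ElevenLabs API.

```python
# Illustrative sketch: references carry the product, the prompt carries
# only the scene. Nothing here calls ElevenLabs.

SCENE_PROMPT = (
    "The product (see references) on a clean seamless studio backdrop, "
    "soft diffused lighting from above, subtle shadow underneath, "
    "shallow depth of field, photorealistic product shot."
)

def image_node_inputs(reference_photos, scene_prompt=SCENE_PROMPT):
    """Bundle the image node's inputs, keeping product and scene separate."""
    assert 1 <= len(reference_photos) <= 3, "use 1-3 reference photos"
    return {"references": list(reference_photos), "prompt": scene_prompt}

# Swapping products = swapping files; the scene prompt never changes.
mug_ad = image_node_inputs(["mug_front.jpg", "mug_34.jpg", "mug_top.jpg"])
bottle_ad = image_node_inputs(["bottle_front.jpg", "bottle_top.jpg"])
assert mug_ad["prompt"] == bottle_ad["prompt"]
```

The point of the separation is consistency: because the prompt never mentions the product, nothing in the text can drift the AI away from what the reference photos show.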
Swap the reference photos and the scene prompt for your next product and the second ad is done in minutes.

Going Further

If you want audio on the clip, add a Text to Speech or Music node and connect it into a Mix Audio node alongside the video. That is how Flows layers voice or a soundtrack onto the clip without leaving the canvas.

Once you have one template working, build a small library of Flows for the formats you use most: a 15-second hero ad, a 30-second explainer, a UGC talking head. Each one is a canvas you only build once, then clone forever.
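The clone-and-swap workflow can be modeled as copying an entire canvas and overriding only the product-specific parts. In Flows itself this is just Duplicate Flow plus two edits; the sketch below is a hypothetical data model of that idea, not real ElevenLabs code.

```python
# Illustrative model of Duplicate Flow: copy the whole canvas, then swap
# only the product references and scene prompt. Motion and audio settings
# carry over unchanged, which is why the second ad takes minutes.
from copy import deepcopy

hero_ad_template = {
    "duration_seconds": 15,
    "image_node": {"references": ["mug_front.jpg", "mug_34.jpg"],
                   "prompt": "The product (see references) on a studio backdrop."},
    "video_node": {"prompt": "Slow cinematic push-in, subtle camera drift."},
    "audio_node": {"type": "music", "prompt": "upbeat 15-second jingle"},
}

def clone_for_product(template, references, scene_prompt):
    """Duplicate the canvas, then override only what is product-specific."""
    flow = deepcopy(template)
    flow["image_node"]["references"] = list(references)
    flow["image_node"]["prompt"] = scene_prompt
    return flow

bottle_ad = clone_for_product(
    hero_ad_template,
    ["bottle_front.jpg", "bottle_top.jpg"],
    "The product (see references) on wet slate in soft morning light.",
)
# Everything except the product inputs is inherited from the template.
assert bottle_ad["video_node"] == hero_ad_template["video_node"]
assert bottle_ad["audio_node"] == hero_ad_template["audio_node"]
```

Keeping one such template per format (15-second hero, 30-second explainer, UGC talking head) is the code-shaped version of the Flow library the guide recommends.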







