Generate Photorealistic Car Visuals in seconds!
Built the first automotive GenAI image platform using Stable Diffusion. Drove adoption across top car brands and led the company to acquisition in under 12 months.
Overview
Role:
Co-Founder. Led Design, Product, and Business Strategy
Responsibility:
Identified a market gap, validated with an MVP, and iterated to product-market fit
Timeline (1 year):
Feb 2023 - Mar 2024
About The Project
I built Creatorloop, an Image GenAI platform that enabled auto dealerships and manufacturers to generate photorealistic vehicle visuals in seconds. The tool leveraged Stable Diffusion to eliminate the need for costly photoshoots, helping users create high-quality marketing assets effortlessly.

I led product and design end-to-end, scaled adoption across major brands, and drove the company to acquisition within 12 months.
Highlight
I started exploring diffusion models in September 2022 and quickly developed a strong passion for the space. While building Creatorloop, I began experimenting with AI video before it became mainstream.

Post-acquisition, I’ve remained deeply involved in the space 🙂
Endorsed by Google’s Head of Auto Retail
Early Experiments in AI Video
How It All Started
While building Node App, we discovered Stable Diffusion in October 2022 and saw it as an opportunity for our e-commerce brands to create content without influencers. It eventually evolved into its own product.
Overall Process
Spark
Initial Hypothesis
Our hypothesis was that brands would stop hiring influencers and instead use GenAI tools for social media content.
To test this, we:
  • Interviewed 30 businesses (10 agencies & 20 brands)
  • Conducted a competitive analysis of 25 GenAI products
Interview
After interviewing 30 marketing managers from companies like McCann Worldgroup, Jarritos, and Hershey, we learned:
  • Agencies liked the idea of using AI for ideation
  • Brands were excited about the opportunity to create content on demand
  • There was widespread skepticism about what AI could actually do
  • Very few had experimented with other GenAI tools
  • Large organizations didn’t know where GenAI would fit in their budgets
Competitive Analysis

Here are a few products I tested:
Takeaway From Research
Testing existing AI-image tools revealed major mismatches between how brands and agencies actually produce content and how these tools operate. The speed at which these products were being shipped highlighted a gap in the market and a huge opportunity to better address user needs.
Prototype
About The Experimentation
Despite brand excitement, a few aspects of the business model still needed validation. Instead of building software right away, we launched an AI image-generation service as an MVP, which let us stay close to customers and gather invaluable insights for the software development phase.
Objectives
The goal of the service was to understand:
  • How to sell GenAI to brands and validate the ideal customer profile (ICP)
  • The limitations of Stable Diffusion
  • The types of requests and outputs brands would want from AI
  • How much they’d be willing to pay for it
Service Blueprint
We conceptualized the service in a few hours and launched 5 pilots within the same week. I interfaced with the brands via Slack, while our data scientist used Stable Diffusion in the background to create content.
Learnings
Operating this service was a grind, but we learned so much! It gave us the conviction to start building the software.
Here are a few learnings:
  • Automotive brands have a higher willingness to pay than e-commerce brands
  • Background swapping fulfills more use cases than hyper-localized model creation (DreamBooth)
  • Using the Img2Img functionality is the best way to minimize bad outputs
  • Achieving the right lighting in an image involves using the product’s color as the base in the generation
  • If you can achieve an outcome without AI, don't use it. AI is unpredictable and produces inconsistent outputs
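The Img2Img learning above can be made concrete with Automatic1111's HTTP API (the `/sdapi/v1/img2img` endpoint and its field names come from the public WebUI API; the specific values below are hypothetical, not our production settings):

```python
# Hypothetical sketch of an Img2Img request body for Automatic1111's
# /sdapi/v1/img2img endpoint; the values are illustrative only.
import base64


def build_img2img_payload(image_path, prompt, denoising_strength=0.45):
    """Build the JSON body: the source photo goes in as base64."""
    with open(image_path, "rb") as f:
        init_image = base64.b64encode(f.read()).decode("utf-8")
    return {
        "init_images": [init_image],
        "prompt": prompt,
        # Low enough to keep the vehicle intact while regenerating
        # the background; higher values drift from the input photo.
        "denoising_strength": denoising_strength,
        "cfg_scale": 7,
        "steps": 30,
        "sampler_name": "DPM++ 2M Karras",
        "mask_blur": 4,
    }

# The payload would then be POSTed to a running WebUI instance, e.g.
# requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
```

Keeping denoising strength as an explicit parameter reflects the learning above: it is the main dial between preserving the product and getting a clean new background.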
Key Decision
After running the MVP, we decided to focus on automotive brands because:
  • High willingness to pay: Dealerships spend thousands per month on visual content and have a stronger budget appetite than typical e-commerce brands.
  • Lack of in-house creative resources: Most don’t have access to studios or design teams. We validated demand with 15+ pilot partners across Canada and the U.S.
  • Expensive production costs: Automotive photoshoots are significantly more costly than those in other verticals.
  • Repeatable use cases: Dealerships often use standardized image templates, making it easier to optimize AI outputs for Google Display Ads and CRM content.
Building
Overview
The initial goal of V1 was to speed up the AI-generation service. The product's AI workflows mirrored those used in our service, and its design and inner workings grew directly out of automating our manual image-generation process.
This section is divided into 3 parts:
  • Turn Stable Diffusion WebUI into an intuitive UI
  • Create good lighting via color extraction
  • Final Designs (Everything was done by me)
Turn Stable Diffusion WebUI into an intuitive UI
Stable Diffusion UI - Automatic1111
Settings Breakdown
  • Prompt: Partly customizable
  • Negative Prompt: No user edit needed
  • Steps: No user edit needed
  • Sampler: No user edit needed
  • CFG Scale: Depends on user needs
  • Seed: No user edit needed
  • Size: No user edit needed
  • Model: Depends on user needs
  • Denoising Strength: Depends on user needs
  • Mask Blur: Depends on user needs
Our analysis of the settings revealed that adjusting the prompt and model is essential to achieving the desired outcome, while the CFG scale, denoising strength, and mask blur vary with the selected model. We therefore focused the UI on the prompt and model settings.
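That decision can be sketched as a simple merge of hidden server-side defaults, per-model presets, and the user's two inputs (all names and numbers below are hypothetical, not the production configuration):

```python
# Illustrative sketch of hiding most Stable Diffusion settings behind
# server-side defaults; only the prompt and model are user-facing.
# All names and values are hypothetical, not Creatorloop's config.

# Settings the user never sees (per the breakdown above).
HIDDEN_DEFAULTS = {
    "negative_prompt": "blurry, distorted, low quality",
    "steps": 30,
    "sampler_name": "DPM++ 2M Karras",
    "seed": -1,     # random seed each run
    "width": 768,
    "height": 512,
}

# CFG scale, denoising strength, and mask blur vary with the model,
# so each model carries its own tuned preset.
MODEL_PRESETS = {
    "showroom": {"cfg_scale": 7, "denoising_strength": 0.45, "mask_blur": 4},
    "outdoor":  {"cfg_scale": 9, "denoising_strength": 0.55, "mask_blur": 6},
}


def build_settings(user_prompt, model):
    """Merge the user's two choices with defaults and model presets."""
    return {
        **HIDDEN_DEFAULTS,
        **MODEL_PRESETS[model],
        "prompt": f"photorealistic car, {user_prompt}",
    }
```

This keeps the full settings surface available internally while the user only ever touches the two controls that actually matter.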
Create good lighting via color extraction
The color extraction step occurs outside of Stable Diffusion. It ensures that the background lighting matches the uploaded image.

Here’s a hypothetical example:
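As a minimal sketch of the averaging idea (my own illustration, not the production implementation):

```python
# Minimal illustration of the color-extraction step; this is my own
# sketch, not Creatorloop's production code. In practice the pixels
# would come from the uploaded photo, e.g. via Pillow's Image.getdata().

def average_color(pixels):
    """Average RGB color of an iterable of (r, g, b) tuples."""
    pixels = list(pixels)
    n = len(pixels)
    return tuple(sum(channel) // n for channel in zip(*pixels))


def as_hex(rgb):
    """Hex string form, handy for seeding a background fill or prompt."""
    return "#{:02x}{:02x}{:02x}".format(*rgb)

# A warm red product photo yields a warm base color for the background:
# as_hex(average_color([(200, 30, 30), (210, 40, 20)]))  -> "#cd2319"
```

Feeding this base color into the generation is what keeps the background lighting consistent with the uploaded product shot.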
Final Designs
UI Designs
A selection of final UI screens from Creatorloop. See the Figma file below for more details.
Motion Design
All motion UIs were fully designed and animated by me to bring the product experience to life.
Takeaways & Future
GPU Learnings
Working on Creatorloop highlighted the importance of considering compute when designing for generative AI applications. By accounting for the cost of GPUs and potential wait times, designers can create more cost-effective experiences and thoughtfully design waiting states for users.
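As a back-of-the-envelope illustration of that compute math (every number here is a hypothetical placeholder, not an actual Creatorloop figure):

```python
# Rough GPU economics for a GenAI image product; all figures below
# are hypothetical placeholders, not Creatorloop's actual numbers.

def cost_per_image(gpu_hourly_usd, seconds_per_image, batch_size=1):
    """Estimated GPU cost (USD) to generate a single image."""
    images_per_hour = 3600 / seconds_per_image * batch_size
    return gpu_hourly_usd / images_per_hour

# e.g. a $1.50/hr GPU taking 12 s per image costs $0.005 per image,
# and that 12 s is also the floor for the user's waiting state.
```

The same two inputs that drive cost (seconds per image, batching) also set the wait the user experiences, which is why compute belongs in the design conversation.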
What I’d do differently
1. Designing for Long Waits
  • I wish I had been more intentional in designing wait states. I could have used them to convey product information or delight customers in various ways.
2. Switching from Automatic1111 (WebUI) to ComfyUI
  • We were proud of the workflows we built, but in hindsight, switching away from Automatic1111 sooner would’ve saved time and enabled faster iteration.
  • We developed an Img2Vid workflow but couldn't launch it: generation was too slow, and critical Stable Diffusion settings weren't supported in Automatic1111.
  • ComfyUI wasn’t available when we started, but it would’ve been ideal for dealership-specific styles. It would’ve made it easier to experiment with LoRA fine-tuning and scale model customization efficiently.