GET COMFY WITH LAIA:
Let Your Curiosity Get The Better Of You.
Hey {{first_name | Friend}},
Adam here, back with another spectacular student highlight.
What happens when AI isn't a shortcut, but a collaborator?
Trace of One is one of the most ambitious projects to come out of Lighthouse AI Academy: a full fashion film, from concept to color grade, built with AI.
Marta Musial and her team created the entire film with AI tools at every stage.
Yes, the model and outfits were as real as I am.
Everything else? Generated.
And the kicker: it's repeatable. Below is the exact path: concept, LoRAs, image and video generation, sound, and 6K finishing.
Curious how far AI can take you when you slow down and do it right?
Keep reading, my friends.
Trace of One: A Step-by-Step Breakdown of a Fully AI-Generated Fashion Film
Letโs walk through the process that brought it all to life and unpack the many lessons from this massive artistic undertaking.

Time to play: setting the mood
Step 1: Concept Comes First
The team spent over two weeks researching Mai Gidah's brand identity, sketching references, and gathering poems, films, and cultural visuals onto a shared Miro board.

On to the Miro board
They used ChatGPT as a thinking partner, drafting treatments, refining storyboards, and building out the journey of a character moving through four lands, each one altering the design and geometry of his outfit.

Sketching the scenes
The strongest ideas often come after giving space for messy, playful exploration.
Step 2: The LoRA Shoot (And Why It's Different)
Unlike a traditional fashion campaign, the dataset photoshoot focused on training material, not final images.
Neutral backdrops. Minimal lighting variation. Dozens of full-body, medium, and close-up shots of model Yoshua in each outfit.
The aim: Consistency. Expression. Range.
But details matter: a wrinkled shirt or a smudge of makeup becomes part of the dataset and replicates across generations.

Collecting real-life data from a real photo shoot
Next time, I'd include a stylist just to catch the details. A single fold can live on in every result.
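If you want to try a dataset shoot of your own, here's a minimal sketch of how you might plan coverage before the day. The outfit names, framings, and expressions below are placeholders, not the Trace of One shot list; the point is simply to enumerate every combination so nothing gets missed on set.

```python
from itertools import product

# Sketch of a pre-shoot checklist for a LoRA dataset:
# enumerate every outfit x framing x expression combination.
# All names are placeholders, not the team's actual shot list.
outfits = ["outfit_01", "outfit_02", "outfit_03", "outfit_04"]
framings = ["full body", "medium", "close-up"]
expressions = ["neutral", "soft smile", "profile"]

shot_list = [
    f"{outfit} | {framing} | {expression}"
    for outfit, framing, expression in product(outfits, framings, expressions)
]

print(f"{len(shot_list)} planned shots")  # 4 x 3 x 3 = 36
for shot in shot_list[:5]:
    print(shot)
```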
Step 3: Training LoRAs That Actually Work
The team trained multiple LoRAs: one for the modelโs identity and four for the outfits.
Each needed to balance flexibility with fidelity: results that stay consistent across scenes.
They experimented with captioning strategies and lighting conditions, noting how even small biases in the dataset would affect the final look.
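To make that flexibility-versus-fidelity trade-off concrete, here's a minimal sketch of a LoRA configuration using the peft library. The rank, alpha, and target modules are illustrative defaults, not the settings the team actually trained with.

```python
from peft import LoraConfig

# Illustrative LoRA config for a diffusion UNet's attention layers.
# Rank (r) controls capacity: higher ranks copy a small dataset more
# faithfully but overfit faster, so checkpoint often and stop early.
identity_lora = LoraConfig(
    r=16,                  # try roughly 8-32; small datasets usually want lower ranks
    lora_alpha=16,         # scaling factor, often set equal to r
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],  # UNet attention projections
)
# In recent diffusers versions you can attach this to a UNet with
# unet.add_adapter(identity_lora) before running your training loop.
```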

Colorful LoRA experimentation
Even one image too many can tip a LoRA toward overfitting. Knowing when to stop matters just as much as knowing how to start.
Step 4: Image Generation Is a Film Pipeline Now
The image stage wasn't about perfection, but rather about flow.
The team thought in sequences. Every frame was designed to work in motion.
They used ControlNets (pose, depth, Canny) to ground consistency, inpainting to refine character and outfit accuracy, and ComfyUI as the central tool for merging all layers.
Upscaling was a key hurdle: subtle changes in texture or tone could completely shift the garment or face. A second round of inpainting was often required just to hold visual consistency before video generation.
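For readers who prefer code to nodes, here is roughly what the ControlNet grounding step looks like outside ComfyUI, sketched with Hugging Face diffusers. The model IDs, LoRA filename, and prompt are placeholders, and the team's actual ComfyUI graph would differ.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load a Canny-edge ControlNet alongside a Stable Diffusion base model.
# Model IDs are illustrative; swap in whatever base/ControlNet pair you use.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Hypothetical identity LoRA from Step 3; path and filename are placeholders.
pipe.load_lora_weights("./loras", weight_name="model_identity.safetensors")

# Turn a reference frame into an edge map so pose and composition stay locked
# while the prompt and LoRAs handle identity and garment detail.
frame = np.array(Image.open("reference_frame.png").convert("RGB"))
edges = cv2.Canny(frame, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

result = pipe(
    prompt="model walking through a windswept land, geometric layered garment",
    image=control_image,
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
result.save("sequence_frame_0001.png")
```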

The path to visual consistency is key
The goal wasnโt perfect images. It was consistent material that could survive inpainting, upscaling, and motion.
Step 5: Bringing It to Life (Video, Sound & Story)
They used Kling 1.6 for most videos, later testing Kling 2 when it became available. Prompting was everything: to simulate real fabric movement, each outfit was described in careful detail, from texture to weight.
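The snippet below is a hedged template for that kind of garment description, not one of the team's actual Kling prompts; the field names and wording are just one way to force yourself to cover texture, weight, and motion every time.

```python
# Template for describing fabric behavior to an image-to-video model.
# Field values are placeholders; the point is covering texture, weight, and motion.
outfit = {
    "garment": "layered geometric coat",
    "texture": "heavy woven wool with visible grain",
    "weight": "stiff at the shoulders, heavy hem that swings slowly",
    "motion": "slow walk toward camera, fabric trailing half a beat behind the body",
}

video_prompt = (
    f"{outfit['garment']}, {outfit['texture']}, {outfit['weight']}; "
    f"{outfit['motion']}; soft directional light, 35mm lens, shallow depth of field"
)
print(video_prompt)
```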

Tests and iteration
Every shot required iteration. The team would test 10 versions of a clip, compare outcomes, and edit only the strongest into the final film.
Postproduction followed a surprisingly traditional path: smart upscaling, DaVinci Resolve for color, and a soundtrack blending stock music with ElevenLabs-generated tracks.

And voilà, we have arrived at postproduction
At the end of the day, it's still storytelling. AI didn't remove the need for an editor; it just gave me different raw materials.
Why It Matters
Trace of One is much more than an AI experiment; it's a production-grade proof of concept.
It shows what happens when creators don't just prompt, but craft.
It proves that AI workflows can carry emotion, identity, and consistency if you treat them with the same rigor youโd apply to any creative job.
Creativity doesnโt get replaced by AI. It gets expanded by it.
Watch the Full Film: Trace of One in 4K
You've read the process; now it's time to see the result.
This 2.5-minute fashion campaign was built entirely with AI: from concept to camera movement, from character to color grade.
Watch on YouTube
AI Image & Film Credits: Built by a Global Team of Creators
Trace of One wasn't built by one person: it was a collaboration across disciplines, continents, and craft.
Here's the team behind the magic:
Homework: Creating Your Trace
Concept before prompts: 30-minute messy moodboard (words, visuals, music). No judgment.
Dataset logic: Mini shoot (neutral backdrop, angles, expressions). Note what a LoRA would learn.
Caption test: Duplicate 12 images; caption one set descriptively and one minimally, then compare outputs (a small helper script follows this list).
Sequence thinking: Pick 5 images and check if they read as a scene, not just singles.
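Here's a small helper for the caption test above: it copies your images into two folders and seeds each with a placeholder caption file. The paths, trigger word, and caption text are all assumptions for you to replace with your own.

```python
import shutil
from pathlib import Path

# Homework helper: duplicate a small image set and caption each copy differently.
# Paths, trigger word, and caption text are placeholders.
src = Path("caption_test/source")  # your 12 images
variants = {
    "descriptive": "ohwx person, full body, soft window light, wrinkled linen shirt, grey backdrop",
    "minimal": "ohwx person",
}

for name, placeholder_caption in variants.items():
    dst = Path("caption_test") / name
    dst.mkdir(parents=True, exist_ok=True)
    for img in sorted(src.glob("*.jpg")):
        shutil.copy(img, dst / img.name)
        # In practice you'd tailor the descriptive caption to each image;
        # this just scaffolds the two sets so you can train one LoRA per folder.
        (dst / img.name).with_suffix(".txt").write_text(placeholder_caption + "\n")
```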
That's your challenge: not to finish anything, but to practice the workflow, stage by stage.
Iteration = mastery
2026 Cohorts & Affiliate Program Now Open
Applications are officially open for all three Lighthouse AI tracks:
Basic ComfyUI
Build visual workflows from scratch with ComfyUI: no prior node experience required.
Who itโs for: Artists, designers, and creatives who want solid AI foundations.
Basic ComfyUI Course
Advanced ComfyUI
Scale from prototype to pipeline: batching, automation, quality control.
Who itโs for: Technical artists, pipeline builders, VFX supervisors, and advanced users.
Advanced ComfyUI Course
AI for Creative Leaders
Design systems, not just outputs: workflow design, ethics, team integration, portfolio project.
Who itโs for: Creative directors, studio leads, strategists, innovation teams.
AI for Creative Leaders
Affiliate Program: Everybody Wins!
Your network gets 10% off any LAIA course.
You receive 10% back for every enrollment you refer.
We've seen one alum earn €1,200 in tuition credits by sharing seats.

Simply put your referrer's name on the application form and we'll handle the rest.
Your referrals can write your name down in the application process, and we will take care of the rest.
This is community-driven growth. And by helping others step into AI, you earn in the process.
Closing Sentiments
This issue was a simple truth in motion:
AI doesn't replace your craft; it extends it.
Start with concept, design for consistency (datasets → LoRAs → ControlNets → inpainting → upscaling), and think in sequences, not single frames.
Most of all, create together; collaboration makes the work deeper and a whole lot more enjoyable, too!
That's it for now: thanks for reading and for building this new era with us.
AI moves fast, but there's always room to make something honest, useful, and yours.
If you're ready to level up in 2026, applications are open for Basic ComfyUI, Advanced ComfyUI, and AI for Creative Leaders.
Bring a friend through the Affiliate Program (they get 10% off, you earn 10% back) and grow alongside people who care about creating as much as you do.
Step by step, node by node, and away we go.
Keep creating and always remember to have fun.
- Adam & the Lighthouse AI Academy Team

Small Team, Big Dreams