Get Comfy with LAIA
Ready to Build Your Way to the Future?
Join AI for Architects Starting September 22nd!
Hey {{first_name | Friend}},
What a phenomenal start!
Lighthouse AI Academy's ComfyUI cohort kicked off last week, and our students' thrill at stepping into a new creative paradigm was rivalled only by the joy we felt at the prospect of molding fresh minds.
Picture 20+ people (VFX supervisors, software architects, senior visualizers, advertising creative directors) all showing up, eager to learn and ready to build.
This was so much more than just another online class.
It felt like the start of something bigger: a group of the world's top creative minds diving into generative AI together, side by side, node by node.
Led by Maged Elbanna (architect + AI researcher) with support from Dario van Houwelingen (3D visualization + AI product developer), the first session set the stage for what's to come:
13 weeks of deep practice, collaboration, and projects that push the boundaries of what AI and design can do together.
What We Covered on Day 1
The fastest way through the fog is together.
The goal: To provide everyone with a working mental model of how GenAI generates images and how that maps one-to-one to ComfyUI's nodes.

What is GenAI?
Core concepts covered (theory)
Transformers (text): prompts become vectors the model can process.
CLIP (text ↔ image): aligns words and pictures so text can "aim" images.
Diffusion: add noise (training), then remove it (generation) to reveal an image.
Guidance (CFG): controls how tightly the image follows your prompt.
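To make CFG concrete, here is a tiny PyTorch sketch (tensor names and values are made up for illustration): at each step the model predicts noise twice, once with your prompt and once without, and the CFG value decides how far to lean toward the prompted prediction.

```python
import torch

def cfg_noise(eps_uncond: torch.Tensor, eps_cond: torch.Tensor, cfg: float) -> torch.Tensor:
    """Classifier-free guidance: blend the 'no prompt' and 'with prompt' noise
    predictions. cfg = 1.0 ignores the prompt; higher values follow it more
    tightly, at the cost of variety and naturalness."""
    return eps_uncond + cfg * (eps_cond - eps_uncond)

# Toy tensors standing in for the model's two noise predictions at one step.
eps_uncond = torch.randn(1, 4, 128, 128)   # prediction with an empty prompt
eps_cond = torch.randn(1, 4, 128, 128)     # prediction with your prompt
guided = cfg_noise(eps_uncond, eps_cond, cfg=7.0)
```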

How humans describe → how models encode
The text prompt acts as a guidance mechanism in Stable Diffusion, steering the denoising process from pure noise towards an image that corresponds to the text.
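To see the "words become vectors" step in isolation, here is a small sketch using Hugging Face's CLIP text encoder (the model ID is one public example); ComfyUI's CLIP Text Encode node does essentially the same thing with the checkpoint's own encoder:

```python
from transformers import CLIPTokenizer, CLIPTextModel

# The text encoder used by SD 1.x checkpoints (SDXL adds a second, larger one).
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompt = "a timber pavilion in a misty forest, golden hour"
tokens = tokenizer(prompt, padding="max_length", max_length=77, return_tensors="pt")
conditioning = text_encoder(**tokens).last_hidden_state

# 77 token positions, each a 768-dimensional vector: this is the "conditioning"
# that the sampler uses to aim the denoising process.
print(conditioning.shape)   # torch.Size([1, 77, 768])
```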
How that shows up in ComfyUI (practice)
Load Checkpoint (model)
K Sampler (the engine; takes model + conditioning + latent)
Positive / Negative Conditioning (your prompts)
Latent (the canvas) → VAE Decode → Preview Image
ComfyUI is node-based, like Houdini/Grasshopper.
If you know inputs/outputs, you'll feel at home.
The Basic Text-To-Image Workflow (We Built It Together)
Step 1 – Load your model
Add Load Checkpoint.
Pick your checkpoint (e.g., SDXL).

Switching checkpoints: why the second SD model changes results
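A checkpoint is really a bundle of the three things the rest of the graph needs (the diffusion model itself, the CLIP text encoder, and the VAE), which is why the Load Checkpoint node has three outputs. A rough sketch of the same idea in Hugging Face diffusers (model ID as an example); swap the checkpoint and all three change at once, which is why a second SD model gives such different results:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",   # try a different checkpoint here and compare
    torch_dtype=torch.float16,
)

# The three outputs of ComfyUI's Load Checkpoint node, by other names:
print(type(pipe.unet).__name__)           # the diffusion MODEL
print(type(pipe.text_encoder).__name__)   # CLIP (SDXL actually ships two text encoders)
print(type(pipe.vae).__name__)            # the VAE
```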
Step 2 – Set the sampler
Add K Sampler.
Choose sampler and scheduler (how noise is removed over time).
Tip: ~40 steps is a solid default; going >50 often brings diminishing returns.

Sampler & scheduler: the time plan for denoising
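For a feel of what this choice means outside the GUI, here is a rough diffusers sketch (model ID as an example; ComfyUI exposes the same decisions as the sampler, scheduler, and steps fields on the K Sampler). The scheduler is a swappable component that sets the denoising time plan, and steps is just an argument at generation time:

```python
import torch
from diffusers import (StableDiffusionXLPipeline,
                       DPMSolverMultistepScheduler, EulerDiscreteScheduler)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Swap the denoising "time plan" without touching the model weights.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
# pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a timber pavilion in a misty forest, golden hour",
    num_inference_steps=40,   # ~40 is a solid default; >50 rarely pays off
).images[0]
image.save("pavilion.png")
```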
Step 3 – Add prompts (guidance)
Create CLIP Text Encode (positive) and CLIP Text Encode (negative).
Connect both to K Sampler (required by design).
Negatives we used in class: watermark, low quality, nsfw.
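Why does the K Sampler insist on both? Because the negative prompt is not a filter applied afterwards: it stands in for the "no prompt" baseline in the CFG formula from the theory section, so guidance actively steers away from it. A toy sketch with made-up tensors:

```python
import torch

cfg = 7.0
eps_positive = torch.randn(1, 4, 128, 128)  # noise prediction given the positive prompt
eps_negative = torch.randn(1, 4, 128, 128)  # prediction given "watermark, low quality, nsfw"

# Same guidance formula as before, with the negative conditioning as the baseline:
guided = eps_negative + cfg * (eps_positive - eps_negative)
```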
Step 4 – Define the canvas (latent)
Add Empty Latent Image (e.g., 1024×1024 for SDXL).
Connect it to K Sampler.
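The "empty latent" really is just a tensor of the right shape. Roughly speaking, SDXL's VAE compresses the image by a factor of 8 in each direction into 4 latent channels, so a 1024×1024 canvas looks like this:

```python
import torch

width, height, batch_size = 1024, 1024, 1

# SDXL's VAE works at 1/8 resolution with 4 latent channels, so a 1024x1024
# canvas is a 1x4x128x128 tensor. Empty Latent Image starts it at zeros;
# the K Sampler then fills it with noise derived from the seed.
latent = torch.zeros(batch_size, 4, height // 8, width // 8)
print(latent.shape)   # torch.Size([1, 4, 128, 128])
```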
Step 5 – Decode + preview
Add VAE Decode → Preview Image from the sampler's latent output.
Run. Watch the noise clear into an image.
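For the curious, here is roughly what VAE Decode does under the hood, sketched with the diffusers AutoencoderKL (repository ID as an example; the zero latent is only a stand-in so the shapes are visible):

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="vae"
)

latent = torch.zeros(1, 4, 128, 128)   # stand-in for the sampler's finished latent
with torch.no_grad():
    image = vae.decode(latent / vae.config.scaling_factor).sample

print(image.shape)   # torch.Size([1, 3, 1024, 1024]): back in pixel space
```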

End-to-end graph: Load Checkpoint → K Sampler → VAE Decode → Preview
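And if you ever want to drive this graph from a script instead of the browser, ComfyUI accepts the same workflow as JSON on its /prompt endpoint. Here is a minimal sketch of the Day 1 graph in that API format; the node class names and input names follow a standard ComfyUI install, so double-check them against your version and swap in your own checkpoint filename:

```python
import json
import urllib.request

# The Day 1 graph in ComfyUI's API ("prompt") format. Node ids are arbitrary
# strings; ["1", 0] means "output 0 of node 1". SaveImage stands in for the
# Preview Image node so the result lands on disk when scripted.
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a timber pavilion in a misty forest"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "watermark, low quality, nsfw"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 40, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "day1"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",                      # ComfyUI's default port
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```

In recent ComfyUI versions, enabling dev mode adds a Save (API Format) option that exports exactly this structure, so you can build the graph visually first and script it later.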
Pro tip: Pack multiple edits into one pass; upscale at the end to avoid "copy-of-a-copy" degradation.
Security & Setup Notes (From Live Q&A)
Open-source safety: use isolated/dev machines for installs and custom nodes; prefer reputable sources; the safetensors format helps mitigate risk (see the sketch after this list).
Mac vs CUDA: Mac is fine for learning, but not practical for production speed. Windows + NVIDIA (CUDA) is recommended for control/perf.
Cloud GPUs: good option for Mac; weigh convenience vs data/privacy.
Dev → Prod: keep a breakable dev Comfy instance; promote stable graphs to prod.
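One concrete reason safetensors is preferred: legacy .ckpt files are Python pickles, which can execute code when loaded, while a safetensors file stores raw tensors only. A minimal sketch (the file path is hypothetical):

```python
from safetensors.torch import load_file

# Loads raw tensors only: no pickled Python objects, so nothing executes on load.
# (The path is just an example; point it at a checkpoint in your models folder.)
state_dict = load_file("models/checkpoints/sd_xl_base_1.0.safetensors")
print(len(state_dict), "tensors loaded")
```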
Why It Matters
ComfyUI isn't just another tool. It's the backbone of modern generative AI workflows: modular, open, and endlessly adaptable.
By mastering its fundamentals, this cohort is going well beyond learning how to push buttons; they're learning a new creative language.
And with such a high-caliber group (spanning film, gaming, architecture, design, and advertising), the cross-pollination of ideas is already incredible.
Homework: Build & Test Your First Workflow
This week, try this exercise:
Rebuild the basic text-to-image workflow from scratch in ComfyUI.
Experiment with two different samplers and two different schedulers; note how the images change.
Compare 30 vs. 40 vs. 50 steps; capture both the results and the generation time (a timing sketch follows this list).
Share one before/after + tag us on LinkedIn.
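If you want to log the 30/40/50 comparison systematically, here is a rough timing harness (written with diffusers because it is easy to script; your ComfyUI timings will differ, but the trend should match):

```python
import time
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a timber pavilion in a misty forest, golden hour"
for steps in (30, 40, 50):
    start = time.time()
    image = pipe(
        prompt,
        num_inference_steps=steps,
        generator=torch.Generator("cuda").manual_seed(42),  # fixed seed = fair comparison
    ).images[0]
    image.save(f"pavilion_{steps}steps.png")
    print(f"{steps} steps: {time.time() - start:.1f}s")
```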
You'll be surprised how much confidence comes just from rebuilding and tweaking; repetition turns the theory into muscle memory.

Practice time: rebuild the graph, prompt + negative, 1024×1024 (SDXL)
Whatโs Next
Next session: Samplers & Schedulers in depth; how and when to choose each.
Then: Conditioning (regional, ControlNets) and model components (LoRAs, IP-Adapters).
Upcoming Cohorts: Enrol Now
AI for Architects – Sept 22–Dec 12, 2025 (limited seats)
What You'll Build:
A concept-to-visualization pipeline: sketches → cinematic renders → client-ready masterplans.
A repeatable workflow using ComfyUI + advanced pipelines.
A signature look via LoRA training (your own style model).
Who It's For:
Architects, viz specialists, urban planners, studio owners, computational designers, and creative technologists.
Outcomes:
Portfolio-ready project
Faster iterations with better control
A network of peers pushing the field forward
Apply for AI for Architects before seats run out!
AI for Creative Leaders – Sept 9
Lead teams that ship repeatable, commercial AI workflows.
AI for Creative Leaders
Closing Sentiments
If the first session is anything to go by, this cohort will be something special.
The energy, the talent, the curiosity: it's all there!
We'll keep sharing the journey, but for now: here's to new workflows, new connections, and a new season of creativity.
That's it for now: thanks for reading and building this new era with us.
We know this will be our most successful series of cohorts yet. Don't miss out and come learn with the best; we'd love to have you.
Step by step, node by node, and away we go!
Keep creating and always remember to have fun.
– Adam & the Lighthouse AI Academy Team

Small Team, Big Dreams