Get Comfy with LAIA

Ready to Build Your Way to the Future?
Join AI for Architects Starting September 22nd!

Hey {{first_name | Friend}} 👋

What a phenomenal start!

Lighthouse AI Academy’s ComfyUI cohort kicked off last week, and our students’ thrill at stepping into new creative paradigms was rivalled only by our own joy at welcoming so many fresh minds.

Picture 20+ people — VFX supervisors, software architects, senior visualizers, advertising creative directors — all showing up, eager to learn and ready to build.

This was so much more than just another online class. 

It felt like the start of something bigger: a group of the world’s top creative minds diving into generative AI together, side by side, node by node.

Led by Maged Elbanna (architect + AI researcher) with support from Dario van Houwelingen (3D visualization + AI product developer), the first session set the stage for what’s to come: 

13 weeks of deep practice, collaboration, and projects that push the boundaries of what AI and design can do together.

What We Covered on Day 1

The fastest way through the fog is together.

— Nejc Susec

The goal: To provide everyone with a working mental model of how GenAI generates images and how that maps one-to-one to ComfyUI’s nodes.

What is GenAI?

Core concepts covered (theory)

  • Transformers (text) → prompts become vectors the model can process.

  • CLIP (text–image) → aligns words and pictures so text can “aim” images.

  • Diffusion → add noise (training), then remove it (generation) to reveal an image.

  • Guidance (CFG) → controls how tightly the image follows your prompt.
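The CFG bullet above boils down to one line of arithmetic. As a minimal sketch (toy numbers standing in for real latent tensors, and `cfg_combine` is just an illustrative name, not a ComfyUI function):

```python
def cfg_combine(uncond, cond, guidance_scale):
    # Classifier-free guidance: start from the unconditional noise prediction
    # and push toward the prompt-conditioned one. A higher scale means the
    # image follows the prompt more tightly.
    return [u + guidance_scale * (c - u) for u, c in zip(uncond, cond)]

# Toy two-element "noise predictions" standing in for real latents:
uncond = [0.0, 0.0]
cond = [1.0, -1.0]

print(cfg_combine(uncond, cond, 1.0))  # [1.0, -1.0] — follows the conditioned prediction as-is
print(cfg_combine(uncond, cond, 7.5))  # [7.5, -7.5] — exaggerates the prompt's pull
```

At scale 1.0 the guidance term vanishes into the conditioned prediction; at the common 7–8 range the prompt's direction is amplified, which is why very high CFG values can over-saturate or distort images.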

How humans describe → how models encode 

The text prompt acts as a guidance mechanism in Stable Diffusion, directing the denoising process from noise towards an image that corresponds to the text.

— Maged Elbanna

How that shows up in ComfyUI (practice)

  • Load Checkpoint (model)

  • K Sampler (the engine; takes model + conditioning + latent)

  • Positive / Negative Conditioning (your prompts)

  • Latent (the canvas)

  • VAE Decode

  • Preview Image

ComfyUI is node-based, like Houdini/Grasshopper.
If you know inputs/outputs, you’ll feel at home.

— Maged Elbanna

The Basic Text-To-Image Workflow (We Built It Together)

Step 1 — Load your model

  • Add Load Checkpoint.

  • Pick your checkpoint (e.g., SDXL).

Switching checkpoints: why the second SD model changes results

Step 2 — Set the sampler

  • Add K Sampler.

  • Choose sampler and scheduler (how noise is removed over time).

  • Tip: ~40 steps is a solid default; going >50 often brings diminishing returns.

Sampler & scheduler: the time plan for denoising
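To make “the time plan for denoising” concrete: a scheduler is essentially a list of noise levels the sampler walks down. The sketch below implements the well-known Karras schedule (one of the schedulers ComfyUI offers); the `sigma_min`/`sigma_max` values are typical SD-style defaults chosen for illustration, not settings from class.

```python
def karras_sigmas(n_steps, sigma_min=0.03, sigma_max=14.6, rho=7.0):
    # The Karras schedule: noise levels fall steeply at first, then flatten
    # out near the end, spending more of the step budget on fine detail.
    ramp = [i / (n_steps - 1) for i in range(n_steps)]
    inv_max = sigma_max ** (1 / rho)
    inv_min = sigma_min ** (1 / rho)
    return [(inv_max + t * (inv_min - inv_max)) ** rho for t in ramp]

sigmas = karras_sigmas(40)
print(round(sigmas[0], 2), round(sigmas[-1], 2))  # 14.6 0.03
```

Comparing the tail of a 40-step list against a 50-step one also shows why extra steps bring diminishing returns: the late sigmas are already tiny, so each added step removes almost no noise.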

Step 3 — Add prompts (guidance)

  • Create CLIP Text Encode (positive) and CLIP Text Encode (negative).

  • Connect both to K Sampler (required by design).

  • Negatives we used in class: watermark, low quality, nsfw.

Step 4 — Define the canvas (latent)

  • Add Empty Latent Image (e.g., 1024×1024 for SDXL).

  • Connect it to K Sampler.
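A quick sketch of why the latent is called a “canvas”: Stable Diffusion’s VAE compresses the image 8× in each spatial dimension into a 4-channel latent, so the sampler works on something much smaller than the final picture (the helper name below is illustrative):

```python
def latent_shape(width, height, channels=4, downscale=8):
    # Stable Diffusion's VAE downsamples 8x per side into 4 latent channels,
    # so dimensions must be divisible by 8.
    assert width % downscale == 0 and height % downscale == 0
    return (channels, height // downscale, width // downscale)

print(latent_shape(1024, 1024))  # (4, 128, 128) for the SDXL default used in class
```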

Step 5 — Decode + preview

  • Add VAE Decode, then Preview Image, fed from the sampler’s latent output.

  • Run. Watch the noise clear into an image.

End-to-end graph: Load Checkpoint → K Sampler → VAE Decode → Preview
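For reference, the same graph can be written out in ComfyUI’s API-format JSON, where each node has a `class_type` and inputs, and a `["<node_id>", <output_index>]` pair wires one node’s output into another. This is a sketch: the node ids, prompts, and checkpoint filename are illustrative, though the node class names match ComfyUI’s built-ins.

```python
import json

# Node 1's outputs are MODEL (0), CLIP (1), VAE (2) — note how both text
# encoders share the CLIP output, and the sampler pulls everything together.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sdxl_base.safetensors"}},  # placeholder name
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"clip": ["1", 1], "text": "a lighthouse at dawn"}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"clip": ["1", 1], "text": "watermark, low quality, nsfw"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 0, "steps": 40, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "PreviewImage",
          "inputs": {"images": ["6", 0]}},
}

print(json.dumps(workflow, indent=2)[:60], "...")
```

Reading a graph this way is a good sanity check on the mental model: every connection you drag in the UI is just one of these `["node", index]` references.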

🌞 Pro tip: Pack multiple edits into one pass; upscale at the end to avoid “copy-of-a-copy” degradation.

Security & Setup Notes (From Live Q&A)

  • Open-source safety: use isolated dev machines for installs and custom nodes; prefer reputable sources; the safetensors format helps mitigate risk.

  • Mac vs CUDA: Mac is fine for learning, but not practical for production speed. Windows + NVIDIA (CUDA) is recommended for control/perf.

  • Cloud GPUs: a good option for Mac users; weigh convenience against data privacy.

  • Dev → Prod: keep a breakable dev Comfy instance; promote stable graphs to prod.

Why It Matters 💡

ComfyUI isn’t just another tool. It’s the backbone of modern generative AI workflows: modular, open, and endlessly adaptable.

By mastering its fundamentals, this cohort is going well beyond learning how to push buttons; they’re learning a new creative language.

And with such a high-caliber group (spanning film, gaming, architecture, design, and advertising), the cross-pollination of ideas is already incredible.

🧠 Homework: Build & Test Your First Workflow

This week, try this exercise:

  1. Rebuild the basic text-to-image workflow from scratch in ComfyUI.

  2. Experiment with two different samplers and two different schedulers — note how the images change.

  3. Compare 30 vs. 40 vs. 50 steps — capture both the results and the generation time.

  4. Share one before/after and tag us on LinkedIn.

You’ll be surprised how much confidence comes just from rebuilding and tweaking — repetition turns the theory into muscle memory.

Practice time: rebuild the graph, prompt + negative, 1024×1024 (SDXL)

What’s Next

  • Next session: a deep dive into Samplers & Schedulers, and how/when to choose them.

  • Then: Conditioning (regional, control nets) and model components (LoRAs, IP-Adapters).

Upcoming Cohorts: Enrol Now

👷‍♂️ AI for Architects — Sept 22–Dec 12, 2025 (limited seats)

What You’ll Build:

  • A concept-to-visualization pipeline: sketches → cinematic renders → client-ready masterplans.

  • A repeatable workflow using ComfyUI + advanced pipelines.

  • A signature look via LoRA training (your own style model).

Who It’s For:

Architects, viz specialists, urban planners, studio owners, computational designers, and creative technologists.

Outcomes:

  • Portfolio-ready project

  • Faster iterations with better control

  • A network of peers pushing the field forward

👉 Apply for AI for Architects before seats run out!

🤹 AI for Creative Leaders — Sept 9

Lead teams that ship repeatable, commercial AI workflows.

👉 AI for Creative Leaders

🌄 Closing Sentiments

If the first session is anything to go by, this cohort will be something special. 

The energy, the talent, the curiosity: it’s all there!

We’ll keep sharing the journey, but for now: here’s to new workflows, new connections, and a new season of creativity.

That’s it for now. Thanks for reading and building this new era with us.

We know this will be our most successful series of cohorts yet. Don’t miss out and come learn with the best; we’d love to have you.

Step by step, node by node — and away we go!

Keep creating and always remember to have fun.

Small Team, Big Dreams
