Installing ComfyUI on Ubuntu 24.04 is straightforward, but if you want to do it the clean, professional way, this guide shows how to deploy it in /opt/ComfyUI, configure it for GPU acceleration, and run it automatically on system boot with systemd.
Whether you're an artist or an engineer, this setup keeps things modular and well integrated with the system.
Prerequisites
- Ubuntu 24.04 (fresh or updated)
- An NVIDIA GPU (with at least 4 GB VRAM)
- sudo access
1. Install System Dependencies
sudo apt update
sudo apt install git python3 python3-venv python3-dev build-essential libgl1 ssh systemd-timesyncd -y
2. Install NVIDIA GPU Drivers
sudo ubuntu-drivers list --gpgpu
sudo apt update
sudo ubuntu-drivers install --gpgpu
sudo apt install nvidia-cuda-toolkit
sudo reboot
After reboot, verify with:
nvidia-smi
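If you only want the key facts rather than the full table, nvidia-smi can also be queried directly; the flags below are standard nvidia-smi options, shown here as a convenience.
# Compact check: GPU name, driver version, and total VRAM
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv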
3. Clone ComfyUI to /opt
sudo git clone https://github.com/comfyanonymous/ComfyUI.git /opt/ComfyUI
sudo chown -R $USER:$USER /opt/ComfyUI
4. Set Up Python Virtual Environment
cd /opt/ComfyUI
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip wheel
pip install -r requirements.txt
5. (Optional) Install GPU-Accelerated PyTorch
pip install torch torchvision
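Before moving on, it's worth confirming that PyTorch can actually see the GPU. This quick check uses standard PyTorch calls; run it with the virtual environment still activated.
# Should print "True" and your GPU name; "False" means the driver or CUDA build isn't right
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no GPU')"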

6. Create Model Folders
Note: the chown below and the commands in steps 7 and 8 run as the comfyui service user, which isn't created until step 9. Create the comfyui group and user first (see the groupadd/useradd commands there), then continue here.
sudo mkdir -p /opt/ComfyUI/models/checkpoints
sudo mkdir -p /opt/ComfyUI/models/vae
sudo mkdir -p /opt/ComfyUI/models/clip
sudo mkdir -p /opt/ComfyUI/models/upscale_models
sudo mkdir -p /opt/ComfyUI/models/controlnet
sudo chown -R comfyui:comfyui /opt/ComfyUI/models
7. (Optional) Download Default Models
# Stable Diffusion 1.5 Checkpoint
sudo -u comfyui wget -O /opt/ComfyUI/models/checkpoints/v1-5-pruned-emaonly.safetensors \
https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors
# VAE for SD1.5
sudo -u comfyui wget -O /opt/ComfyUI/models/vae/vae-ft-mse-840000-ema-pruned.safetensors \
https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors
# ControlNet Canny
sudo -u comfyui wget -O /opt/ComfyUI/models/controlnet/control_canny-fp16.safetensors \
https://huggingface.co/lllyasviel/ControlNet/resolve/main/models/control_canny-fp16.safetensors
8. (Optional) Install Custom Nodes
sudo -u comfyui git clone https://github.com/ltdrdata/ComfyUI-Manager.git /opt/ComfyUI/custom_nodes/ComfyUI-Manager
sudo -u comfyui git clone https://github.com/Fannovel16/comfyui-reactor-node.git /opt/ComfyUI/custom_nodes/comfyui-reactor-node
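Many custom nodes ship their own requirements.txt. If one does, install it into the same virtual environment so the service can import its dependencies; a hedged example, assuming ComfyUI-Manager includes such a file:
# Install a node's Python dependencies into the ComfyUI venv (only if the file exists)
if [ -f /opt/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt ]; then
  /opt/ComfyUI/.venv/bin/pip install -r /opt/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt
fi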

9. Create a Systemd Service
Create the user:
sudo groupadd comfyui
sudo useradd -m -g comfyui comfyui
sudo usermod -a -G comfyui $USER
sudo chown -R comfyui:comfyui /opt/ComfyUI
sudo chmod -R 775 /opt/ComfyUI
Create the systemd service file:
sudo nano /etc/systemd/system/comfyui.service
Paste this:
[Unit]
Description=ComfyUI Service
After=network-online.target
Wants=network-online.target

[Service]
User=comfyui
WorkingDirectory=/opt/ComfyUI
ExecStart=/opt/ComfyUI/.venv/bin/python3 main.py --listen 0.0.0.0
Restart=always

[Install]
WantedBy=multi-user.target
Enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable comfyui
sudo systemctl start comfyui
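To confirm the service actually came up, check its status and tail the logs with the usual systemd tooling:
# Check service state and follow the ComfyUI log output
sudo systemctl status comfyui
sudo journalctl -u comfyui -f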
10. Access ComfyUI
If everything started correctly, go to:
http://<your-server-ip>:8188
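If the page doesn't load, make sure port 8188 (ComfyUI's default) is reachable. A quick check from the server itself, plus an optional firewall rule if you use ufw:
# Verify the web server responds locally
curl -I http://127.0.0.1:8188
# If ufw is enabled, open the port (skip this if you don't use ufw)
sudo ufw allow 8188/tcp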

Integrating ComfyUI into n8n Workflows
One of the most powerful aspects of this setup is how it integrates into my broader automation system using n8n, a powerful, open-source workflow automation platform.
I use n8n to trigger image generation jobs in ComfyUI based on external events, such as:
- New RSS feed items from blogs or news
- Webhook calls triggered by my apps or devices
- Timed schedules to generate daily AI art themes
- AI agents creating prompts automatically and passing them to ComfyUI
Workflow Example
A typical workflow looks like this:
- Trigger: n8n receives a webhook, RSS post, or GPT-generated idea.
- Prompt Creation: A GPT agent (via OpenAI or Ollama) constructs an image prompt.
- ComfyUI Call: n8n sends an HTTP POST to a custom ComfyUI API node I created (see the sketch after this list).
- Image Generation: ComfyUI renders the output and saves it to a known location.
- Webhook Return or Upload: The result is sent to Discord, Telegram, or email.
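For reference, the ComfyUI call in step 3 boils down to a single HTTP request against ComfyUI's /prompt endpoint. The sketch below is a minimal example, assuming you have exported a workflow from the ComfyUI interface in API format and wrapped it under a "prompt" key; the file name workflow_api.json is just an illustrative placeholder.
# POST a workflow (API-format JSON wrapped as {"prompt": {...}}) to ComfyUI's queue
curl -X POST http://<your-server-ip>:8188/prompt \
  -H "Content-Type: application/json" \
  -d @workflow_api.json
ComfyUI queues the job and, by default, writes the rendered images under the output folder in its working directory, which is where the rest of the workflow picks them up.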
Why Use ComfyUI + n8n?
This hybrid setup gives me complete flexibility:
- I keep my models local, fast, and private.
- I chain AI tools without coding everything manually.
- I can automate hundreds of images from data streams.
If you're an automation geek or artist, this combo is a creative superpower.