AI image cleanup: ChatGPT, Leonardo, Stable Diffusion, and friends

End-to-end recipe for cleaning up AI-generated designs from ChatGPT, Leonardo, Stable Diffusion, and similar. Per-source tuning notes.

Different AI tools generate at different sizes, use different background styles, and produce different edge artifacts. The cleanup pipeline is mostly the same; what changes is the per-source tuning.

This article covers ChatGPT, Leonardo, Stable Diffusion, and other AI generators. For Midjourney specifically, see Midjourney cleanup: same pipeline, slightly different defaults.

The shared pipeline

Same shape across AI sources:

| Step | Tool | What it does |
| --- | --- | --- |
| 1 | Color Removal | Removes the AI-generated background |
| 2 | Trim | Cuts off the empty edges |
| 3 | Speckle Remover | Cleans up tiny stray AI pixels |
| 4 | Transparency Cleaner | Removes faint halos |
| 5 | Reposition | Centers on the target canvas |

What changes per AI source: the Color Removal settings and the source pixel sizes.
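Step 2 (Trim) is conceptually just a crop to the bounding box of non-transparent pixels. A minimal pure-Python sketch of that idea (not ReadyPixl's actual implementation), representing an image as rows of RGBA tuples:

```python
def trim(pixels):
    """Crop `pixels` (a list of rows of (r, g, b, a) tuples) to the
    bounding box of pixels whose alpha is non-zero."""
    coords = [(x, y) for y, row in enumerate(pixels)
              for x, (_, _, _, a) in enumerate(row) if a > 0]
    if not coords:
        return []  # fully transparent image: nothing to keep
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return [row[min(xs):max(xs) + 1] for row in pixels[min(ys):max(ys) + 1]]
```

This is why Trim runs after Color Removal: it needs the background already knocked out to transparency before the bounding box means anything.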

Per-source defaults

ChatGPT (DALL-E successor / GPT-Image)

  • Output size: 1024 × 1024 (default), 1024 × 1792 (portrait), 1792 × 1024 (landscape)
  • Backgrounds: typically clean and well-composed, usually easier for Color Removal than Midjourney
  • Color Removal settings:
      • Tolerance: 30 (standard)
      • Edge Feather: 0-1 (ChatGPT subjects tend to have crisper edges than Midjourney)
  • Watch out: text inside ChatGPT-generated images usually doesn't survive cleanup. The Color Removal step can eat into faint text strokes. If your design has text, generate the text separately (or add it via Watermark Text after the pipeline).
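The Tolerance setting, conceptually: a pixel goes transparent when every channel sits within Tolerance of the background color. A hedged pure-Python sketch (not ReadyPixl's actual algorithm; sampling the background from the top-left corner is an assumption for illustration):

```python
def remove_background(pixels, tolerance=30):
    """Make pixels transparent when all three channels are within
    `tolerance` of the top-left pixel's color (assumed background).
    `pixels` is a list of rows of (r, g, b, a) tuples."""
    br, bg_, bb, _ = pixels[0][0]
    out = []
    for row in pixels:
        new_row = []
        for r, g, b, a in row:
            if max(abs(r - br), abs(g - bg_), abs(b - bb)) <= tolerance:
                new_row.append((r, g, b, 0))  # within tolerance: knock out
            else:
                new_row.append((r, g, b, a))  # outside tolerance: keep
        out.append(new_row)
    return out
```

A higher Tolerance eats further into near-background colors, which is exactly how faint text strokes get consumed.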

Leonardo

  • Output sizes: widely variable (512 × 512 to 1536 × 1536 typically, depending on model)
  • Backgrounds: vary by model. Some Leonardo models produce clean backgrounds, others produce painterly textures
  • Color Removal settings:
      • Tolerance: 35-40 (Leonardo backgrounds are often gradients)
      • Edge Feather: 2-3 (Leonardo subjects often have softer edges than ChatGPT)
  • Watch out: Leonardo's built-in Alchemy upscaler produces much cleaner output than the base generator. If your Leonardo plan supports it, run an Alchemy upscale before downloading; it gives you a much better starting point for the ReadyPixl pipeline.

Stable Diffusion (local installs, hosted services like Replicate, ComfyUI)

  • Output sizes: wildly variable depending on your model and settings (768 × 768 typical for SD 1.5, 1024 × 1024 typical for SDXL)
  • Backgrounds: vary entirely by your prompt and model
  • Color Removal settings: depend heavily on what you generated. Use the View mode in Color Removal to test settings on one image before running the batch.
  • Watch out: Stable Diffusion output generated at low step counts or low CFG carries more artifacts. Source quality matters most for SD batches: good SDXL output cleans up beautifully; rushed SD output amplifies its own artifacts.
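The "test settings on one image first" advice can be mechanized: sweep a few candidate Tolerance values on a single image and compare how many pixels each would remove. A sketch under the same per-channel-distance assumption as above (hypothetical helpers, not ReadyPixl's View mode):

```python
def pixels_removed(pixels, tolerance):
    """Count pixels within `tolerance` (per channel) of the top-left
    background color, i.e. the pixels Color Removal would knock out."""
    br, bg_, bb, _ = pixels[0][0]
    return sum(1 for row in pixels for (r, g, b, _a) in row
               if max(abs(r - br), abs(g - bg_), abs(b - bb)) <= tolerance)

def sweep(pixels, tolerances=(20, 30, 40)):
    """Report removal counts per candidate Tolerance. A plateau means
    the background is fully gone; a late jump means the subject is
    starting to get eaten."""
    return {t: pixels_removed(pixels, t) for t in tolerances}
```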

Other AI generators (Adobe Firefly, Recraft, Pika, etc.)

The pipeline shape is the same. Test Color Removal settings on one output before running on a batch; every generator has slightly different background and edge characteristics.

Step-by-step (any AI source)

  1. Generate in your AI tool. Download the outputs to a folder.
  2. Open ReadyPixl. Drop the folder in.
  3. Add Color Removal with the per-source settings above.
  4. Add Trim (defaults).
  5. Add Speckle Remover (defaults: Max Cluster Size 50).
  6. Add Transparency Cleaner. Set its slider to 30-40 to catch faint halos around the subject.
  7. Add Reposition for your target:
      • POD shirts: 4500 × 5400 @ 300 DPI
      • Etsy listing: 2000 × 2000 @ 72 DPI
      • Phone wallpaper: device-specific (1170 × 2532 for iPhone 13/14)
      • Social post: 2048 × 2048 @ 72 DPI
  8. Click Download All. Output ready.
  9. Save the pipeline as a preset per AI source: "ChatGPT → Merch," "Leonardo → Etsy," etc. Switching sources is one click.
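The Transparency Cleaner step above maps conceptually to an alpha cutoff: pixels more transparent than the cutoff are cleared entirely, which is what removes faint halos. A sketch in the same rows-of-RGBA-tuples style (the linear slider-to-alpha mapping here is an assumption, not ReadyPixl's documented behavior):

```python
def clean_transparency(pixels, slider=35):
    """Zero out any pixel whose alpha falls below the cutoff.
    Assumes the 0-100 slider maps linearly onto the 0-255 alpha range."""
    cutoff = round(slider / 100 * 255)
    return [[(r, g, b, 0) if a < cutoff else (r, g, b, a)
             for (r, g, b, a) in row] for row in pixels]
```

With the slider at 30-40, mostly-transparent halo pixels left behind by Color Removal's feathering drop to fully transparent, while solid subject pixels pass through untouched.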

Tips

  • Test on 1-2 images first. AI outputs vary a lot; what works for one batch might not work for the next.
  • Generate extras. AI quality is uneven. Generate 3-5 variants per concept, pipeline-run them all, pick the best at the end.
  • Save originals. Your ReadyPixl outputs are deliverables. Your raw AI generations are seeds.
  • Match cleanup pipeline to AI source. Don't run a Midjourney-tuned pipeline on ChatGPT output. The settings will be off.
  • Combine sources thoughtfully. A mixed batch of ChatGPT + Leonardo + Stable Diffusion outputs in one pipeline run will use the same Color Removal settings for all of them; usually one source comes out worse than the others. Either separate by source or pick a middle-ground Color Removal Tolerance.

⚠️ Same upscale gap as Midjourney

AI sources tend to output at small sizes (512-1024 px). Print-on-demand sites want 4500 × 5400+. Until ReadyPixl's AI Upscale premium tool ships, the workaround is:

  • Use the AI tool's built-in upscaler before downloading (ChatGPT has variations, Leonardo has Alchemy, Stable Diffusion has many upscaler nodes).
  • Or accept that the design will be smaller relative to the target canvas. Reposition centers your subject; most POD sites accept this and print at the actual subject size, not the canvas size.
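Centering a smaller design on a larger canvas is simple arithmetic; Reposition's paste position reduces to this (a sketch, assuming top-left-origin pixel coordinates):

```python
def center_offsets(design_size, canvas_size):
    """Top-left paste position that centers a design of `design_size`
    (width, height) on a canvas of `canvas_size` (width, height)."""
    dw, dh = design_size
    cw, ch = canvas_size
    return ((cw - dw) // 2, (ch - dh) // 2)

# A 1024 x 1024 generation centered on a 4500 x 5400 POD canvas:
# center_offsets((1024, 1024), (4500, 5400)) -> (1738, 2188)
```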

When AI Upscale ships, add it as a step between Color Removal and Reposition in the pipeline.

What AI cleanup can't fix

  • Bad anatomy / warped subjects: generate again in your AI tool with a better prompt
  • Misspelled / garbled text: generate without text, add text via Watermark Text after
  • Highly transparent subjects (glass, smoke, liquid): these confuse Color Removal regardless of source. Use AI Background Removal (15 credits per use)
  • Very low-quality source generations: the pipeline magnifies the source. Bad in = bad out.

What to read next