Why Try AI

7 Text-To-Image AI Models: Tested

I look at the seven main players in AI image generation.

Daniel Nest
Dec 14, 2023 ∙ Paid

Hey, remember when I demoed six text-to-video sites?

Cat genitals, psychedelics, creepy chimeras, and other shenanigans? Ring a bell?

Today, I want to do the same for AI images. (Hopefully with 100% fewer cat penises.)

By my latest count, we now have seven primary public text-to-image models:

  • DALL-E 3 (OpenAI)

  • Emu (Meta)

  • Firefly Image 2 (Adobe)

  • Ideogram (Ideogram)

  • Imagen (Google)

  • Midjourney 5.2 (Midjourney)

  • SDXL (Stability AI)

Let’s check out the images they generate and learn more about the models.


The process

This won’t be a deep-dive showdown like my SDXL 1.0 vs. Midjourney 5.2 post.

Instead, I’ll briefly introduce each model and showcase the visuals it generates. To keep things consistent and comparable, I’ll be using the same six prompts for each model:

  1. Tulips in a meadow, golden hour, watercolor painting

  2. Parrot on a branch, wildlife photography, National Geographic

  3. P…
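
Most of these models live behind point-and-click web interfaces, but if you'd rather script a comparison like this for one of them, here's a minimal sketch using OpenAI's Python SDK to run a list of prompts through DALL-E 3. It assumes you have an `OPENAI_API_KEY` set in your environment; the other six models each have their own tools and APIs, so this is just one illustrative path, not how I ran the tests.

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# The first two prompts from the list above; add the rest as needed.
prompts = [
    "Tulips in a meadow, golden hour, watercolor painting",
    "Parrot on a branch, wildlife photography, National Geographic",
]

for i, prompt in enumerate(prompts, start=1):
    # Generate one 1024x1024 image per prompt with DALL-E 3
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    print(f"Prompt {i}: {result.data[0].url}")
```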

This post is for paid subscribers
