<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Why Try AI: Hot Takes]]></title><description><![CDATA[Ad hoc tests and commentary on emerging AI models, tools, and so on.]]></description><link>https://www.whytryai.com/s/hot-takes</link><image><url>https://substackcdn.com/image/fetch/$s_!raEn!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c4d0362-24d4-4046-9ccd-cb331c34edc4_1024x1024.png</url><title>Why Try AI: Hot Takes</title><link>https://www.whytryai.com/s/hot-takes</link></image><generator>Substack</generator><lastBuildDate>Wed, 06 May 2026 18:21:17 GMT</lastBuildDate><atom:link href="https://www.whytryai.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Daniel Nest]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[whytryai@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[whytryai@substack.com]]></itunes:email><itunes:name><![CDATA[Daniel Nest]]></itunes:name></itunes:owner><itunes:author><![CDATA[Daniel Nest]]></itunes:author><googleplay:owner><![CDATA[whytryai@substack.com]]></googleplay:owner><googleplay:email><![CDATA[whytryai@substack.com]]></googleplay:email><googleplay:author><![CDATA[Daniel Nest]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Gemini 3: Google’s Silent Knockout Punch]]></title><description><![CDATA[Google proves that a "quiet" launch is enough, as long as you bring receipts.]]></description><link>https://www.whytryai.com/p/gemini-3</link><guid isPermaLink="false">https://www.whytryai.com/p/gemini-3</guid><dc:creator><![CDATA[Daniel Nest]]></dc:creator><pubDate>Thu, 20 Nov 
2025 12:21:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e6112eb2-455d-4de1-812d-47b1aeec5e7a_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>TL;DR</h2><p>Google launched the world&#8217;s best model with all the fanfare of a firmware update for a toaster, but the consensus speaks for itself.</p><h2>What is it?</h2><p>Simply put, <a href="https://blog.google/products/gemini/gemini-3/">Gemini 3 Pro</a> is the best language model by virtually every measure. This isn&#8217;t a subjective value judgement. <a href="https://blog.google/products/gemini/gemini-3/">Here are the benchmarks</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!toXD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09e12839-61f2-409f-a7e1-b682f5bd9976_2420x2212.gif" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!toXD!,w_424,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09e12839-61f2-409f-a7e1-b682f5bd9976_2420x2212.gif 424w, https://substackcdn.com/image/fetch/$s_!toXD!,w_848,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09e12839-61f2-409f-a7e1-b682f5bd9976_2420x2212.gif 848w, https://substackcdn.com/image/fetch/$s_!toXD!,w_1272,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09e12839-61f2-409f-a7e1-b682f5bd9976_2420x2212.gif 1272w, https://substackcdn.com/image/fetch/$s_!toXD!,w_1456,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09e12839-61f2-409f-a7e1-b682f5bd9976_2420x2212.gif 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!toXD!,w_1456,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09e12839-61f2-409f-a7e1-b682f5bd9976_2420x2212.gif" width="1456" height="1331" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/09e12839-61f2-409f-a7e1-b682f5bd9976_2420x2212.gif&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1331,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Table of AI benchmark results comparing Gemini 3 Pro, Gemini 2.5 Pro, Claude Sonnet 4.5, and GPT-5.1 across academic reasoning, math, multimodal understanding, coding, OCR, long-horizon tasks, multilingual Q&amp;A, commonsense, and performance tests, with Gemini 3 Pro leading most categories.&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Table of AI benchmark results comparing Gemini 3 Pro, Gemini 2.5 Pro, Claude Sonnet 4.5, and GPT-5.1 across academic reasoning, math, multimodal understanding, coding, OCR, long-horizon tasks, multilingual Q&amp;A, commonsense, and performance tests, with Gemini 3 Pro leading most categories." title="Table of AI benchmark results comparing Gemini 3 Pro, Gemini 2.5 Pro, Claude Sonnet 4.5, and GPT-5.1 across academic reasoning, math, multimodal understanding, coding, OCR, long-horizon tasks, multilingual Q&amp;A, commonsense, and performance tests, with Gemini 3 Pro leading most categories." 
srcset="https://substackcdn.com/image/fetch/$s_!toXD!,w_424,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09e12839-61f2-409f-a7e1-b682f5bd9976_2420x2212.gif 424w, https://substackcdn.com/image/fetch/$s_!toXD!,w_848,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09e12839-61f2-409f-a7e1-b682f5bd9976_2420x2212.gif 848w, https://substackcdn.com/image/fetch/$s_!toXD!,w_1272,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09e12839-61f2-409f-a7e1-b682f5bd9976_2420x2212.gif 1272w, https://substackcdn.com/image/fetch/$s_!toXD!,w_1456,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09e12839-61f2-409f-a7e1-b682f5bd9976_2420x2212.gif 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Oof! Letting Claude Sonnet 4.5 get away with a 1% lead on SWE-Bench Verified? How embarrassing!</figcaption></figure></div><p>Artificial Analysis Intelligence Index tells the same story:</p>
      <p>
          <a href="https://www.whytryai.com/p/gemini-3">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Sora 2: Amazing Model. Dubious Rollout.]]></title><description><![CDATA[OpenAI chose to release its top-tier video model inside a gimmicky social app.]]></description><link>https://www.whytryai.com/p/sora-2</link><guid isPermaLink="false">https://www.whytryai.com/p/sora-2</guid><dc:creator><![CDATA[Daniel Nest]]></dc:creator><pubDate>Thu, 02 Oct 2025 11:50:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/927b53eb-25de-4503-9268-8a4b4b169715_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>TL;DR</h2><p>Sora 2 is a genuinely impressive video model, but it&#8217;s baked into a social media app built for ultra-short meme clips, which undersells its creative potential.</p><h2>What is it?</h2><p>When OpenAI first teased its original Sora model back in February 2024, <a href="https://www.whytryai.com/i/141610093/openais-sora-ushers-in-a-new-era-of-text-to-video">people went nuts</a>. 
But Sora wouldn&#8217;t launch to the public until December, and by that time, it was well behind many <a href="https://www.whytryai.com/p/free-ai-image-to-video-tools-tested">top-tier video models</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>With Sora 2, OpenAI is convincingly back in the game:</p><div id="youtube2-gzneGhpXwjU" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;gzneGhpXwjU&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/gzneGhpXwjU?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>There&#8217;s <a href="https://openai.com/index/sora-2/">a lot to like about Sora 2</a>.</p><p>It can handle complex physics, including intricate scenes like gymnastics and fight sequences. It can create videos in many different styles. And, like Veo 3, Sora 2 natively generates its own audio effects and dialogue.</p><p>While there are no third-party benchmarks or leaderboard rankings yet, Sora 2 feels roughly on par with Veo 3 (depending on the use case).</p><p>Here are three quick comparisons of my own:</p><blockquote><p><strong>Prompt #1</strong><em>: Over-excited blonde influencer is holding a smartphone with the Substack feed on it. She near-screams: &#8220;You guys have to check out Why Try AI. It&#8217;s so, so good. But what do I know? 
I don&#8217;t even exist!&#8221;</em></p></blockquote><p><strong>Veo 3:</strong></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;b57adf0a-2900-41f2-8169-0bc090958ce5&quot;,&quot;duration&quot;:null}"></div><p><strong>Sora 2:</strong></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;576c01ed-cdc9-42c8-87d0-efcc2daba459&quot;,&quot;duration&quot;:null}"></div><p>I don&#8217;t know why both models decided to throw in a semi-psychotic giggle at the end, entirely unprompted, but here we are.</p><blockquote><p><strong>Prompt #2</strong><em>: A drunken 1800s pirate tries to use a modern laptop but can&#8217;t figure it out. In frustration, he bangs the keys and says &#8220;Blast this shiny chest o&#8217; letters&#8212;won&#8217;t open no matter how I pound it!&#8221;</em></p></blockquote><p><strong>Veo 3:</strong></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;54697c45-a2f1-47f8-9cbb-279496f772e8&quot;,&quot;duration&quot;:null}"></div><p><strong>Sora 2:</strong></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;8efd4de4-adff-4b18-af0f-bfe52e908998&quot;,&quot;duration&quot;:null}"></div><p>I love the bonus details, from Veo 3&#8217;s exploding laptop to Sora 2&#8217;s off-script method acting improv.</p><blockquote><p><strong>Prompt #3</strong><em>: A horse wearing a top hat tap dances to music on its hind legs</em></p></blockquote><p><strong>Veo 3:</strong></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;e152e1ab-8538-4a42-854c-ce07a51a544e&quot;,&quot;duration&quot;:null}"></div><p><strong>Sora 2:</strong></p><div class="native-video-embed" data-component-name="VideoPlaceholder" 
data-attrs="{&quot;mediaUploadId&quot;:&quot;b5661328-7827-44ee-bd1c-84d752262cf4&quot;,&quot;duration&quot;:null}"></div><p>Veo 3 looks more realistic, but I like how Sora 2 came up with an entire jingle. No true dancing on hind legs from either model, however.</p><p>The bottom line is that both are impressive and not too far apart.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.whytryai.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.whytryai.com/subscribe?"><span>Subscribe now</span></a></p><h2>How do you use it?</h2>
      <p>
          <a href="https://www.whytryai.com/p/sora-2">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Seedream 4.0: Nano Banana Killer?]]></title><description><![CDATA[AI image wars continue, with bonus rumors of an upcoming OpenAI image update.]]></description><link>https://www.whytryai.com/p/seedream-4</link><guid isPermaLink="false">https://www.whytryai.com/p/seedream-4</guid><dc:creator><![CDATA[Daniel Nest]]></dc:creator><pubDate>Thu, 11 Sep 2025 08:43:31 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1dd9361b-aed6-4feb-8e2a-b3fbecd5b083_1248x832.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>TL;DR</h2><p>ByteDance&#8217;s Seedream 4.0 gives Nano Banana a run for its money with top-tier image generation and editing.</p><h2>What is it?</h2><p>Just when you thought we were done, folks!</p><p>It&#8217;s only been two weeks since Google&#8217;s <a href="https://www.whytryai.com/p/nano-banana">Nano Banana (Gemini 2.5 Flash Image)</a> &#8220;killed&#8221; Photoshop, but we already have a shiny new kid on the block: <a href="https://seed.bytedance.com/en/seedream4_0">Seedream 4.0</a>.</p><p>The latest model from ByteDance goes toe-to-toe with Nano Banana on image editing tasks, while also being on par with <a href="https://www.whytryai.com/p/openai-4o-native-image-generation">GPT image generation</a> at making <em>new</em> images.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_eEL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6483cad1-f4c3-4fd8-aece-a2ac1e84a2c8_1601x799.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!_eEL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6483cad1-f4c3-4fd8-aece-a2ac1e84a2c8_1601x799.png 424w, https://substackcdn.com/image/fetch/$s_!_eEL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6483cad1-f4c3-4fd8-aece-a2ac1e84a2c8_1601x799.png 848w, https://substackcdn.com/image/fetch/$s_!_eEL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6483cad1-f4c3-4fd8-aece-a2ac1e84a2c8_1601x799.png 1272w, https://substackcdn.com/image/fetch/$s_!_eEL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6483cad1-f4c3-4fd8-aece-a2ac1e84a2c8_1601x799.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_eEL!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6483cad1-f4c3-4fd8-aece-a2ac1e84a2c8_1601x799.png" width="1200" height="599.1758241758242" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6483cad1-f4c3-4fd8-aece-a2ac1e84a2c8_1601x799.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:727,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:386029,&quot;alt&quot;:&quot;MagicBench: Multi-Dimensional Evaluation In comparison with other models, Seedream 4.0 performed well across core dimensions including prompt adherence, alignment, and aesthetics. Text-to-Image Radar Chart  Achieved high scores in text-to-image tasks for prompt following, aesthetics, and text-rendering. 
Single-Image Editing Radar Chart  Achieved a good balance between prompt following and alignment with the source image in single-image editing tasks. Also reached the first place in the internal Elo evaluation.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.whytryai.com/i/173256017?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6483cad1-f4c3-4fd8-aece-a2ac1e84a2c8_1601x799.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="MagicBench: Multi-Dimensional Evaluation In comparison with other models, Seedream 4.0 performed well across core dimensions including prompt adherence, alignment, and aesthetics. Text-to-Image Radar Chart  Achieved high scores in text-to-image tasks for prompt following, aesthetics, and text-rendering. Single-Image Editing Radar Chart  Achieved a good balance between prompt following and alignment with the source image in single-image editing tasks. Also reached the first place in the internal Elo evaluation." title="MagicBench: Multi-Dimensional Evaluation In comparison with other models, Seedream 4.0 performed well across core dimensions including prompt adherence, alignment, and aesthetics. Text-to-Image Radar Chart  Achieved high scores in text-to-image tasks for prompt following, aesthetics, and text-rendering. Single-Image Editing Radar Chart  Achieved a good balance between prompt following and alignment with the source image in single-image editing tasks. Also reached the first place in the internal Elo evaluation." 
srcset="https://substackcdn.com/image/fetch/$s_!_eEL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6483cad1-f4c3-4fd8-aece-a2ac1e84a2c8_1601x799.png 424w, https://substackcdn.com/image/fetch/$s_!_eEL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6483cad1-f4c3-4fd8-aece-a2ac1e84a2c8_1601x799.png 848w, https://substackcdn.com/image/fetch/$s_!_eEL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6483cad1-f4c3-4fd8-aece-a2ac1e84a2c8_1601x799.png 1272w, https://substackcdn.com/image/fetch/$s_!_eEL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6483cad1-f4c3-4fd8-aece-a2ac1e84a2c8_1601x799.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Source: <strong><a href="https://seed.bytedance.com/en/seedream4_0#:~:text=MagicBench%3A%20Multi%2DDimensional%20Evaluation">ByteDance</a></strong></figcaption></figure></div><p>This video has plenty of helpful side-by-side image editing tests:</p>
      <p>
          <a href="https://www.whytryai.com/p/seedream-4">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Nano Banana Is Fantastic, But Don't Bury Photoshop Just Yet]]></title><description><![CDATA[As awesome as Nano Banana is at editing images, it won't replace professionals.]]></description><link>https://www.whytryai.com/p/nano-banana</link><guid isPermaLink="false">https://www.whytryai.com/p/nano-banana</guid><dc:creator><![CDATA[Daniel Nest]]></dc:creator><pubDate>Thu, 28 Aug 2025 09:26:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/240b32dd-45f4-4986-b859-92eb13331135_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p><em>It&#8217;s time for another Thursday &#8220;<a href="https://www.whytryai.com/s/hot-takes">Hot Take</a>.&#8221;</em></p></blockquote><h2>TL;DR</h2><p>Gemini&#8239;2.5 Flash Image (aka &#8220;Nano Banana&#8221;) is now the <a href="https://lmarena.ai/leaderboard/text-to-image">world&#8217;s best image model</a> that also <a href="https://lmarena.ai/leaderboard/image-edit">excels at detail-preserving edits</a>, but let&#8217;s not equate &#8220;good enough for the average Joe&#8221; with &#8220;Photoshop extinction event.&#8221;</p><h2>What is it?</h2><p>Gemini 2.5 Flash Image is Google's latest image model, which made <a href="https://www.whytryai.com/p/sunday-rundown-108-china-strikes-back#:~:text=the%20requested%20changes.-,%E2%80%9CNano%20Banana%E2%80%9D,-is%20a%20new">quite a splash last week</a> under its pre-release nickname, "Nano Banana."<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Two days ago, Google <a href="https://blog.google/intl/en-mena/product-updates/explore-get-answers/nano-banana-image-editing-in-gemini-just-got-a-major-upgrade/">officially claimed ownership of the model</a>.</p><p>While Nano Banana is topping leaderboards for text-to-image generation, what truly makes it special is its ability to make precise 
edits to existing images while keeping characters consistent and preserving details.</p><p>Here's a quick taste:</p>
      <p>
          <a href="https://www.whytryai.com/p/nano-banana">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[HeyGen Avatar IV: Deepfakes Are Point-and-Click Now.]]></title><description><![CDATA[All you need is just one image. Is this a problem?]]></description><link>https://www.whytryai.com/p/heygen-avatar-iv-deepfakes</link><guid isPermaLink="false">https://www.whytryai.com/p/heygen-avatar-iv-deepfakes</guid><dc:creator><![CDATA[Daniel Nest]]></dc:creator><pubDate>Thu, 08 May 2025 16:21:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/56ff9f43-d0c7-4b3e-9cde-37f90e7c92eb_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>TL;DR</h2><p>HeyGen&#8217;s Avatar IV creates lifelike, expressive talking avatars from a single image that can lip-sync to any audio or script&#8212;are deepfakes too easy now?</p><h2>What is it?</h2><p>Avatar IV is a new AI avatar model from HeyGen:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://x.com/joshua_xu_/status/1919765489775231401" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YGxy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63a374bd-c909-41c4-8f2f-f174f38ea78a_580x506.png 424w, https://substackcdn.com/image/fetch/$s_!YGxy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63a374bd-c909-41c4-8f2f-f174f38ea78a_580x506.png 848w, https://substackcdn.com/image/fetch/$s_!YGxy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63a374bd-c909-41c4-8f2f-f174f38ea78a_580x506.png 1272w, 
https://substackcdn.com/image/fetch/$s_!YGxy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63a374bd-c909-41c4-8f2f-f174f38ea78a_580x506.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YGxy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63a374bd-c909-41c4-8f2f-f174f38ea78a_580x506.png" width="520" height="453.6551724137931" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/63a374bd-c909-41c4-8f2f-f174f38ea78a_580x506.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:506,&quot;width&quot;:580,&quot;resizeWidth&quot;:520,&quot;bytes&quot;:45967,&quot;alt&quot;:&quot; NEW: HeyGen Avatar IV is here.  Our most advanced AI avatar model yet.  &#128248; One photo. &#128221; One script. &#127911; Just your voice.  Most avatars sync to your words. Avatar IV interprets them.  Built on a diffusion-inspired audio-to-expression engine, it analyzes your vocal tone, rhythm, and emotion &#8212; then synthesizes photoreal facial motion with temporal realism.  &#127917; Head tilts. Pauses. Cadences. Micro-expressions.  &#10145;&#65039; A single image &#8594; a video that feels real, not rendered.  Rolling out to all users now.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://x.com/joshua_xu_/status/1919765489775231401&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.whytryai.com/i/155321899?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63a374bd-c909-41c4-8f2f-f174f38ea78a_580x506.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt=" NEW: HeyGen Avatar IV is here.  
Our most advanced AI avatar model yet.  &#128248; One photo. &#128221; One script. &#127911; Just your voice.  Most avatars sync to your words. Avatar IV interprets them.  Built on a diffusion-inspired audio-to-expression engine, it analyzes your vocal tone, rhythm, and emotion &#8212; then synthesizes photoreal facial motion with temporal realism.  &#127917; Head tilts. Pauses. Cadences. Micro-expressions.  &#10145;&#65039; A single image &#8594; a video that feels real, not rendered.  Rolling out to all users now." title=" NEW: HeyGen Avatar IV is here.  Our most advanced AI avatar model yet.  &#128248; One photo. &#128221; One script. &#127911; Just your voice.  Most avatars sync to your words. Avatar IV interprets them.  Built on a diffusion-inspired audio-to-expression engine, it analyzes your vocal tone, rhythm, and emotion &#8212; then synthesizes photoreal facial motion with temporal realism.  &#127917; Head tilts. Pauses. Cadences. Micro-expressions.  &#10145;&#65039; A single image &#8594; a video that feels real, not rendered.  Rolling out to all users now." 
srcset="https://substackcdn.com/image/fetch/$s_!YGxy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63a374bd-c909-41c4-8f2f-f174f38ea78a_580x506.png 424w, https://substackcdn.com/image/fetch/$s_!YGxy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63a374bd-c909-41c4-8f2f-f174f38ea78a_580x506.png 848w, https://substackcdn.com/image/fetch/$s_!YGxy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63a374bd-c909-41c4-8f2f-f174f38ea78a_580x506.png 1272w, https://substackcdn.com/image/fetch/$s_!YGxy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63a374bd-c909-41c4-8f2f-f174f38ea78a_580x506.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Source: <strong><a href="https://x.com/joshua_xu_/status/1919765489775231401">X</a></strong></figcaption></figure></div><p>Not so long ago, creating a custom avatar with HeyGen or Synthesia required you to record a training video and submit it for professional processing.</p><p>Now, it takes one image, one script (written or recorded), and a few minutes. That&#8217;s it:</p>
      <p>
          <a href="https://www.whytryai.com/p/heygen-avatar-iv-deepfakes">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Don’t Sleep On Genspark’s Super Agent]]></title><description><![CDATA[Genspark is better than the much-hyped Manus, yet it has sailed under the radar.]]></description><link>https://www.whytryai.com/p/genspark-super-agent</link><guid isPermaLink="false">https://www.whytryai.com/p/genspark-super-agent</guid><dc:creator><![CDATA[Daniel Nest]]></dc:creator><pubDate>Thu, 10 Apr 2025 11:41:24 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/303b0c70-da98-470b-8936-a43034393faf_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p><em>Yet another Thursday post in the &#8220;<a href="https://www.whytryai.com/s/hot-takes">Hot Takes</a>&#8221; format.</em></p></blockquote><h2>TL;DR</h2><p>Genspark&#8217;s Super Agent is a genuinely competent general agent that reasons and performs complex tasks, but it&#8217;s been overlooked in the avalanche of agent hype.</p><h2>What is it?</h2><p>Last week, <a href="https://mainfunc.ai/blog/genspark_super_agent">Genspark announced</a> its &#8220;fast and reliable&#8221; Super Agent:</p>
      <p>
          <a href="https://www.whytryai.com/p/genspark-super-agent">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[OpenAI Launches 4o Image Generation, Ruins Reve’s Rad Reveal.]]></title><description><![CDATA[Reve AI picked the absolute worst day to announce its best-in-class image model.]]></description><link>https://www.whytryai.com/p/openai-4o-native-image-generation</link><guid isPermaLink="false">https://www.whytryai.com/p/openai-4o-native-image-generation</guid><dc:creator><![CDATA[Daniel Nest]]></dc:creator><pubDate>Wed, 26 Mar 2025 12:27:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d2c65244-9075-4887-b98d-3471a1f1654f_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>For inspiration:</strong> <em>Grab <a href="https://www.whytryai.com/i/160256477/sunday-bonus-use-cases-for-gpt-o-image-generation-swipe-file">my swipe file</a> with 90+ use cases for GPT-4o image generation.</em></p>
      <p>
          <a href="https://www.whytryai.com/p/openai-4o-native-image-generation">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Gemini 2.0 Flash Makes Mediocre Images...But That's Not The Point!]]></title><description><![CDATA[Image quality is a red herring. We're finally witnessing true multimodality.]]></description><link>https://www.whytryai.com/p/gemini-2-0-flash-native-image-generation</link><guid isPermaLink="false">https://www.whytryai.com/p/gemini-2-0-flash-native-image-generation</guid><dc:creator><![CDATA[Daniel Nest]]></dc:creator><pubDate>Thu, 13 Mar 2025 13:13:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d89c5141-8340-4377-acb6-77c31a6aec00_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Today&#8217;s post is also a developing story, so the &#8220;<strong><a href="https://www.whytryai.com/s/hot-takes">Hot Take</a></strong>&#8221; format fits nicely.</em></p><h2>TL;DR</h2><p>Gemini 2.0 Flash Experimental can create and edit images <em>natively</em>.</p><h2>What is it?</h2><p>Yesterday, Google&#8217;s Logan Kilpatrick announced the release of Gemini 2.0 Flash with native image generation:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://x.com/OfficialLoganK/status/1899853465922175427" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!HSl5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b781c5-f4ac-4cc7-8dc2-ac0b37adbe10_583x654.png" width="583" height="654" class="sizing-normal" alt="Logan Kilpatrick @OfficialLoganK: Native image generation with Gemini 2.0 Flash is now available to all developers via an experimental release in the Gemini API and Google AI Studio! The chat based image editing and creation is so much fun to play with"></div></a><figcaption class="image-caption">Source: <strong><a href="https://x.com/OfficialLoganK/status/1899853465922175427">X</a></strong></figcaption></figure></div><p>Gemini can now create multi-step illustrated stories from a single prompt, edit existing images directly, rework uploaded images, and more.</p><p>The best part?</p><p>It&#8217;s 100% free to try.</p><h2>How do you use it?</h2><p>The easiest way to try the new model is via <a href="https://aistudio.google.com/">Google AI Studio</a>.</p><p>Here&#8217;s the step-by-step process:</p><ol><li><p>Go to <a href="https://aistudio.google.com/">aistudio.google.com</a> and log in with your Google account.</p></li><li><p>Select &#8220;Gemini 2.0 Flash Experimental&#8221; from the model picker. Note: You want the <strong>gemini-2.0-flash-exp</strong> model, not the default Gemini 2.0 Flash. (I know, I know.)</p></li></ol>
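<p><em>If you&#8217;d rather hit the model programmatically than click through AI Studio, here&#8217;s a rough sketch of the equivalent API request. The endpoint shape and the <code>responseModalities</code> field follow the public Gemini API docs, but treat the details as assumptions and check the current reference before relying on them (the snippet only builds and prints the JSON body; actually sending it requires your own API key):</em></p>

```python
import json

# Experimental model ID from the steps above.
MODEL = "gemini-2.0-flash-exp"

# Assumed endpoint shape for the public Gemini API (v1beta REST surface).
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{MODEL}:generateContent"
)

def build_request(prompt: str) -> dict:
    """Build a generateContent body asking for text AND image parts."""
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        # Requesting both modalities is what makes the image output "native":
        # the same model interleaves generated images with its text reply.
        "generationConfig": {"responseModalities": ["TEXT", "IMAGE"]},
    }

body = build_request("Draw a three-panel illustrated story about a robot barista.")
print(json.dumps(body, indent=2))
```

<p><em>POST that body to the endpoint with your <code>x-goog-api-key</code> header, and image parts come back base64-encoded inside the response candidates.</em></p>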
      <p>
          <a href="https://www.whytryai.com/p/gemini-2-0-flash-native-image-generation">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude 3.7 Sonnet: Fantastic Model Held Back by Lack of Native Internet Access]]></title><description><![CDATA[The new hybrid model from Anthropic could really benefit from web browsing.]]></description><link>https://www.whytryai.com/p/claude-3-7-sonnet-internet-access</link><guid isPermaLink="false">https://www.whytryai.com/p/claude-3-7-sonnet-internet-access</guid><dc:creator><![CDATA[Daniel Nest]]></dc:creator><pubDate>Tue, 25 Feb 2025 09:48:36 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ce8d34cb-65c5-49cb-bd98-d98f90be25ba_1280x896.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>TL;DR</h2><p>Anthropic just launched the impressive Claude 3.7 Sonnet, but it&#8217;s still locked inside the old interface without built-in access to the web.</p><h2>What is it?</h2><p>Claude 3.7 Sonnet is a <a href="https://www.anthropic.com/news/claude-3-7-sonnet">new state-of-the-art model</a> that combines a traditional LLM and a reasoning model in one.</p><p>It goes toe-to-toe with or outperforms frontier models like Grok 3, DeepSeek-R1, and OpenAI&#8217;s o-family on most benchmarks:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!D2Cl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea5703a-01fc-4eeb-98f1-5d0dbe2fe937_2600x2360.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!D2Cl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea5703a-01fc-4eeb-98f1-5d0dbe2fe937_2600x2360.webp" width="1456" height="1322" class="sizing-normal" alt="Benchmark table comparing frontier reasoning models against Claude 3.7 Sonnet" title="Benchmark table comparing frontier reasoning models against Claude 3.7 Sonnet"></div></a><figcaption class="image-caption">Source: <strong><a href="https://www.anthropic.com/news/claude-3-7-sonnet">Anthropic</a></strong></figcaption></figure></div><h2>How do you use it?</h2><p>Simple: Just go to the usual <a href="https://claude.ai/">claude.ai</a> website.</p><p>If you&#8217;ve never used Claude before, you&#8217;ll need to create an account.</p><p>If you have, Claude 3.7 Sonnet will now be the default model:</p>
      <p>
          <a href="https://www.whytryai.com/p/claude-3-7-sonnet-internet-access">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[OpenAI Joins the “Deep Research” Trio]]></title><description><![CDATA[A quick look at three somewhat similar agents from Google, Genspark, and OpenAI.]]></description><link>https://www.whytryai.com/p/openai-deep-research</link><guid isPermaLink="false">https://www.whytryai.com/p/openai-deep-research</guid><dc:creator><![CDATA[Daniel Nest]]></dc:creator><pubDate>Mon, 03 Feb 2025 20:26:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c340f6d5-6ea5-48e0-86d5-a8e4530952a7_1280x896.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>TL;DR</h2><p>OpenAI just released a <a href="https://openai.com/index/introducing-deep-research/">&#8220;Deep Research&#8221; agent</a> that can autonomously research and reason about complex topics. It&#8217;s the third &#8220;Deep Research&#8221; tool in less than two months, but it&#8217;s arguably the most capable.</p><h2>What is it?</h2><p>OpenAI describes it as &#8220;an agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks.&#8221;</p><p>Basically, you ask Deep Research a question or give it a research task, and it sets off on its own to create a plan, read through relevant literature, explore different research avenues based on its findings, and seek additional information to produce the final report.</p><p>By early accounts it&#8217;s quite impressive, offering a &#8220;<a href="https://www.oneusefulthing.org/p/the-end-of-search-the-beginning-of">near PhD-level analysis</a>&#8221; according to Ethan Mollick.</p><p>It&#8217;s also, confusingly, the <em>third</em> such agent with <em>exactly</em> the same name. 
</p><p>Here&#8217;s the timeline:</p><ul><li><p>December 11, 2024: Google released a &#8220;<a href="https://blog.google/products/gemini/google-gemini-deep-research/">Deep Research</a>&#8221; assistant for paying Gemini Advanced users.</p></li><li><p>January 27, 2025: Genspark launched a &#8220;<a href="https://mainfunc.ai/blog/genspark_autopilot_agent_deep_research">Deep Research</a>&#8221; agent with similar&#8230;</p></li></ul>
      <p>
          <a href="https://www.whytryai.com/p/openai-deep-research">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[No, DeepSeek’s Janus-Pro-7B Doesn’t Make Better Images Than DALL-E 3.]]></title><description><![CDATA[DeepSeek is an incredible AI lab, but let's not elevate it to godlike status just yet.]]></description><link>https://www.whytryai.com/p/deepseek-janus-pro-7b-is-not-better-than-dalle-e3</link><guid isPermaLink="false">https://www.whytryai.com/p/deepseek-janus-pro-7b-is-not-better-than-dalle-e3</guid><dc:creator><![CDATA[Daniel Nest]]></dc:creator><pubDate>Tue, 28 Jan 2025 11:26:36 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fd5fdeba-491d-41ca-84a5-1b9db5ef52d3_1278x406.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p><em><strong>Hot Takes</strong> are occasional timely posts that focus on fast-moving news and releases, in addition to my regular Thursday and Sunday columns.</em></p><p><em>If <strong>Hot Takes</strong> aren&#8217;t your cup of tea, simply go to your account at <strong><a href="https://www.whytryai.com/account">www.whytryai.com/account</a></strong> and toggle the &#8220;Notifications&#8221; settings accordingly:</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rr-K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44f82283-80c1-4310-bb47-4792fa43f9d6_745x268.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!rr-K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44f82283-80c1-4310-bb47-4792fa43f9d6_745x268.png" width="745" height="268" class="sizing-normal" alt="Managing Notification settings in Substack - Why Try AI section toggles" title="Managing Notification settings in Substack - Why Try AI section toggles"></div></a></figure></div></blockquote><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.whytryai.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.whytryai.com/subscribe?"><span>Subscribe now</span></a></p><h2>TL;DR</h2><p>DeepSeek released an image model called Janus-Pro-7B, which some claim makes better images than DALL-E 3 and Stable Diffusion, but it comes nowhere close in my tests.</p><h2>What is it?</h2><p>DeepSeek <a href="https://huggingface.co/deepseek-ai/Janus-Pro-7B">describes Janus-Pro-7B</a> as &#8220;a novel autoregressive framework that unifies multimodal understanding and generation.&#8221;</p><p>In short, this means you can use the same model to process image inputs <em>and</em> generate new images. This makes Janus-Pro-7B quite flexible, combining the capabilities of task-specific models into a single one.</p><p>That&#8212;and the fact that it&#8217;s yet another open-source model&#8212;is worthy of praise.</p><p>But DeepSeek also shared a few benchmarks that show Janus-Pro-7B outperforming OpenAI&#8217;s DALL-E 3 and Stable Diffusion 3 Medium:</p>
      <p>
          <a href="https://www.whytryai.com/p/deepseek-janus-pro-7b-is-not-better-than-dalle-e3">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[DeepSeek-R1: The Free o1 Alternative]]></title><description><![CDATA[How to use the new DeepSeek-R1 model and how it compares to o1.]]></description><link>https://www.whytryai.com/p/deepseek-r1-free-openai-o1-alternative</link><guid isPermaLink="false">https://www.whytryai.com/p/deepseek-r1-free-openai-o1-alternative</guid><dc:creator><![CDATA[Daniel Nest]]></dc:creator><pubDate>Tue, 21 Jan 2025 12:03:50 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/75496041-a132-4ef2-af47-2f3de147e409_1344x896.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.whytryai.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.whytryai.com/subscribe?"><span>Subscribe now</span></a></p>
      <p>
          <a href="https://www.whytryai.com/p/deepseek-r1-free-openai-o1-alternative">
              Read more
          </a>
      </p>
   ]]></content:encoded></item></channel></rss>