I Checked Out All 35 Google Labs Experiments: Here's The Scoop
What do they do and are they any good?
Google is killing it in GenAI lately.
From state-of-the-art image and video models to mainstream research tools like NotebookLM, Google is constantly pumping out impressive stuff.
To wit, I covered 12 Starter Apps in Google AI Studio just a few months ago:
Well…there are 32 of them now.
As if that wasn’t enough, Google Labs now boasts 35 different experiments[1] in various stages of development. Some are already popular, while others are niche and obscure.
This week, I took them for a quick spin to let you know what I think and whether they’re worth your time.
Don’t worry, I’ll keep each section short, because getting through 35 of them is no small feat.
Note: Some Google experiments are only available in the US. If you’re outside the US and want to try them, you might need a VPN.
🎨Creativity
These Google experiments are all about generating new things or reimagining existing ones.
1. Doppl (mobile app)
Doppl is a smartphone app that lets you virtually “try on” different outfits. You upload a full-body shot of yourself and a picture of the clothes you’d like to try. Then you get an AI-generated image of you wearing the outfit, which you can also turn into a short animated clip.
Daniel’s take: I can’t personally test Doppl in Denmark,[2] but I imagine it’s pretty handy to quickly generate dozens of rough sketches for fashion inspiration.
2. Flow (formerly “VideoFX”)
Flow is a tool for AI filmmakers and one of the best places to experiment with Google’s frontier video models like Veo 3. It also comes with “Flow TV,” where you can view clips generated by others and see the prompts they used.
Daniel’s take: Flow is perfect if you’re working on a longer movie, thanks to a scene builder that lets you string multiple clips together and develop your story. But it might feel like overkill if you’re just out to make a few silly standalone clips.
3. GenType
GenType creates an entire alphabet based on your requested style. If you don’t like how a specific letter turned out, you can regenerate it individually. You can then type short texts directly on the canvas using your alphabet or export every letter as a standalone .png file.
Daniel’s take: GenType is good for a bit of fun, but you can’t directly save your alphabet in a usable font format, so it has limited practical applications.
4. Google Vids
Google Vids is a hybrid of a video editor and Google Slides that lets you create animated presentations with visuals, transitions, etc. Just last week, Google added the ability to insert AI avatars and create image-to-video clips.
Daniel’s take: While it has potential as a more engaging alternative to Google Slides, the free version of Google Vids is quite limited. For most AI-powered features like animated avatars and Veo-powered video clips, you’ll need to upgrade.
5. ImageFX
ImageFX is a simple text-to-image playground where you generate images from a prompt. It gives you several minor controls, like setting the aspect ratio or keeping the seed fixed for repeatable generations.
Daniel’s take: While ImageFX is okay for quick image generation, it’s very basic and powered by the outdated Imagen 3 model. For advanced use cases and access to newer models like Imagen 4 and Veo 3, Whisk is a strictly better option. (See below.)
6. MusicFX DJ
MusicFX DJ lets you create individual instrumentals from text prompts, then mix them in different proportions to create non-stop music tracks that morph and evolve as you add new sounds and adjust the mixer.
Daniel’s take: It’s certainly a cool parlor trick but doesn’t give you enough control over output to be usable for any serious music creation. Also, unlike Suno, Udio, and Riffusion, MusicFX DJ doesn’t produce precise vocals, so you can’t make songs with it.
7. National Gallery Mixtape
National Gallery Mixtape is, surprisingly, quite similar to MusicFX DJ. But instead of mixing instruments, you drag paintings from the National Gallery onto a canvas to hear what they might “sound” like. In Mixer mode, you can even tweak how strongly each painting should influence the overall composition.
Daniel’s take: This is a pure novelty toy that gives you even less influence over output than MusicFX DJ. Worth a quick “huh, that’s curious” look, but that’s about it.
8. TextFX
TextFX is a collection of mini tools related to language and words. For instance, you can find analogies for concepts, turn acronyms into related phrases, create semantic word chains, and more. (I actually covered TextFX all the way back in August 2023. Ancient times.)
Daniel’s take: This is one of the more immediately useful tools on the entire list. Great for creative inspiration and idea generation, and very relevant for authors, songwriters, or anyone else working with words.
9. Whisk
Whisk is one of the more well-known Google experiments. I dedicated an entire article to it:
The basic idea is that you generate images by blending subjects, scenes, and styles without having to write elaborate text prompts. As of last week, Whisk also gives you 5 free monthly credits to turn images into video clips using the Veo 3 model.
Daniel’s take: If it wasn’t already clear, I’m a big fan of Whisk, especially for newcomers to AI image generation. Whisk makes it easy and fun to experiment with styles and scenes while giving you free access to Google’s most advanced Imagen 4 model.
💻Developers
These experiments are relevant for coders and developers building digital products. Since I have limited coding experience, my takes here will mostly be based on external consensus.
10. AI‑first Colab
Colab is an online notebook that lets you write and run Python code directly in your browser without having to install anything. The AI-first addition brings Gemini in to help explain the code, fix errors, etc.
Daniel’s take: It’s hard to find definitive consensus, but I imagine having built-in AI assistance is more convenient than juggling a separate chatbot alongside your notebook.
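To make that concrete, here’s a minimal, hypothetical example of the kind of cell you might run in Colab and then hand over to the built-in Gemini assistant with a prompt like “explain this code” or “add error handling.” The sample data is made up, and the prompt wording is just an illustration:

```python
# Hypothetical Colab cell: the kind of snippet you might ask Gemini
# to explain, extend, or debug directly inside the notebook.
import pandas as pd
import matplotlib.pyplot as plt

# Made-up sample data; in a real notebook this might come from an uploaded CSV.
sales = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"],
    "revenue": [1200, 1500, 1100, 1800],
})

# Plot monthly revenue as a simple bar chart.
sales.plot(x="month", y="revenue", kind="bar", legend=False)
plt.ylabel("Revenue (USD)")
plt.title("Monthly revenue")
plt.show()
```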
11. Firebase Studio
Firebase Studio is a browser-based, end-to-end app builder that can help you design, code, and deploy your app in the same place.
Daniel’s take: Online reviews are currently mixed, with a negative skew. The consensus seems to be that it has potential but currently feels like a beta product.
12. Jules
Jules is a hands-on AI coding assistant that connects to your GitHub projects and works on them alongside you. It analyzes your code, suggests fixes, creates pull requests, etc.
Daniel’s take: Once again, reviews seem to be a mixed bag but with a more positive lean than those for Firebase Studio.
13. Stax
Stax launched just last week and helps you test and compare different LLMs. You can set up projects and run bulk model evaluations to measure their performance on specific tasks relevant to your needs.
Daniel’s take: This is actually also helpful for anyone wanting to test LLMs side-by-side on specific tasks. But unlike free sites for comparing LLMs, Stax requires you to have an API key for the models you’re testing.
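To show why those API keys come into play, here’s a rough sketch of what a manual, do-it-yourself comparison of two models looks like in Python. To be clear, this isn’t how Stax works under the hood; the model names and prompt are placeholders, and it assumes you’ve set your own OPENAI_API_KEY and GOOGLE_API_KEY:

```python
# Rough DIY sketch of running the same prompt through two hosted LLMs.
# Not Stax's internals; it only illustrates why your own API keys are needed.
import os

from openai import OpenAI
import google.generativeai as genai

prompt = "Summarize the plot of Hamlet in two sentences."  # placeholder task

# OpenAI model (expects OPENAI_API_KEY in the environment).
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
gpt_response = openai_client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print("OpenAI:", gpt_response.choices[0].message.content)

# Google model (expects GOOGLE_API_KEY in the environment).
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_response = genai.GenerativeModel("gemini-1.5-flash").generate_content(prompt)
print("Gemini:", gemini_response.text)
```

Tools like Stax essentially wrap this kind of comparison in a project-based UI and add bulk evaluation on top, which is exactly the tedious part you don’t want to hand-roll.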
14. Stitch
Stitch turns uploaded sketches or text prompts into fleshed-out UI mockups for websites or apps. It can independently propose relevant screens and dashboards, generate front-end code that you can refine further, or export into a tool like Figma.
Daniel’s take: It works really well for brainstorming and quickly going from concept to mockup. I asked for a “Bird-themed app to read AI newsletters” and got three separate screens mocked up in under a minute. I imagine Stitch being very useful for digital product creators to prototype with.
🎲Fun
These are mostly throwaway just-for-fun apps.
15. GenChess
GenChess lets you create a playable chess set from a simple prompt. Just describe the style, material, etc. to create your pieces. You can then play games against computer opponents with their own sets.
Daniel’s take: I admit, it’s kind of cool to watch AI try to turn silly materials and ideas into chess pieces. If you’re a fan of chess and want something different, this is worth a try.
16. Moving Archives
Moving Archives uses Google’s Veo models to animate photos from historic archives. Right now, only the Harley-Davidson Museum collection is available, but I assume more might come later.
Daniel’s take: This one’s a cool concept piece and a clever way to bring old archives to life. But it’s limited to just one collection so far and doesn’t let you upload images or otherwise steer the generation, so it gets repetitive pretty quickly.
🧩Integrated
These are Google Experiments that pop up inside other Google products. Many of these are already live.
17. AI Mode in Search
AI Mode is an opt-in feature that shows up as an extra tab whenever you Google stuff. You can open it up as an extended chat experience that uses reasoning models to help you answer more complex questions.
Daniel’s take: This is definitely a useful way to branch off from your initial search and explore the topic in more detail through back-and-forth questions.
18. AI Overviews in Search
AI Overviews should be familiar to most of you. They show up automatically above the search results for certain queries and use AI to synthesize information about a given topic into a handy summary.
Daniel’s take: AI Overviews are a good way to pull answers when they’re most relevant, as long as you’re aware of the inherent issues with AI search. I don’t recommend blindly relying on an AI Overview without verifying sources on your own.
19. Ask Photos
Ask Photos lets you search your entire collection and surface relevant photos based on natural-language queries instead of rigid filters. It can also do stuff like generate a caption for a given photo.
Daniel’s take: The new “Ask Photos” features aren’t showing up for me yet, so I assume they’re also limited to a US rollout for now. But being able to find things in your collection using conversational queries definitely sounds like a win to me.
20. Conversational AI on YouTube
Conversational AI on YouTube is an optional AI chat that you can trigger for select videos by clicking the “Ask” button under them. This lets you ask questions about the video to help you better parse and understand the information.
Daniel’s take: Really useful, especially when watching longer, information-dense presentations, podcasts, etc. Normally, I’d pull YouTube links into NotebookLM to enable this kind of AI chat experience, so having it embedded into YouTube removes a bunch of extra steps.
21. Gen AI in Chrome
Gen AI in Chrome is a suite of mini-experiments that use AI for stuff like creating custom themes, organizing your tabs, or drafting texts. The mix is constantly evolving, with some features (like the tab organizer) disappearing without warning.
Daniel’s take: If I’m honest, I haven’t found much use for any current GenAI features in Chrome. I can imagine the “Help me write” feature being quite handy in certain situations though.
22. Help Me Script
Help Me Script turns natural language requests into automations for your Google Home devices. Describe a routine, and Help Me Script will create a YAML script that does what you request (provided your devices support those actions).
Daniel’s take: I don’t use any Google Home device automations, but turning a hypothetical chat request into a YAML script was very intuitive and quick. If you’re a frequent Google Home user, this should certainly speed things up.
23. Project Astra
Project Astra is about integrating AI into live interactions, so you can engage with it via voice and vision. Some of the capabilities have already been integrated into Gemini Live, while others are only available to trusted testers (you can sign up for the waitlist).
Daniel’s take: I don’t know much about the current WIP features, but Gemini Live’s ability to share your screen and camera during a realtime voice conversation is incredibly useful. If you haven’t already tried having video-enabled voice conversations with Gemini, I recommend taking them for a spin.
24. Project Mariner
Project Mariner is Google’s answer to OpenAI’s Operator. It’s an autonomous agent that can take browser actions and conduct web research on your behalf. You can also teach it repeatable tasks to perform.
Daniel’s take: Project Mariner is only open to US-based Google AI Ultra accounts ($250 a month). If we ever do get a reliable autonomous browser agent, this would be a big deal, since so far they’re all rough around the edges.
📚Research & learning
These are Google Experiments that help you learn or conduct research. Some of them are already polished, popular products.
25. Career Dreamer
Career Dreamer helps you explore job opportunities and career paths based on your skills and experience. You start by sharing a current or previous role, and Gemini helps you extract insights about related skills, tasks, etc. Career Dreamer then maps everything it learned onto potential job roles, suggests helpful resources, and directs you to Gemini to help draft a resume or explore job opportunities.
Daniel’s take: This one’s a no-brainer for anyone looking to find a new job or considering a career pivot. Career Dreamer guides you step-by-step through the process and incorporates many concepts I covered in my “job hunting with AI” article.
26. Food Mood
Food Mood generates cooking recipes by fusing two different cuisines of your choice. You can select the type of meal, the number of people, and even specific ingredients to include. Food Mood even creates an image of what your resulting meal might look like (with occasionally funny results).
Daniel’s take: While it’s presented in a sort of tongue-in-cheek way, I think Food Mood is actually very cool for unexpected inspiration. It takes mere seconds to put together the ingredient list and the recipe, giving you something to riff on.
27. Illuminate
Illuminate turns any URL into a two-host podcast about the topic, essentially identical to NotebookLM’s “Audio Overview” feature but with a few more options for voices and direction. There’s also a library of ready-made podcasts to pick from. Illuminate used to be limited to research papers, but can now handle any public URL.
Daniel’s take: Illuminate doesn’t do much more than what you can already achieve with NotebookLM (which also lets you work with multiple sources at once). But if you are only interested in a single source, want to skip a few steps, and need a bit more control over the end result, Illuminate is a great alternative.
28. Learn About
Learn About lets you explore any topic or question by generating an on-the-fly curriculum and learning path with vetted sources, complete with chapters and quizzes. You can dive deeper into specifics, request simpler explanations, or ask open-ended follow-up questions.
Daniel’s take: Learn About is fantastic! I was already a fan when it first launched (I chatted about it back in November 2024). Since then, Learn About’s only gotten better and now also lets you save past sessions so you can get back to learning later.
29. Little Language Lessons
Little Language Lessons generates different bite-sized lessons focused on a niche topic in over a dozen languages. You can describe a specific situation, learn local slang, or even snap a picture to generate a related mini lesson, complete with vocabulary and helpful phrases.
Daniel’s take: Super useful. Little Language Lessons doesn’t teach you the language from the ground up but instead equips you with key phrases for a given scenario, so it’s directly relevant to your immediate goals. Makes for a great companion to more traditional language learning apps like Duolingo.
30. NotebookLM
NotebookLM needs no introduction. It’s perhaps the most polished and well-known of all Google’s experiments. It can turn any source in almost any format into any type of output: free-form AI chat, audio/video overview, mind map, study guide, FAQ, and more. I wrote a guide all the way back in March 2024 (the layout and feature mix have evolved a lot since then, but the key concepts are still the same):
Daniel’s take: If you’re going to pick just a single Google Experiment on this list, make it NotebookLM. It’s powerful and versatile, and you can easily apply it to practically any research or learning task.
31. Portraits
Portraits gives you two premade AI chatbots trained on real-world experts: Kim Scott and Matt Dicks. You can ask them for advice or simply discuss different topics to hear their perspectives.
Daniel’s take: While it’s a fun idea, Portraits doesn’t really do anything that you can’t replicate with a knowledge base and a Gemini Gem or custom GPT. Also, the interface is clunky: You don’t get to have a true live chat with the experts. You can only submit one question at a time. For a voice-driven tool, this makes little sense.
32. Say What You See
Say What You See is a gamified way to learn how to prompt text-to-image models. You are shown an image and have to come up with a text prompt that would make AI generate a similar one. The strict 120-character limit teaches you to be economical with your words instead of “splatterprompting.” (Technically, you can also try to cheat by using my recent Image-To-Prompt-Converter.)
Daniel’s take: This is great if you’re just starting out with text-to-image generators. It teaches you the basic principles in a fun, hands-on way. The levels get progressively more difficult, so it’s easy to get caught up for a while.
33. Sparkify
Sparkify creates animated mini-lessons from a prompt using AI video generation. You describe what you want to learn, make some style selections, and get a short movie about your topic. There’s a waitlist, so most of us will probably only get to explore the existing library of clips.
Daniel’s take: I’m on the waitlist, so I haven’t been able to test drive the tool. But the idea sounds more gimmicky than useful, just like the Explain Things With Lots Of Tiny Cats starter app.
34. SynthID Detector
SynthID Detector lets you check whether a given image, video, audio clip, or text has been generated by one of Google’s AI models, thanks to the embedded SynthID watermark. Right now, you have to join the waitlist to try it out.
Daniel’s take: While I can’t test the product in action yet, there’s a clear benefit to being able to definitively spot AI-generated content. Great to see that Google is working seriously on these kinds of watermarking solutions.
35. Talking Tours
Talking Tours is “Google Street View meets Audio Overviews.” Pick one of the available map locations to teleport there. You will then enter Street View and “walk” through the nearby landmarks. When you take a snapshot, you hear an AI-generated audio narration of whatever it is you’re looking at. Simple and effective.
Daniel’s take: It’s a really fun way to travel the world without leaving your house. There are a few hundred locations to explore across several categories, and being able to point, click, and hear an audio explainer is an immersive experience.
🫵 Over to you…
Were you already familiar with many of these? Which of them do you use or see yourself actually using on a regular basis? I’d be curious to hear everyone’s take, especially for developer apps where I have limited experience.
Leave a comment or drop me a line at whytryai@substack.com.
Thanks for reading!
If you enjoy my writing, here’s how you can help:
❤️Like this post if it resonates with you.
🔄Share it to help others discover this newsletter.
🗣️Comment below—I love hearing your opinions.
Why Try AI is a passion project, and I’m grateful to those who help keep it going. If you’d like to support my work and unlock cool perks, consider a paid subscription:
[1] Technically, there are 38 listed experiments, but “VideoFX” now redirects to Flow, “/code” is an umbrella category for Google Colab and Jules, and “Daily Listen” redirects to AI Mode.
[2] The Android app has a Google Store region lock that can’t be bypassed with a VPN.