27 Comments
Jan 12 · Liked by Daniel Nest

Great issue! The prompting guide is super helpful. I also like to try my prompts in different models; Bard is always creative.

author

Thanks Zeng!

Speaking of Bard, Microsoft Copilot (formerly Bing), etc. I actually want to do a post where I try to dive into the main options and figure out what each is (and isn't) useful for. I used Bing in early 2023 when it got GPT-4, but now I mostly stick to ChatGPT because I have the Plus account. So I'm developing a blind spot for alternatives that I'd like to fix.

Jan 12 · Liked by Daniel Nest

I am using Easy Peasy AI, where GPT-3.5, GPT-4, GPT-4 32K, GPT-4 Turbo, GPT-4 Turbo with Vision, Mixtral 8x7B, Claude Instant, Claude 2.1, and Meta Llama 2 70B are available in the same app. I can change my chatbot model without switching apps, which is very convenient. And I can't wait for your post about Bard and Microsoft Copilot!

author

Yup, Perplexity Labs have something similar with the ability to switch LLMs directly in the app. Definitely a time saver. I'll see when (if) I get around to testing the chatbots.

Jan 12 · Liked by Daniel Nest

Oh ya! I almost forgot about Perplexity. So many LLMs, so little time!

Jan 12 · Liked by Daniel Nest

Just echoing the others. Good stuff. I realized this is what I was doing just by trial and error, but you actually documented it! Lol

author

That's what I love to hear: People arriving at their own approaches by actually working with the tools. Way to go!

Jan 20 · Liked by Daniel Nest

I often follow https://www.promptingguide.ai/

author
Jan 20 · edited Jan 20

Yeah that's another great reference site!

I think there's great value in understanding things like "role," "context," etc., but I find that the more complex approaches turn beginners off, which is why I focus on helping someone new get results in a quick and easy way before diving into fine-tuning, etc.

Jan 12 · Liked by Daniel Nest

Should your opening line be cyber-sleuth and not cyber-sloth? Just curious. Thank you and have a great slothful day! 😀

author
Jan 12 · edited Jan 12

Ha, nice attempt at sleuthing, fellow sloth, but "cyber-sloths" was very intentional (as far as nonsensical goofy openers go). I entertained "cyber-koalas" because I liked the visual, but "cyber-sloths" just sounds better, doesn't it?

And now we're way deep down the rabbit hole. Hope you're happy!


Good work - useful technique. There's for sure an "Aha" moment when it all comes together towards the end.

My issue with the perception that the general public is interested in learning prompting is that I do not believe that will be the future. Applications will have this kind of framework built in, and will be updated on better and better techniques, so that we do not have to remember all this.

For example - you have an excellent set of Midjourney prompts listed - I will go to your list 100 times out of 100, as I have already decided it's high quality, and use it as a starting point instead of remembering the various prompt techniques by heart. Apps will have these techniques built in, so we do not have to figure out elaborate prompting approaches that actually do change with every major LLM update.

Please keep up the high quality and informative writing.

Subscribed!

author
Jan 12 · edited Jan 12

Interesting take, and I agree to some extent!

I don't see any reason why future AI tools won't simply be pre-prompted with the best known techniques by default. Also, as they get increasingly better at natural language understanding, we won't have to re-learn the ropes. We'll just need to talk like we do to any other human being.

As for my "Midjourney prompts" list - that's actually a good example of what I mean by not using "off-the-shelf" prompts. I've been very intentional in my "MJ prompts" list to mostly provide people with simple, one-word modifiers rather than 10-sentence-long copy-paste prompts full of adjectives and words they might not understand or need. My philosophy is that the bulk of the scene and subject should come from the user, with a few strong qualifiers added to make the image look the way they want.

As for techniques changing with LLM updates - that's also true. We even find out wacky stuff like ChatGPT being "lazier" in December. But my main argument is that, fundamentally, the way you work with these models and the underlying approach shouldn't change dramatically. If you take the "try and learn" approach, you won't have to sit and memorize any hacks or chase the latest technique.

I appreciate you subscribing and happy you found it helpful!


Hey, a real thoughtful response... Thank you, Daniel! The part about talking to LLMs like any other human being is an issue, though - the AI's understanding of our world is non-existent. It does not "comprehend" what we want to say, and there's no boundary (it falls past the boundary very fast).

For example, I was doing a demo for a client, and I asked the AI, "Show me the authorized BMW dealerships in the city XXX" - this was using Anthropic/Claude. The AI listed 5 dealerships, of which 4 ended up being completely made up - and the AI hallucinates with authority, too.

This is where an application can pre-set boundaries, so that I do not have to set these boundaries every single time I am asking anything.
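To make that concrete, here's a rough sketch of what I mean by baking the boundaries into the app (purely illustrative Python - the prompt text and function names are made up, not any vendor's actual API):

```python
# Hypothetical sketch: the app pre-sets the boundaries once,
# so the user never has to type them with every question.

GROUNDING_RULES = (
    "Only list businesses you can verify. If you are not sure a dealership "
    "exists, say you don't know instead of inventing one."
)

def build_bounded_prompt(user_question: str) -> str:
    """Prepend the app's fixed boundary-setting instructions to any user question."""
    return f"{GROUNDING_RULES}\n\nUser question: {user_question}"

if __name__ == "__main__":
    # The user only types their question; the boundaries come from the app.
    print(build_bounded_prompt("Show me the authorized BMW dealerships in the city XXX"))
```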

I see it in our work with clients: once they get past the initial "oh, we deployed a chatbot" excitement, the question becomes how you fine-tune the app. Well, that's real work, and we go back to a good old application with a lot of heavy lifting done by humans... (And this is why I do not subscribe to the "chatbot" term, but to "AI apps" - because in the backend, to make it useful, it's a real app with many things happening, where the UI search is just one of the many ways to access the app - and I am a big fan of actually eliminating the LLM from the user-app workflow altogether.)

So, to be clear, I am not disagreeing with your overall message - I really like your tone and how you eliminate the "magic" around AI and explain things in a clear, repeatable way. The AI has no comprehension of things, though, and it is dangerous to imply it does. I see in my work that the AI has a knife's-edge operating window, and it falls outside of that very quickly.

The techniques you explain are getting incorporated into applications, so humans don't have to worry about setting those boundaries and learning all these layered techniques.

Your Midjourney list is IMO the perfect example of how product and app developers can monetize AI right now - take the fundamental techniques you describe, turn them into an app that "guides" the human by asking 2-3 clarification questions and then spits out a great result that has some pre-set boundaries.
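Something along these lines, roughly (a toy sketch in Python - the questions, modifiers, and function names are placeholders, not a real product):

```python
# Toy sketch of a "guided" image-prompt app: ask a couple of clarifying
# questions, then assemble the final prompt from the user's answers plus
# a few pre-set modifiers. All names and values here are illustrative.

PRESET_MODIFIERS = ["cinematic", "soft lighting"]  # the app's built-in boundaries

def guided_prompt() -> str:
    subject = input("What is the image about? ")
    style = input("Any particular style (e.g. photo, watercolor)? ")
    # The human supplies the scene and subject; the app adds a few strong qualifiers.
    parts = [subject, style] + PRESET_MODIFIERS
    return ", ".join(p.strip() for p in parts if p.strip())

if __name__ == "__main__":
    print(guided_prompt())
```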

Yeah - your newsletter is on my "recommended" list. It's great.

Cheers

author
Jan 13 · edited Jan 13

I think we generally agree!

Though we probably have to differentiate between business use cases, where precision, security, etc. are paramount, and casual everyday use by the average Joe, where I think a degree of anthropomorphizing the chatbots can be helpful to ease a layman into the experience. But you're right: We must be careful not to anthropomorphize too much. Hallucinations are a thing. The whole "LLMs are a black box we can't fully comprehend even though we built them" issue is a thing, etc.

Still, it's a useful analogy.

Also, while it's hard to speculate about the exact direction the future takes, I don't believe we'll ever see the "simulated chat" experience disappear completely. More likely, we'll see a splintering of the world into at least two parallel tracks:

1) Professional apps/tools with fine-tuning, preset parameters, predictable outcomes, and wrappers that abstract away from the chatbot interface altogether (basically what you're describing).

2) Increasingly smart assistants that get better at mimicking human conversation, which people will "talk" to on a daily basis (à la "Her").

...and perhaps many other sub-categories with their own intended use cases.

I appreciate the recommendation!


Aha... I must watch the movie one of these days... From what I hear, "Her" is definitely going to happen :)


Nice stuff today.

I think part of the problem you point out - folks who use GPT-4 and are like "meh, it's just predictive text, nothing special" or "this is too complex for me to use" - is that they don't feel comfortable talking to a "machine" as though it's a person, so having a "conversation" to get something done seems anathema.

Folks expect either human-level responses with some kind of mind-reading abilities, or a programmable machine. You need to act as though an LLM is somewhere in between those spots, even if you don't really believe so.

author

I think you might be on to something. We're largely conditioned to expect 100% predictable execution based on our prior experience with software. So when you get something from a chatbot, you might unconsciously feel that it's the standard response you can expect from this "software" rather than a tool you can talk with to improve the result.

That's another reason I like these approaches (especially the first one). They immediately showcase the intended conversational, back-and-forth nature of the experience.


Yeah dude. Plus, if you treat it just like a human, you'll be disappointed there too. You have to understand that this is a different thing than either of the things you're used to, and that's very disconcerting and discouraging for many.

author

True! Although I think some degree of anthropomorphizing is helpful, as long as you're able to abstract from it.


Yeah. Act like you're talking to a cyborg already, folks!

author

Yes..."like"...*ominous crescendoing music*


I'm thinking more along the lines of Doom Patrol Cyborg.
