23 Comments

I love the Memento explanation, and I'm stealing it. I know the improved learning myth stuck with me for a while for some reason, until I learned more about how these suckas work.

I have a good working theory on why this is happening:

"To this day, the meme continues to gain traction in Facebook groups and Reddit threads. Try searching for “AI accepting the job” on Twitter / X and see what pops up."

I think most folks (let's be real - almost everyone in the world) are in the camp of never having tried any image generators, so they only learn about how messed up these things are through memes. Memes take a while to be created and circulate to groups that aren't already plugged in (and the groups that are plugged in know damn well that AGI can make hands today, thank the gods).

Do I get a Noble (sic) prize for discovering this reason or what?

author
Apr 18 · edited Apr 18

You're welcome to steal the Memento analogy as long as I get royalties from the inevitable avalanche of money this will undoubtedly bring you!

And yeah I'm sure that's a huge part of it. The hand-drawing issue especially, because it's so visual and lends itself well to memes.

As for the "learning automatically" misconception, I'm sure movies and pop culture in general are to blame. They always portray AI as self-sufficient, self-learning, so that's immediately where our mind goes when we encounter AI in the real world.

You and I can split the Ignoble Prize.

Jun 5 · Liked by Daniel Nest

The most pervasive AI mythology that I encounter is that AI will take all the jobs, or that AI won't take any of our jobs. I think it'll influence plenty of jobs, and will probably replace some (poorly), but I don't think it's an all-or-nothing conversation.

author

Agreed. I explicitly steered away from these broader industry-level myths to focus on the inner workings of AI models, but both the extreme doomerist and utopian perspectives are ultimately misguided.

"We’ve been exposed to ChatGPT for over a year, so we know what generic AI writing sounds like. Hell, 79.67% of content on LinkedIn is probably just ChatGPT by now."

100%. It's everywhere, including the responses. LI also just released a 'rewrite with AI' option that will help make our posts even better (read: like AI).

author

Yup. LI has always been the Land of Cringe, and AI somehow managed to make it even worse now.

The thing that drives me the most nuts is Broetry. That noxious trait of

splitting lines forcing

People to keep scrolling

To read.

Ironically, I use ChatGPT to take their broetry, which doesn't rhyme, and make it rhyme. They get really sensitive about it because it calls out their gratuitous algo hacking.

Here's a great post about it:

https://www.linkedin.com/feed/update/urn:li:share:7161557213856149504/

author

Ha, "Broetry" - that's fantastic!

Simplest way I've found to catch someone using ChatGPT:

"Hey, I read your ticket. It seems really generic and not very actionable. Did you have ChatGPT write some of it?"

"Yes. Sorry."

"I need you to rewrite it with specific steps you're going to take. I don't even care if you get help from ChatGPT but you'll have to check it yourself to see if it makes sense."

That was a real conversation. Yes, someone who is really determined to cheat can find a way to make it look good. I would have been fine with that. But most people are too stupid and/or lazy to go to that trouble and the result is usually obvious.

author

Yup, many will take the path of least resistance, and LLMs sure make that path way easier to take. Then again, like you said, being that lazy is typically very obvious and easy to spot. And it will only become increasingly so as more of us are exposed to ChatGPT-written text and get an intuitive read for it!

I think the fuss over AI detectors and worrying that writers are using AI is, at the core, not a new problem. The problem of plagiarism, or conmen publishing works that aren't theirs, has been around since long before LLMs.

Sure, detectors keep people honest and may give you a sense of peace that what you're reading was written by a human, but I think it's a trivial issue.

What we hope for is that companies who employ writers do their due diligence to make sure their writers are honest and doing their own work. But from what I can tell, even if they don't, eventually these charlatans get found out and don't survive the scrutiny of the masses. They may make a lot of money in the short term, but long term, people conning the system get found out and fall off pretty quickly.

I, for one, perhaps naively, don't see AI-generated text or images being a bigger problem than plagiarism has always been.

author

I'm actually with you on this one. I wrote something similar in relation to Google and SEO. At this point, I don't buy the "AI text is the end of SEO" predictions just yet. In the short run, people will spam Google (and other search engines) with AI-generated fluff made at scale and maybe see some temporary results. But ultimately, quality, relevant content rises to the surface because you can't substitute generic cookie-cutter stuff for genuine expertise and research.

Let's see if we are in fact wearing rose-colored glasses here.

Good writeup. I did want to do a "well, actually" and say that while the LLMs themselves aren't learning from user conversations, it's likely that LLM-enabled products are, in a roundabout way. The best practice for integrating an LLM also involves monitoring and observing user behavior (or at least asking "was this response good?"), and the best engineering teams are likely sifting through those metrics and using them to test better prompts, and eventually fine-tune more tailored models for their use case.

Your point still stands, but "AI for X" products might actually be getting smarter!
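
To make that concrete, here's a minimal sketch of the kind of feedback loop I mean. Everything in it (the field names, the JSONL log, the thumbs-up signal) is a hypothetical illustration, not any particular product's pipeline:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class FeedbackEvent:
    """One "was this response good?" signal tied to a model interaction."""
    prompt: str
    response: str
    thumbs_up: bool
    timestamp: float

def log_feedback(event: FeedbackEvent, path: str = "feedback.jsonl") -> None:
    """Append the event to a JSONL log for later analysis."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

def build_finetune_set(path: str = "feedback.jsonl") -> list[dict]:
    """Keep only thumbs-up interactions as candidate fine-tuning examples."""
    examples = []
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            if event["thumbs_up"]:
                examples.append({"prompt": event["prompt"],
                                 "completion": event["response"]})
    return examples

# A user rates one exchange; later, the team harvests the good ones
# to test better prompts or assemble a fine-tuning dataset.
log_feedback(FeedbackEvent("What's RLHF?", "It stands for...", True, time.time()))
print(len(build_finetune_set()), "candidate training examples")
```

So the base LLM stays frozen, but the product around it keeps accumulating signals it can use to improve.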

author
Apr 20 · edited Apr 20

Thanks - that's very much my understanding, too, but I can see that I didn't convey my thoughts in the most elegant way. The key message was not that LLMs don't get better but the SELF-learning aspect.

Just updated the myth headline from “LLMs upgrade themselves from live interactions” to “LLMs upgrade themselves on their own”.

Also, expanded this existing section with your point above, changing it from "The only time they get better is when the team of researchers behind them trains and releases a new version..." to "The only time they get better is when the team behind them trains (or fine-tunes) and releases a new version..."

We're definitely on the same page, so hopefully I made things more clear-cut now.

If there's any other awkward phrasing, feel free to point it out.

Appreciate the input!

I’m going to get a tattoo of RLHF because I know I will forget this

author

Hurry, before the memory fades!

What about reinforcement learning or RLHF or arrrellHeffff?

author

Yup. That's part of the fine-tuning process done by e.g. OpenAI before releasing the model.
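
For the curious, here's a toy sketch of the core idea. The "reward model" below is a made-up stand-in (real RLHF trains a neural network on human preference rankings, then fine-tunes the base model against it, e.g. with PPO), but the loop is the same: human preferences become a reward signal that steers what the model says.

```python
# Made-up proxies for traits human raters tend to prefer.
PREFERRED_TRAITS = ["step", "because", "for example"]

def reward(response: str) -> float:
    """Score a response; pretend this was learned from human rankings."""
    return float(sum(trait in response.lower() for trait in PREFERRED_TRAITS))

def best_of_n(candidates: list[str]) -> str:
    """Reward-guided selection: return the response the reward model prefers.

    This is best-of-n sampling, a simpler cousin of full RLHF, but it shows
    the essence: the reward signal, distilled from human feedback, decides
    which behavior wins."""
    return max(candidates, key=reward)

candidates = [
    "It just works.",
    "Step 1: collect preference data, because the reward model needs examples.",
]
print(best_of_n(candidates))  # picks the second, more "preferred" response
```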

Apr 19 · Liked by Daniel Nest

Really enjoyed this Daniel, as always!

I linked to this in my own weekly newsletter, so hopefully you get some love from it. Here's what I said...

"In my public talks, one of my favourite things to do is to shoot down myths about AI. It's not only fun to do, for me and the audience, but it's a surprisingly powerful first step in raising levels of AI education and literacy. That's why I loved the article below so much. Written by one of the easiest-to-understand, effective and funny AI commentators I know, you'll get a few chuckles out of this one... "

author

Happy to hear you enjoyed it, Mark - and I appreciate the shoutout in your newsletter, very kind of you. I'd be curious to hear what kind of misconceptions you typically come across in your public talks and first-hand interactions with businesses. Do some of them match the ones above, or are they entirely different?

Apr 19 · Liked by Daniel Nest

A 100% match, in my work with clients, to the 3 myths you've written about. They're the 3 big ones. A close 4th would be more of a misconception than a myth: "if I put things into it, other people can ask for that info and it will give it to them."

Then, I find there's a plethora of smaller myths that all come from people only having used free AI (e.g. free ChatGPT) and having underwhelming results, then drawing conclusions from 1 or 2 of those disappointing experiences. Those myths include things like:

- It can’t write like me (“it sounds too formal/robotic/exaggerated/American, insert adjective”)

- It only knows about the world until December 2022.

- It can’t access the internet.

- It can’t really be creative.

- etc.

These are the myths that I enjoy busting, with fun demonstrations, because they’re also the ones that are holding back mass adoption, and therefore holding companies back from the efficiency and productivity gains awaiting them.

author
Apr 19 · edited Apr 19

It's always amazing to me that even those misconceptions still exist.

But, as discussed with a friend of mine recently, we live in a sort of "AI bubble" where things that are obvious to us because we follow the developments closely are completely new to almost everyone else.

I can imagine how fun it is to watch someone experience ChatGPT browsing the web or creating an image for the first time. You get to relive the magic moment of when you first tried it yourself.

Apr 19 · Liked by Daniel Nest

Exactly! I ran a training session yesterday for ~100 real estate agents... and the laughs, gasps, and amazement were all on full display. It's very satisfying to open eyes like that, and dispel those myths in the process.

Keep up the great work Daniel!
