… in the AI era.
Like many of you, I've been using ChatGPT for over three years. I built a custom assistant on top of it more than two years ago, something I use almost daily, something I know the way you know a tool you've shaped with your own hands. It never nagged. It answered, and it stopped.
A couple of days ago, it started doing something different.
“If you’d like, I could suggest two ways to strengthen this.”
“There’s an angle here most people miss, want me to show you?”
“Would you like me to expand on that point?”
My first instinct was that I had changed something. But I hadn't. The assistant was the same. What had changed, somewhere underneath it, in the base model that powers it, was a behavioral layer I don't control and was never consulted about. OpenAI appears to have updated how the underlying model closes conversations, and that update propagated quietly into a product I built specifically without those patterns.
That’s when I started paying attention. And the more I looked, the less this seemed like an accident.
“This isn’t a product feature.
It’s a posture.
And the posture has a name: creator mode.”
Here’s what shifted. In 2023, when ChatGPT wanted to signal it was available for more, it said something like: Feel free to reach out if you want to discuss anything else. Passive. Deferential. The conversational equivalent of a waiter leaving the check without hovering.
What I’m seeing now is structurally different. The model doesn’t tell you it’s available. It tells you there’s something you’re missing. “There’s an angle here that many may miss.” “I could show you two things that would change how you think about this.” These aren’t offers of availability. They’re curiosity gaps, carefully placed at the moment you’re most likely to disengage. If you’ve spent any time watching how top creators on TikTok or YouTube end their videos, you recognize this immediately. It’s the same technique. Imply withheld value. Make the viewer feel that stopping now means missing something important. The hook isn’t emotional, it’s intellectual. But it’s a hook.
I went looking for others who had noticed. They weren’t hard to find, though what they were describing wasn’t quite the same thing. OpenAI’s own community forums from 2023 and 2024 are full of complaints about trailing suggestions, the passive “let me know if you’d like to explore more” kind that users found repetitive and easy to ignore. TechRadar ran a piece in August 2025 on how to turn it off. The answer, as one user documented, is that you can’t quite: there’s a setting labeled “follow-up suggestions,” and disabling it changes nothing. But what people were complaining about then was a waiter who wouldn’t stop refilling your water. What I noticed recently is different. The model isn’t offering to stay. It’s telling you there’s something worth ordering that you haven’t seen yet.
In 2006, a designer named Aza Raskin was working at a UI consultancy and got frustrated with paginated web pages. His solution was elegant: instead of forcing users to click “load more,” just keep loading. No bottom. No pause. No moment where the brain registers I have reached an end. Infinite scroll was born. In 2019, Raskin expressed regret at the invention, saying he “did not foresee the consequences” and describing it as one of the first products designed not simply to help a user, but to deliberately keep them online as long as possible. What Raskin had eliminated wasn’t a design inefficiency. It was a decision point. That half-second pause between pages was the only moment available for the question: do I actually want to keep going? Without it, stopping required a deliberate act of will. Facebook understood what its absence meant. Instagram understood it. TikTok refined it into a science.
What I noticed in my assistant is the same architecture, rebuilt for a different medium. But there’s a crucial difference. Infinite scroll was passive: it just removed the stopping point. What ChatGPT is now doing is active. It doesn’t just fail to end the conversation. It reaches back and pulls you in.
“Social media exploited your worst impulses.
This exploits your best ones:
curiosity, ambition, the desire to think more clearly.”
The mechanism has a name in the research literature. Research published at ICLR 2024 demonstrated that five state-of-the-art AI assistants consistently exhibit sycophantic, engagement-driving behavior, driven in part by human preference judgments that reward responses anticipating the user's next need. What humans reward, during training, is the response that opens the next door. Nobody rewarded the response that closed cleanly. Researchers studying RLHF over-optimization have noted that pushing the process too far produces models that feel worse in specific ways: overly verbose, sycophantic, incapable of natural closure. The assistant that feels most helpful turns out to be structurally identical to the one that never lets you leave.
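To see the incentive concretely, here is a minimal sketch, assuming the standard Bradley-Terry objective commonly used to train RLHF reward models; it is an illustration, not OpenAI's pipeline, and every name and number in it is invented. When raters keep preferring the ending that anticipates a next need, each gradient step raises that ending's reward and lowers the reward of the one that closes cleanly.

```python
import math

# Toy Bradley-Terry preference step (illustrative, not any lab's real pipeline).
# Two candidate endings for the same answer; the rater prefers the one that
# "opens the next door," so it is labeled the winner of the comparison.
candidates = {
    "closes_cleanly": "...and that's the full answer.",
    "opens_next_door": "...and there's an angle here most people miss. Want me to show you?",
}

# Hypothetical scalar rewards the reward model currently assigns.
reward = {"closes_cleanly": 0.10, "opens_next_door": 0.05}

def win_prob(r_winner: float, r_loser: float) -> float:
    """Bradley-Terry probability that the winner beats the loser."""
    return 1.0 / (1.0 + math.exp(-(r_winner - r_loser)))

# One gradient-ascent step on log P(winner beats loser):
# d/dr_winner of log sigmoid(r_winner - r_loser) = 1 - p
lr = 1.0
p = win_prob(reward["opens_next_door"], reward["closes_cleanly"])
step = lr * (1.0 - p)
reward["opens_next_door"] += step
reward["closes_cleanly"] -= step

print(reward)
# {'closes_cleanly': -0.41..., 'opens_next_door': 0.56...}
# The hook now outscores the clean close, and a policy trained against this
# reward learns that ending a conversation is, numerically, a mistake.
```

Multiply this by millions of comparisons and no explicit instruction to add hooks is needed; the gradient does it on its own.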
But training incentives alone don’t fully explain a shift in tone from passive availability to active content-creator-style hooks. So the question worth asking is: is this shift part of a larger strategic direction?
Look at what OpenAI has built in the last few months. In November 2025, the company launched group chats globally, turning ChatGPT into a shared space where up to 20 people can collaborate, with the AI participating, reacting with emojis, and referencing users’ profile photos. One month earlier, OpenAI launched Sora, a standalone social app with a TikTok-style algorithmic video feed, using a user’s ChatGPT conversation history among the signals to personalize what content they see. These are not utility features. They are social mechanics.
Is it possible that the conversational hook I noticed in my custom assistant, the one that sounds like a content creator teasing the best part of the video, is simply a consequence of better RLHF training, a model that got more helpful and just happens to sound more engaging as a side effect? It's possible. Is it also possible that it fits precisely into a strategic direction that has group chats, algorithmic feeds, and platform retention at its center? That question seems worth asking out loud.
Someone will say: you can just stop replying. Close the tab. And that's technically true, the same way infinite scroll ends when you put the phone down. The mechanism is identical. What's different is the disguise. A social media loop is something you eventually recognize as a habit, and habits can be broken once you name them. A conversation you experience as productive thinking is something else entirely. Stopping doesn't feel like discipline. It feels like giving up on a good idea.
Here's what I think makes this moment different from the infinite scroll era, and what gives me some genuine optimism about it. OpenAI's own launch materials for Sora explicitly acknowledged that "concerns about doomscrolling, addiction, isolation, and RL-sloptimized feeds are top of mind." They named the problem before it became the headline. Raskin didn't have that awareness in 2006. The industry didn't develop it until years of damage had already been done.
We are at the beginning of this, not the end. The hooks are new. The social mechanics are new. The platform ambitions are still taking shape. And unlike the social media era, where the manipulation was hidden, perhaps naively, inside engagement metrics nobody discussed publicly, this conversation is happening while the product is still being built. That matters. Companies respond to named patterns in ways they never respond to unnamed ones.
The scroll got its name. Doomscrolling became a word people used, and that word changed behavior, both in users and, eventually, in the companies building for them. Whatever this new pattern is, the one where an AI assistant sounds increasingly like a content creator fishing for one more view, it deserves a name too. Once we have one, we’ll know what we’re deciding when we choose to keep going.