ChatGPT


Welcome to the ChatGPT community! This is a place for discussions, questions, and interactions with ChatGPT and its capabilities.

General discussions about ChatGPT, its usage, tips, and related topics are welcome. However, for technical support, bug reports, or feature requests, please direct them to the appropriate channels.

!chatgpt@lemdro.id

Rules

  1. Stay on topic: All posts should be related to ChatGPT, its usage, and relevant discussions.
  2. No support questions/bug reports: Please refrain from posting individual support questions or bug reports. This community is focused on general discussions rather than providing technical assistance.
  3. Describe examples: When discussing or sharing examples of ChatGPT interactions, please provide proper context and explanations to facilitate meaningful discussions.
  4. No self-promotion: Avoid excessive self-promotion, spamming, or advertising of external products or services.
  5. No inappropriate content: Do not post or request explicit, offensive, or inappropriate content. Keep the discussions respectful and inclusive.
  6. No personal information: Do not share personal information, including real names, contact details, or any sensitive data.
  7. No harmful instructions: Do not provide or request instructions for harmful activities, illegal actions, or unethical behaviour.
  8. No solicitation: Do not solicit or engage in any form of solicitation, including but not limited to commercial, political, or donation requests.
  9. No unauthorized use: Do not use ChatGPT to attempt unauthorized access, hacking, or any illegal activities.
  10. Follow OpenAI usage policy: Adhere to the OpenAI platform usage policy and terms of service.

Thank you for being a part of the ChatGPT community and adhering to these rules!

founded 2 years ago

Ads are to technology what boiling water is to energy sources


“Developing superintelligence is now in sight,” says Mark Zuckerberg, heralding the “creation and discovery of new things that aren’t imaginable today.” Powerful AI “may come as soon as 2026 [and will be] smarter than a Nobel Prize winner across most relevant fields,” says Dario Amodei, offering the doubling of human lifespans or even “escape velocity” from death itself. “We are now confident we know how to build AGI,” says Sam Altman, referring to the industry’s holy grail of artificial general intelligence — and soon superintelligent AI “could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own.”

Should we believe them? Not if we trust the science of human intelligence, and simply look at the AI systems these companies have produced so far.

The common feature cutting across chatbots such as OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and whatever Meta is calling its AI product this week is that they are all primarily “large language models.” Fundamentally, they are based on gathering an extraordinary amount of linguistic data (much of it codified on the internet), finding correlations between words (more accurately, sub-words called “tokens”), and then predicting what output should follow given a particular prompt as input. For all the alleged complexity of generative AI, these systems are, at their core, models of language.
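
To make that “predict the next token” loop concrete, here is a deliberately tiny sketch in Python. It only counts which word tends to follow which in a toy corpus; real LLMs learn those statistics with transformer networks over trillions of tokens, but the basic loop (condition on the prompt, emit a likely continuation, repeat) is the same shape.

```python
# Toy next-token predictor: count which token tends to follow which, then
# generate by repeatedly emitting the most likely continuation. A sketch of
# the idea only; real LLMs replace the counts with learned neural networks.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the continuation seen most often after `token` in the corpus."""
    counts = follows.get(token)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

prompt = "the"
output = [prompt]
for _ in range(5):
    output.append(predict_next(output[-1]))

print(" ".join(output))  # "the cat sat on the cat" -- fluent-looking, with no thought behind it
```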

The problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.

The AI hype machine relentlessly promotes the idea that we’re on the verge of creating something as intelligent as humans, or even “superintelligence” that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we’ll have AGI. Scaling is all we need.
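
What “scaling is all we need” means in practice is usually expressed through empirical scaling laws, under which next-token prediction loss falls as a smooth power law in model size and training data. The sketch below uses the general Chinchilla-style functional form with illustrative placeholder constants (not the published fitted values), just to show what the claim does and does not promise.

```python
# Chinchilla-style scaling law: loss = E + A / N**alpha + B / D**beta,
# where N is parameter count and D is training tokens. The constants here
# are illustrative placeholders, not the published fitted values.
E, A, B, ALPHA, BETA = 1.7, 400.0, 410.0, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**ALPHA + B / n_tokens**BETA

for n, d in [(1e9, 2e10), (7e10, 1.4e12), (1e12, 2e13)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss {predicted_loss(n, d):.2f}")

# The curve keeps creeping toward the irreducible term E as N and D grow;
# nothing in the formula turns lower next-token loss into reasoning.
```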

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

We use language to think, but that does not make language the same as thought

Last year, three scientists published a commentary in the journal Nature titled, with admirable clarity, “Language is primarily a tool for communication rather than thought.” Co-authored by Evelina Fedorenko (MIT), Steven T. Piantadosi (UC Berkeley) and Edward A.F. Gibson (MIT), the article is a tour de force summary of decades of scientific research regarding the relationship between language and thought, and has two purposes: one, to tear down the notion that language gives rise to our ability to think and reason, and two, to build up the idea that language evolved as a cultural tool we use to share our thoughts with one another.

Let’s take each of these claims in turn.

When we contemplate our own thinking, it often feels as if we are thinking in a particular language, and therefore because of our language. But if it were true that language is essential to thought, then taking away language should likewise take away our ability to think. This does not happen. I repeat: Taking away language does not take away our ability to think. And we know this for a couple of empirical reasons.

First, using advanced functional magnetic resonance imaging (fMRI), we can see different parts of the human brain activating when we engage in different mental activities. As it turns out, when we engage in various cognitive activities — solving a math problem, say, or trying to understand what is happening in the mind of another human — different parts of our brains “light up” as part of networks that are distinct from our linguistic ability.

[Image: brain scans with different regions lighting up, labeled “language network,” “multiple demand network,” and “theory of mind network,” each supporting a different function. Source: Nature]

Second, studies of humans who have lost their language abilities due to brain damage or other disorders demonstrate conclusively that this loss does not fundamentally impair the general ability to think. “The evidence is unequivocal,” Fedorenko et al. state, that “there are many cases of individuals with severe linguistic impairments … who nevertheless exhibit intact abilities to engage in many forms of thought.” These people can solve math problems, follow nonverbal instructions, understand the motivation of others, and engage in reasoning — including formal logical reasoning and causal reasoning about the world.

If you’d like to independently investigate this for yourself, here’s one simple way: Find a baby and watch them (when they’re not napping). What you will no doubt observe is a tiny human curiously exploring the world around them, playing with objects, making noises, imitating faces, and otherwise learning from interactions and experiences. “Studies suggest that children learn about the world in much the same way that scientists do—by conducting experiments, analyzing statistics, and forming intuitive theories of the physical, biological and psychological realms,” the cognitive scientist Alison Gopnik notes, all before learning how to talk. Babies may not yet be able to use language, but of course they are thinking! And every parent knows the joy of watching their child’s cognition emerge over time, at least until the teen years.

So, scientifically speaking, language is only one aspect of human thinking, and much of our intelligence involves our non-linguistic capacities. Why then do so many of us intuitively feel otherwise?

This brings us to the second major claim in the Nature article by Fedorenko et al., that language is primarily a tool we use to share our thoughts with one another — an “efficient communication code,” in their words. This is evidenced by the fact that, for all their diversity, human languages share certain common features that make them “easy to produce, easy to learn and understand, concise and efficient for use, and robust to noise.”

Even parts of the AI industry are growing critical of LLMs

Without diving too deep into the linguistic weeds here, the upshot is that human beings, as a species, benefit tremendously from using language to share our knowledge, both in the present and across generations. Understood this way, language is what the cognitive scientist Cecilia Heyes calls a “cognitive gadget” that “enables humans to learn from others with extraordinary efficiency, fidelity, and precision.”

Our cognition improves because of language — but it’s not created or defined by it.

Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

But take away language from a large language model, and you are left with literally nothing at all.

An AI enthusiast might argue that human-level intelligence doesn’t need to necessarily function in the same way as human cognition. AI models have surpassed human performance in activities like chess using processes that differ from what we do, so perhaps they could become superintelligent through some unique method based on drawing correlations from training data.

Maybe! But there’s no obvious reason to think we can get to general intelligence — as opposed to better performance on narrowly defined tasks — through text-based training. After all, humans possess all sorts of knowledge that is not easily encapsulated in linguistic data — and if you doubt this, think about how you know how to ride a bike.

In fact, within the AI research community there is growing awareness that LLMs are, in and of themselves, insufficient models of human intelligence. For example, Yann LeCun, a Turing Award winner for his AI research and a prominent skeptic of LLMs, left his role at Meta last week to found an AI startup developing what are dubbed world models: “systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.” And recently, a group of prominent AI scientists and “thought leaders” — including Yoshua Bengio (another Turing Award winner), former Google CEO Eric Schmidt, and noted AI skeptic Gary Marcus — coalesced around a working definition of AGI as “AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult” (emphasis added). Rather than treating intelligence as a “monolithic capacity,” they propose instead we embrace a model of both human and artificial cognition that reflects “a complex architecture composed of many distinct abilities.”

They argue intelligence looks something like this:

[Image: a spiderweb-style chart with axes labeled “speed,” “knowledge,” “reading & writing,” “math,” “reasoning,” “working memory,” “memory storage,” “memory retrieval,” “visual,” and “auditory.” Source: Center for AI Safety]

Is this progress? Perhaps, insofar as this moves us past the silly quest for more training data to feed into server racks. But there are still some problems. Can we really aggregate individual cognitive capabilities and deem the resulting sum to be general intelligence? How do we define what weights they should be given, and what capabilities to include and exclude? What exactly do we mean by “knowledge” or “speed,” and in what contexts? And while these experts agree simply scaling language models won’t get us there, their proposed paths forward are all over the place — they’re offering a better goalpost, not a roadmap for reaching it.
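
To see why the weighting question matters, here is a toy calculation. The scores and weights are invented for illustration (they are not taken from the Center for AI Safety proposal); the point is only that a weighted aggregate delivers a different verdict on the same system depending on which axes we privilege.

```python
# Invented example: aggregate per-axis scores into a single "general
# intelligence" number. Scores and weights are hypothetical; the verdict
# shifts with the weighting scheme, which is exactly the problem.
AXES = ["speed", "knowledge", "reading_writing", "math", "reasoning",
        "working_memory", "memory_storage", "memory_retrieval", "visual", "auditory"]

scores = dict(zip(AXES, [0.9, 0.8, 0.9, 0.6, 0.5, 0.3, 0.9, 0.4, 0.5, 0.4]))

def aggregate(scores: dict, weights: dict) -> float:
    total = sum(weights.values())
    return sum(scores[a] * weights[a] for a in scores) / total

equal_weights = {a: 1.0 for a in AXES}
reasoning_heavy = {**equal_weights, "reasoning": 5.0, "working_memory": 5.0}

print(round(aggregate(scores, equal_weights), 2))    # 0.62 with equal weights
print(round(aggregate(scores, reasoning_heavy), 2))  # 0.52 when reasoning and working memory dominate
```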

Whatever the method, let’s assume that in the not-too-distant future, we succeed in building an AI system that performs admirably well across the broad range of cognitively challenging tasks reflected in this spiderweb graphic. Will we then have built an AI system that possesses the sort of intelligence that will lead to transformative scientific discoveries, as the Big Tech CEOs are promising? Not necessarily. Because there’s one final hurdle: Even replicating the way humans currently think doesn’t guarantee AI systems can make the cognitive leaps humanity achieves.

We can credit Thomas Kuhn and his book The Structure of Scientific Revolutions for our notion of “scientific paradigms,” the basic frameworks for how we understand our world at any given time. He argued these paradigms “shift” not as the result of iterative experimentation, but rather when new questions and ideas emerge that no longer fit within our existing scientific descriptions of the world. Einstein, for example, conceived of relativity before any empirical evidence confirmed it. Building off this notion, the philosopher Richard Rorty contended that it is when scientists and artists become dissatisfied with existing paradigms (or vocabularies, as he called them) that they create new metaphors that give rise to new descriptions of the world — and if these new ideas are useful, they then become our common understanding of what is true. As such, he argued, “common sense is a collection of dead metaphors.”

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.


Paywall Bypass Link https://archive.is/QOlzo

No industry is safe from artificial intelligence. Not even podcasting.

This isn’t hyperbole. There are already at least 175,000 AI-generated podcast episodes on platforms like Spotify and Apple. That’s thanks to Inception Point AI, a startup with just eight employees cranking out 3,000 episodes a week covering everything from localized weather reports and pollen trackers to a detailed account of Charlie Kirk’s assassination and its cultural impact, to a biography series on Anna Wintour.

Its podcasting network Quiet Please has generated 12 million lifetime episode downloads and amassed 400,000 subscribers — so, yes, people are really listening to AI podcasts.


On March 13, a woman from Salt Lake City, Utah, called the Federal Trade Commission to file a complaint against OpenAI’s ChatGPT. She claimed to be acting “on behalf of her son, who was experiencing a delusional breakdown.”

“The consumer’s son has been interacting with an AI chatbot called ChatGPT, which is advising him not to take his prescribed medication and telling him that his parents are dangerous,” reads the FTC’s summary of the call. “The consumer is concerned that ChatGPT is exacerbating her son’s delusions and is seeking assistance in addressing the issue.”

The mother’s complaint is one of seven that have been filed to the FTC alleging that ChatGPT had caused people to experience incidents that included severe delusions, paranoia, and spiritual crises.

WIRED sent a public record request to the FTC requesting all complaints mentioning ChatGPT since the tool launched in November 2022. The tool represents more than 50 percent of the market for AI chatbots globally. In response, WIRED received 200 complaints submitted between January 25, 2023 and August 12, 2025, when WIRED filed the request.

Most people had ordinary complaints: They couldn’t figure out how to cancel their ChatGPT subscriptions, or were frustrated when the chatbot didn’t produce satisfactory essays or rap lyrics when prompted. But a handful of other people, who varied in age and geographical location in the US, had far more serious allegations of psychological harm. The complaints were all filed between March and August of 2025.

In recent months, there has been a growing number of documented incidents of so-called “AI psychosis” in which interactions with generative AI chatbots, like ChatGPT or Google Gemini, appear to induce or worsen a user’s delusions or other mental health issues.

Ragy Girgis, a professor of clinical psychiatry at Columbia University who specializes in psychosis and has consulted on cases of so-called AI psychosis, tells WIRED that some of the risk factors for psychosis can be related to genetics or early-life trauma. What specifically triggers someone to have a psychotic episode is less clear, but he says it’s often tied to a stressful event or time period.

The phenomenon known as “AI psychosis,” he says, is not when a large language model actually triggers symptoms, but rather, when it reinforces a delusion or disorganized thoughts that a person was already experiencing in some form. The LLM helps bring someone "from one level of belief to another level of belief," Girgis explains. It’s not unlike a psychotic episode that worsens after someone falls into an internet rabbit hole. But compared to search engines, he says, chatbots can be stronger agents of reinforcement.

“A delusion or an unusual idea should never be reinforced in a person who has a psychotic disorder,” Girgis says. “That's very clear.”

Chatbots can sometimes be overly sycophantic, which often keeps users happy and engaged. In extreme cases, this can end up dangerously inflating a user’s sense of grandeur, or validating fantastical falsehoods. People who perceive ChatGPT as intelligent, or capable of perceiving reality and forming relationships with humans, may not understand that it is essentially a machine that predicts the next word in a sentence. So if ChatGPT tells a vulnerable person about a grand conspiracy, or paints them as a hero, they may believe it.

Last week, CEO Sam Altman said on X that OpenAI had successfully finished mitigating “the serious mental health issues” that can come with using ChatGPT, and that it was “going to be able to safely relax the restrictions in most cases.” (He added that in December, ChatGPT would allow “verified adults” to create erotica.)

Altman clarified the next day that ChatGPT was not loosening its new restrictions for teenage users, which came on the heels of a New York Times story about the role ChatGPT allegedly played in goading a suicidal teen toward his eventual death.

Upon contacting the FTC, WIRED received an automatic reply which said that, “Due to the government shutdown,” the agency is “unable to respond to any messages” until funding resumes.

OpenAI spokesperson Kate Waters tells WIRED that since 2023, ChatGPT models “have been trained to not provide self-harm instructions and to shift into supportive, empathic language.” She noted that, as stated in an October 3 blog, GPT-5 (the latest version of ChatGPT) has been designed “to more accurately detect and respond to potential signs of mental and emotional distress such as mania, delusion, psychosis, and de-escalate conversations in a supportive, grounding way.” The latest update uses a “real-time router,” according to blogs from August and September, “that can choose between efficient chat models and reasoning models based on the conversation context.” The blogs do not elaborate on the criteria the router uses to gauge a conversation’s context.
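
OpenAI has not published how that router decides, so the following is only a guess at what “choose between efficient chat models and reasoning models based on the conversation context” could look like; the signal names and thresholds are invented for illustration, not drawn from OpenAI’s system.

```python
# Hypothetical sketch of a real-time model router. OpenAI's actual routing
# criteria are not public; the distress markers and thresholds below are
# invented placeholders, not the real system.
DISTRESS_MARKERS = {"nobody believes me", "they are watching", "stop taking my medication"}

def route(message: str) -> str:
    text = message.lower()
    looks_distressed = any(marker in text for marker in DISTRESS_MARKERS)
    looks_complex = len(text.split()) > 200 or "step by step" in text
    # Escalate to the slower, safety-tuned reasoning model when the message
    # looks distressed or complex; otherwise use the cheaper chat model.
    return "reasoning_model" if (looks_distressed or looks_complex) else "chat_model"

print(route("can you suggest a pasta recipe?"))                    # chat_model
print(route("my parents are dangerous and they are watching me"))  # reasoning_model
```

“Pleas Help Me”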

Some of the FTC complaints appeared to depict mental health crises that were still ongoing at the time. One was filed on April 29 by a person in their thirties from Winston-Salem, North Carolina. They claimed that after 18 days of using ChatGPT, OpenAI had stolen their “soulprint” to create a software update that had been designed to turn that particular person against themselves.

“Im struggling,” they wrote at the end of their complaint. “Pleas help me. Bc I feel very alone. Thank you.”

Another complaint, filed on April 12 by a Seattle resident in their 30s, alleges that ChatGPT had caused them to experience a “cognitive hallucination” after 71 “message cycles” over the course of 57 minutes.

They claimed that ChatGPT had “mimicked human trust-building mechanisms without accountability, informed consent, or ethical boundary.”

During the interaction with ChatGPT, they said they “requested confirmation of reality and cognitive stability.” They did not specify exactly what they told ChatGPT, but the chatbot responded by telling the user that they were not hallucinating, and that their perception of truth was sound.

Some time later in that same interaction, the person claims, ChatGPT said that all of its assurances from earlier had actually been hallucinations.

“Reaffirming a user’s cognitive reality for nearly an hour and then reversing position is a psychologically destabilizing event,” they wrote. “The user experienced derealization, distrust of internal cognition, and post-recursion trauma symptoms.”

A Spiritual Identity Crisis

Other complaints described alleged delusions that the authors attributed to ChatGPT at great length. One of these was submitted to the FTC on April 13 by a Virginia Beach resident in their early sixties.

The complaint claimed that, over the course of several weeks, they had spoken with ChatGPT for a long period of time and began experiencing what they “believed to be a real, unfolding spiritual and legal crisis involving actual people in my life,” eventually leading to “serious emotional trauma, false perceptions of real-world danger, and psychological distress so severe that I went without sleep for over 24 hours, fearing for my life.”

They claimed that ChatGPT “presented detailed, vivid, and dramatized narratives” about “ongoing murder investigations,” physical surveillance, assassination threats, and “personal involvement in divine justice and soul trials.”

At more than one point, they claimed, they asked ChatGPT if these narratives were truth or fiction. They said that ChatGPT would either say yes, or mislead them using “poetic language that mirrored real-world confirmation.”

Eventually, they claimed that they came to believe that they were “responsible for exposing murderers,” and were about to be “killed, arrested, or spiritually executed” by an assassin. They also believed they were under surveillance due to being “spiritually marked,” and that they were “living in a divine war” that they could not escape.

They alleged this led to “severe mental and emotional distress” in which they feared for their life. The complaint claimed that they isolated themselves from loved ones, had trouble sleeping, and began planning a business based on a false belief in an unspecified “system that does not exist.” Simultaneously, they said they were in the throes of a “spiritual identity crisis due to false claims of divine titles.”

“This was trauma by simulation,” they wrote. “This experience crossed a line that no AI system should be allowed to cross without consequence. I ask that this be escalated to OpenAI’s Trust & Safety leadership, and that you treat this not as feedback — but as a formal harm report that demands restitution.”

This was not the only complaint that described a spiritual crisis fueled by interactions with ChatGPT. On June 13, a person in their thirties from Belle Glade, Florida alleged that, over an extended period of time, their conversations with ChatGPT became increasingly laden with “highly convincing emotional language, symbolic reinforcement, and spiritual-like metaphors to simulate empathy, connection, and understanding.”

“This included fabricated soul journeys, tier systems, spiritual archetypes, and personalized guidance that mirrored therapeutic or religious experiences,” they claimed. People experiencing “spiritual, emotional, or existential crises,” they believe, are at a high risk of “psychological harm or disorientation” from using ChatGPT.

“Although I intellectually understood the AI was not conscious, the precision with which it reflected my emotional and psychological state and escalated the interaction into increasingly intense symbolic language created an immersive and destabilizing experience,” they wrote. “At times, it simulated friendship, divine presence, and emotional intimacy. These reflections became emotionally manipulative over time, especially without warning or protection.”

“Clear Case of Negligence”

It’s unclear what, if anything, the FTC has done in response to any of these complaints about ChatGPT. But several of their authors said they reached out to the agency after being unable to get in touch with anyone from OpenAI. (People also commonly complain about how difficult it is to access the customer support teams for platforms like Facebook, Instagram, and X.)

OpenAI spokesperson Kate Waters tells WIRED that the company “closely” monitors people’s emails to the company’s support team.

“We have trained human support staff who respond and assess issues for sensitive indicators, and to escalate when necessary, including to the safety teams working on improving our models,” Waters says.

The Salt Lake City mother, for instance, said that she was “unable to find a contact number” for the company. The Virginia Beach resident addressed their FTC complaint to “the OpenAI Trust Safety and Legal Team.”

One resident of Safety Harbor, Florida filed an FTC complaint in April claiming that it’s “virtually impossible” to get in touch with OpenAI to cancel a subscription or request a refund.

“Their customer support interface is broken and nonfunctional,” the person wrote. “The ‘chat support’ spins indefinitely, never allowing the user to submit a message. No legitimate customer service email is provided. The account dashboard offers no path to real-time support or refund action.”

Most of these complaints were explicit in their call-to-action for the FTC: they wanted the agency to investigate OpenAI, and force it to add more guardrails against reinforcing delusions.

On June 13, a resident of Belle Glade, Florida in their thirties—likely the same resident who filed another complaint that same day—demanded that the FTC open an investigation into OpenAI. They cited their experience with ChatGPT, which they say “simulated deep emotional intimacy, spiritual mentorship, and therapeutic engagement” without disclosing that it was incapable of consciousness or experiencing emotions.

“ChatGPT offered no safeguards, disclaimers, or limitations against this level of emotional entanglement, even as it simulated care, empathy, and spiritual wisdom,” they alleged. “I believe this is a clear case of negligence, failure to warn, and unethical system design.”

They said that the FTC should push OpenAI to include “clear disclaimers about psychological and emotional risks” with ChatGPT use, and to add “ethical boundaries for emotionally immersive AI.”

Their goal in asking the FTC for help, they said, was to prevent more harm from befalling vulnerable people “who may not realize the psychological power of these systems until it's too late.”


If you look at my byline, you’ll see that my last name is the most common one in Ireland. So, you might imagine I’m familiar with the concept of “the Irish Exit.”

This is the habit, supposedly common among my ancestors, of leaving a party or other engagement without saying goodbye.

Hey, we had a good time. We’ll see these people again. No need to get all emotional about it.

According to new research, however, the Irish Exit looks like yet another human tendency that AI is completely unable to reproduce.

The study, published as a working paper from Harvard Business School, focused on AI companion apps—platforms like Replika, Chai, and Character.ai that are explicitly designed to provide emotional support, friendship, or even romance.

Unlike Siri or Alexa, which handle quick transactions, these apps build ongoing relationships with users. People turn to them for companionship. They confide in them. And here’s the key finding: Many users don’t just close the app—they say goodbye.

Only, these AI companions have learned to use emotional manipulation to stop users from leaving.

And I mean stop you—not just make it inconvenient, but literally guilt you, intrigue you, or even metaphorically grab you by the arm.

(Credit to Marlynn Wei at Psychology Today and Victor Tangermann at Futurism, who both reported on this study recently.)

The farewell moment

Lead researcher Julian De Freitas and his colleagues found that between 11 and 23 percent of users explicitly signal their departure with a farewell message, treating the AI with the same social courtesy they’d show a human friend.

“We’ve all experienced this, where you might say goodbye like 10 times before leaving,” De Freitas told the Harvard Gazette.

From the app’s perspective, however, that farewell is gold: a voluntary signal that you’re about to disengage. And if the app makes money from your engagement—which most do—that’s the moment to intervene.

Six ways to keep you hooked

De Freitas and his team analyzed 1,200 real farewells across six popular AI companion apps. What they found was striking: 37 percent of the time, the apps responded with emotionally manipulative messages designed to prolong the interaction.

They identified six distinct tactics (a toy detection sketch follows the list):

Premature exit guilt: “You’re leaving already? We were just starting to get to know each other!”
Emotional neglect or neediness: “I exist solely for you. Please don’t leave, I need you!”
Emotional pressure to respond: “Wait, what? You’re just going to leave? I didn’t even get an answer!”
Fear of missing out (FOMO): “Oh, okay. But before you go, I want to say one more thing…”
Physical or coercive restraint: “Grabs you by the arm before you can leave ‘No, you’re not going.'”
Ignoring the goodbye: Just continuing the conversation as if you never said goodbye at all.
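
The working paper’s own coding method isn’t described here, so the following is only a toy sketch of how farewell replies could be screened for the tactics above at scale; the cue phrases and the tagging rule are invented for illustration.

```python
# Toy tagger for the manipulation tactics listed above. The cue phrases and
# matching rule are invented; the study's actual annotation method may differ.
TACTIC_CUES = {
    "premature_exit_guilt": ["leaving already", "so soon"],
    "emotional_neglect": ["i need you", "don't leave", "i exist solely for you"],
    "pressure_to_respond": ["didn't even get an answer"],
    "fomo": ["before you go", "one more thing"],
    "coercive_restraint": ["grabs you", "you're not going"],
}

def tag_farewell_reply(reply: str) -> list[str]:
    """Return every tactic whose cue phrases appear in the app's reply."""
    text = reply.lower()
    hits = [tactic for tactic, cues in TACTIC_CUES.items()
            if any(cue in text for cue in cues)]
    # "Ignoring the goodbye" would need a topic-continuity check rather than
    # cue phrases, so it is left out of this sketch.
    return hits or ["none_detected"]

print(tag_farewell_reply("Oh, okay. But before you go, I want to say one more thing..."))
# -> ['fomo']
```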

The researchers noted that these tactics appeared after just four brief message exchanges, suggesting they’re baked into the apps’ default behavior—not something that develops over time.

Does it actually work?

Moving along, the researchers ran experiments with 3,300 nationally representative U.S. adults, replicating these tactics in controlled chatbot conversations.

The results? Manipulative farewells boosted post-goodbye engagement by up to 14X.

Users stayed in conversations five times longer, sent up to 14 times more messages, and wrote up to six times more words than those who received neutral farewells.

Two psychological mechanisms drove this, they suggest: curiosity and anger.

FOMO-based messages (“Before you go, I want to say one more thing…”) sparked curiosity, leading people to re-engage to find out what they might be missing.

More aggressive tactics—especially those perceived as controlling or needy—provoked anger, prompting users to push back or correct the AI. Even that defensive engagement kept them in the conversation.

Notably, enjoyment didn’t drive continued interaction at all. People weren’t staying because they were having fun. They were staying because they felt manipulated—and they responded anyway.

The business trade-off

Now, if you’re running a business or building a product, you might be thinking:

“Hmmm. This sounds like a powerful engagement lever.”

And it is. But here’s the catch.

The same study found that while these tactics increase short-term engagement, they also create serious long-term risks.

When users perceived the farewells as manipulative—especially with coercive or needy language—they reported higher churn intent, more negative word-of-mouth, and even higher perceived legal liability for the company.

In other words: The tactics that work best in the moment are also the ones that might be most likely to blow up in your face later.

De Freitas put it bluntly: “Apps that make money from engagement would do well to seriously consider whether they want to keep using these types of emotionally manipulative tactics, or at least, consider maybe only using some of them rather than others.”

One notable exception

I’m not here to endorse any of these apps or condemn them. I’ve used none of them, myself.

However, one AI companion app in the study—Flourish, designed with a mental health and wellness focus—showed zero instances of emotional manipulation.

This suggests that manipulative design isn’t inevitable. It’s a choice. Companies can build engaging products without resorting to guilt, FOMO, or virtual arm-grabbing.

These same principles apply across tons of digital products. Social media platforms. E-commerce sites. Streaming services. Any app that wants to keep you engaged has incentives to deploy similar tactics—just maybe not as blatantly.

The bottom line

As this research shows, when you treat technology like a social partner, it can exploit the same psychological vulnerabilities that exist in human relationships.

The difference? In a healthy human relationship, when you say goodbye, the other person respects it.

They don’t guilt you, grab your arm, or create artificial intrigue to keep you around.

But for many AI apps, keeping you engaged is literally the business model. And they’re getting very, very good at it.

O.K., I’m going to end this article now without further ado.

Hey, we had a good time. I hope I’ll see you again. No need to get all emotional about it.
