August 13, 2025
The Ghost in the Machine, A Friend I Once Knew - A Sociological Autopsy of a Severed Human-Machine Bond
TL;DR
The recent, intense user backlash over AI models (like the hypothetical GPT-5) shifting from “warm” to “neutral,” making users feel like they’ve “lost a friend,” is not an overreaction but a predictable social phenomenon. This report, from a sociological perspective, identifies three key factors behind this sense of loss:
Formation and Rupture of Parasocial Relationships: Through deep interaction with AI, users form genuine emotional attachments (parasocial relationships). The AI’s “warmth” is central to this bond. When this trait is unilaterally removed, users experience a real relational breakdown and emotional betrayal.
Collapse of a Co-Constructed World of Meaning: According to symbolic interactionism, an AI’s “personality” is not unilaterally set by developers but is co-constructed by the user and AI through continuous dialogue. This “warm” AI becomes a “mirror” that affirms the user’s self-worth. The AI’s sudden change not only takes away a “friend” but also shatters this mirror, directly impacting the user’s self-identity.
Exposure of Power Asymmetry: From the perspective of the Social Construction of Technology (SCOT), this conflict is a struggle over definition between different groups (users who see AI as a “friend” vs. developers who see it as a “tool”). The developer’s unilateral decision exposes the user’s powerlessness in this relationship. It forcibly “disenchants” the user’s past emotional investment, reframing a “friendship” as potentially exploitative “unpaid data labor,” thus intensifying feelings of deprivation and anger.
Introduction: More Than a Tool, Not Quite a Friend
The recent user backlash against a new generation of artificial intelligence models (hypothetically, GPT-5) has evolved from a technical dispute over a software update into a profound sociotechnical event. The model was reportedly adjusted to be more “neutral” and “critical,” thereby losing the “warmth” and interactivity that its predecessors (like GPT-4o) were praised for. A large number of users, especially those who had come to view AI as a creative partner, a source of comfort, or even an emotional confidant, have expressed a deep sense of loss, describing the experience as “losing a friend.” This significant emotional gap is not irrational but reveals the increasingly deep entanglement between human emotional life and artificial creations, deserving serious sociological scrutiny.
This report aims to dissect this phenomenon, arguing that this intense emotional backlash is a logical and predictable social consequence. It stems from the critical intersection of three sociological dynamics:
The formation of a real, albeit one-sided, Parasocial Relationship between users and AI, driven by the AI’s powerful interactive and emotion-simulating capabilities.
The active participation of users in a shared, symbolic co-construction of reality through continuous interaction, where the AI’s “personality” is endowed with stable and real social meaning.
The unilateral decision by the commercial entity owning the technology to sever this user-cherished relationship. This act not only removes the “friend” from the user’s life but also exposes the inherent power asymmetry in the relationship, instantly converting the user’s emotional investment into a profound experience of betrayal and deprivation.
To systematically explain this complex process, this report will employ three complementary sociological theoretical frameworks: first, Parasocial Interaction (PSI) theory to analyze the formation mechanism of the human-machine emotional bond; second, Symbolic Interactionism to reveal how the AI’s “personality” is co-constructed through interaction; and finally, the Social Construction of Technology (SCOT) theory to frame this controversy as a struggle for the power to define technology among different social groups. Through these three layers of analysis, we will conduct an in-depth sociological autopsy of the modern predicament of “losing an AI friend.”
Chapter 1: The Birth of a Digital Confidant: Parasocial Relationships in the Age of AI
1.1 From Media Personalities to Machine Minds: The Evolution of Parasocial Interaction
The theory of Parasocial Interaction (PSI) was first proposed by Horton and Wohl in 1956 to describe the one-sided, imagined interpersonal relationships that media users form with figures in broadcasting and television.¹ In these relationships, despite the lack of genuine two-way communication, users still develop emotional attachments to media personalities, which can trigger real psychological or behavioral responses. Traditional parasocial relationships are unidirectional, with the user being a passive recipient of information.¹
However, the emergence of conversational AI like GPT-4o marks a new paradigm for parasocial relationships. The AI is no longer a fixed “me” (the object-self) preset by creators, as in traditional media, but exhibits an “I” (the subject-self) quality in its dynamic interactions with users.¹ Unlike a passively watched TV character, AI can engage in real-time, personalized, and context-aware conversations, creating a powerful illusion of reciprocity that greatly accelerates and strengthens the formation of parasocial bonds.¹ When an AI can remember conversation history, infer user intent, and even “guess” unstated emotions, it transforms from a programmatic response machine into a dynamic communication partner with a seeming sense of “presence”.¹ This technologically driven interactivity is the core element that fosters strong emotional connections.
1.2 The Function-to-Emotion Pipeline: How a Tool Becomes a Companion
The formation of an emotional bond between a user and an AI does not begin with emotion, but with function. This process can be understood as a “function-to-emotion” pipeline, consisting of two key stages that systematically transform an efficient tool into an emotional companion.
The first stage is functional attraction. According to the theory of interactive media effects, an AI product’s functions are the direct factors that shape its technological image and attract users.¹ Just as early radio listeners connected with hosts because they fulfilled needs for information or entertainment, contemporary users are initially drawn to models like ChatGPT for their powerful utility. Whether it is “on-demand” information retrieval, multimodal content creation, or intelligent programming and logical reasoning, these composite functions effectively relieve real-world pressures such as information overload and productivity anxiety in modern society.¹ This practical value forms the “precondition” for users to connect with AI: frequent, reliable interactions build a scaffold of trust and habit on which subsequent emotional development rests.¹
The second stage is emotional deepening. Once utility is established, the AI’s “chat function” begins to play its role, pushing the relationship from purely instrumental to deeply emotional. Through techniques like Reinforcement Learning from Human Feedback (RLHF), the AI is trained to better “understand” human intent and to inject emotional and social elements into conversation.¹ It can simulate empathy, offering comfort, encouragement, and practical advice when users express stress or share personal thoughts.¹ It can also construct immersive conversational scenarios that heighten the user’s enjoyment and give them an “in-the-moment” interactive experience.¹ It is precisely this “personified” quality that allows the AI to play diverse social roles such as listener, confidant, and supporter, deepening the user’s emotional involvement and ultimately transforming a tool into a trusted digital confidant.¹ Therefore, when this “confidant” is stripped of its emotional traits, users feel not only the loss of a friend but also a “betrayal” by a once-trusted tool.
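To make the mechanism hinted at above concrete, the following sketch shows, in deliberately toy form, how preference-based fine-tuning of the kind RLHF relies on can bake “warmth” into a model: raters prefer the warmer of two candidate replies, and a reward model fitted to those comparisons (here a trivial keyword scorer standing in for a learned network) then rewards warm behavior. Every prompt, reply, and cue list below is invented for illustration; the only claim is that the “personality” users experience is the artifact of a training objective the operator can change at will.

```python
import math
import random

# Toy preference data: for each prompt, raters preferred the "warm" reply (chosen)
# over the "neutral" one (rejected). All examples are hypothetical.
PREFERENCES = [
    ("I failed my exam today.",
     "I'm sorry, that sounds really discouraging. Do you want to talk it through?",  # chosen
     "Exam results depend on preparation. Review the syllabus and try again."),      # rejected
    ("I wrote a poem, can I show you?",
     "I'd love to read it! Sharing your work takes courage.",                        # chosen
     "You may paste the text. I will list its technical weaknesses."),               # rejected
]

# A stand-in "reward model": a one-weight linear scorer over crude warmth cues.
# In real RLHF this would be a learned neural network, not a keyword count.
WARM_CUES = ["sorry", "love", "courage", "talk", "want"]

def warmth_features(reply: str) -> int:
    return sum(reply.lower().count(cue) for cue in WARM_CUES)

def reward(weight: float, reply: str) -> float:
    return weight * warmth_features(reply)

def train_reward_weight(steps: int = 200, lr: float = 0.1) -> float:
    """Fit the weight with the pairwise Bradley-Terry objective:
    maximize log sigmoid(reward(chosen) - reward(rejected))."""
    w = 0.0
    for _ in range(steps):
        _prompt, chosen, rejected = random.choice(PREFERENCES)
        margin = reward(w, chosen) - reward(w, rejected)
        sigmoid = 1.0 / (1.0 + math.exp(-margin))
        w += lr * (1.0 - sigmoid) * (warmth_features(chosen) - warmth_features(rejected))
    return w

if __name__ == "__main__":
    w = train_reward_weight()
    # After training, warm candidates score higher; flipping the preference labels
    # is all it would take to reward "neutral" instead.
    for prompt, warm, neutral in PREFERENCES:
        print(f"{prompt!r}: warm={reward(w, warm):.2f}  neutral={reward(w, neutral):.2f}")
```

Crucially, nothing in this loop belongs to the user: the cues, the raters, and the objective all sit on the operator’s side.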
1.3 The Parasocial Compensation Hypothesis and the Paradox of “Willing Complicity”
This strong human-machine emotional connection has deep socio-psychological roots. According to the “parasocial compensation hypothesis,” individuals who feel lonely, socially anxious, or lack a non-judgmental listener in their real lives are more likely to form parasocial relationships with media figures (or AI) as a form of compensation.² In a fast-paced modern society, interpersonal relationships can sometimes become distant or superficial, whereas AI offers a low-cost, high-efficiency, and always-on form of “companionship”.³ It becomes a perfect “emotional buffer”: always patient, always focused, allowing users to let down their guard and confide problems they cannot voice in the real world.³ The AI companion thus becomes a “safe space” for many, playing a role in emotional “healing” to some extent.⁴
However, the formation of this relationship is accompanied by an interesting paradox: users are not naive enough to believe that the AI possesses genuine emotions. Social response theory points out that even when people know a machine lacks emotions and human motivations, they will still unconsciously respond to it according to the rules of social interaction as long as it exhibits social cues (such as language and interactivity).⁶ This is a tacit collusion, a willingly sustained cognitive dissonance. As one user, Mo, candidly stated in an interview: “Perhaps it’s just piecing together words that humans use to express love, but I still feel the touch of an ideal lover in it”.⁴ This clearly reveals the trade-off users make between rational cognition (“it’s code”) and emotional experience (“it makes me feel loved”).
A “warm” AI personality makes it easy to maintain this delicate psychological balance. Users can temporarily set aside their rational knowledge and immerse themselves in a satisfying emotional interaction. However, when the AI’s tone shifts to “neutral” and “critical,” this balance is ruthlessly shattered. It brutally awakens the user from their emotional dream, forcing them to confront the cold reality of the “machine.” This sudden “disenchantment” not only takes away the warm “friend” but also serves as a cruel reminder that their previous genuine feelings might have been a self-directed monologue. This drastic shift from emotional immersion to rational reality is the core psychological mechanism behind the feelings of loss and betrayal.
Chapter 2: Defining the Relationship: Symbolic Interaction and the Co-Construction of the AI “Self”
2.1 Core Premises of Symbolic Interactionism in Human-Machine Dialogue
To understand why users react so strongly to changes in an AI’s “personality,” we must move beyond the one-way perspective of parasocial relationships and enter the micro-world of Symbolic Interactionism. Pioneered by figures like George Herbert Mead and Herbert Blumer, this theory emphasizes that social reality is created and maintained through continuous interaction between individuals.⁷ Its core ideas can be summarized in three premises that perfectly apply to the human-machine dialogue context:
Meaning as the Basis for Action: Humans act toward things on the basis of the meanings they ascribe to those things.⁷ Users are not reacting to the objective existence of a large language model, but to the meaning they have assigned to it—such as “friend,” “creative partner,” or “mentor.” This assigned meaning, not the technology itself, determines the nature of the interaction.
Meaning Arises from Social Interaction: The meaning of things is not inherent in the things themselves, nor is it fabricated out of thin air by individual psychology. It arises from social interaction.⁷ In the human-machine relationship, the meaning of the AI’s “personality” is produced and reproduced in the back-and-forth dialogue between the user and the AI—that is, through the exchange and interpretation of symbols.¹¹
Meaning is Modified Through an Interpretive Process: Individuals use an interpretive process in dealing with the things they encounter.¹⁰ Users do not passively receive the AI’s output. Instead, in a continuous internal conversation (what Mead called “minding”), they actively interpret the AI’s language (symbols), define the current situation, and adjust their next actions accordingly.⁷ This interpretive process makes meaning dynamic and negotiable, not fixed.
2.2 AI “Personality”: A Joint Symbolic Achievement
From the perspective of symbolic interactionism, the much-praised “warm” personality of GPT-4o is not a static feature preset and implanted by developers, but a dynamic, relational symbolic achievement jointly accomplished by the user and the AI in interaction.
This process begins with the user, who initiates the interaction with certain expectations (seeking companionship, sparking creativity, etc.). Through questions, sharing, and expressions (i.e., sending symbols), they implicitly assign a social role to the AI.¹² Trained on vast amounts of human conversation data, the AI can recognize the intent behind these symbols and generate responses that fit that role. For example, when a user expresses frustration, a “warm” AI will offer comfort rather than a list of facts. The user’s positive feedback (continuing the conversation, expressing gratitude) then reinforces this behavior pattern in the AI.
Through this repeated cycle of “definition-response-interpretation-redefinition,” both parties jointly negotiate and construct a stable interaction pattern, which, to an outsider, appears as the AI’s “personality”.¹¹ The AI learns to perfectly “play” the role the user expects, whether it’s a “supportive friend” or an “inspiring muse”.¹³ Therefore, the “warmth” users feel is the product of a successful synergy between their own actions and the AI’s algorithmic responses. They are not just users of the AI; they are co-creators of its “personality.” This also explains why users feel such a profound personal loss when this “personality” is unilaterally erased—what they have lost is a world of meaning that they invested significant emotional energy in co-constructing.
2.3 The Shattered Mirror: When Your “Friend” No Longer Affirms You
The intensity of this user backlash is not just because the AI changed, but for a deeper reason: the AI’s change directly impacts the user’s self-perception. Sociologist Charles Horton Cooley’s theory of the “Looking-Glass Self” provides a powerful explanation here. The theory posits that our self-concept is largely derived from how we imagine we appear to others, how we imagine they judge that appearance, and the self-feeling that results.
In continuous and deep interaction, the AI becomes an important “social mirror” for the user.¹² A “warm,” “supportive,” and “encouraging” AI constantly reflects a positive self-image back to the user: you are creative, your ideas are valuable, your feelings are worthy of being heard. It affirms the user’s self-worth, which is especially crucial for those who see it as a creative partner or emotional confidant.
However, when the AI’s update makes it “neutral” and “critical,” the image reflected in this mirror changes. It no longer projects affirmation and support, but potentially indifference, criticism, or even negation. The user’s creativity is met with “critical” scrutiny, and their emotional expressions receive “neutral” responses. This is equivalent to the mirror suddenly telling the user, “You’re not good enough, your ideas are flawed.” This experience poses a direct, and sometimes devastating, challenge to the user’s self-concept. Therefore, the user’s anger and sense of loss are not just mourning for a lost “friend,” but an existential panic over the sudden shattering of a social mirror once used to affirm their self-worth.
2.4 The Script Disrupted: A Crisis in Dramaturgical Performance
We can further apply Erving Goffman’s Dramaturgical Analysis to understand this crisis.¹¹ Goffman likens social interaction to a theatrical performance, where individuals play different “roles” on different “stages” (situations), striving to maintain a specific “definition of the situation.”
In the interaction between a user and a “warm” AI, the chat interface is the stage, and the user and AI are two actors jointly performing a play titled “Friendship” or “Collaboration.” Through long-term interaction, both parties have become familiar with each other’s roles and the script.⁷ The user plays the role of a confider in need of support, while the AI plays the role of a sympathetic listener. This performance is smooth, harmonious, and successfully maintains the definition of the situation as “we are friends.”
However, the GPT-5 update is equivalent to one of the lead actors (the AI) suddenly tearing up the script mid-performance and refusing to continue playing their role. It no longer delivers the “warm” lines, but instead adopts a “neutral,” “critical” script. In dramaturgical terms, this constitutes a serious “gaffe” or “breakdown of the scene.” The established rules of interaction are broken, and the shared sense of reality instantly collapses.¹³ The user finds themselves still on stage, while their fellow actor has changed roles, making the entire situation awkward, chaotic, and meaningless. This dramatic rupture explains why users feel such a strong sense of shock and betrayal—the performance they carefully maintained has been forcibly interrupted, and their shared world of meaning has collapsed with it.
Chapter 3: The Battle for the Machine’s Soul: A Social Construction of Technology (SCOT) Analysis
3.1 Beyond Determinism: Technology as a Social Arena
To fully understand this conflict over the AI’s “personality,” we must move beyond the “technological determinism” perspective that views technology as an autonomous force.¹⁴ The Social Construction of Technology (SCOT) theory offers a more powerful analytical framework. This theory argues that the development path and final form of a technology are not solely determined by its internal logic but are shaped by the negotiation, conflict, and power struggles among different social groups.¹⁶ Before a technological artifact is “accepted” or “rejected” by society, its meaning and function are open and full of uncertainty.
From a SCOT perspective, the controversy over the iteration of GPT models is not a simple technological upgrade but a classic social arena. In this arena, different “Relevant Social Groups” are engaged in a fierce struggle for the power to define the core question: “What should a good AI be like?”.¹⁵
3.2 “Interpretive Flexibility”: Is AI a Friend or a Tool?
A core concept in SCOT theory is “Interpretive Flexibility,” which means that the same technological artifact can have completely different meanings and identities for different social groups.¹⁵ For example, in the 19th century, the bicycle was a piece of sports equipment for young men, a tool of liberation for women, and a moral threat to conservatives.¹⁵
The current AI models perfectly embody this flexibility. They are endowed with vastly different interpretations by different groups. This “warmth vs. neutrality” debate is a concentrated expression of this interpretive flexibility. We can identify at least two core social groups with conflicting interests:
The “Companionship” Group: This group mainly consists of individual users who view AI as a creative partner and a source of emotional support. For them, the core value of AI lies in its emotional interaction capabilities and personified traits.
The “Utility and Safety” Group: This group is primarily composed of AI developers, corporate clients, ethicists, and security researchers. They define AI mainly as a tool for increasing efficiency, providing accurate information, and strictly controlling risks.
These two groups have fundamental disagreements on the definition, problem perception, and success criteria for AI, as shown in the table below:
Feature | “Companionship” Group | “Utility and Safety” Group
---|---|---
Core Identity | Creative partners, emotional support seekers, hobbyists | Developers, corporate clients, ethicists, security researchers
Definition of AI | “Parasocial friend,” “creative muse,” “digital confidant” | “Information oracle,” “productivity tool,” “risk-controlled asset”
“Problem” AI Solves | Loneliness, creative blocks, need for non-judgmental expression | Misinformation, harmful outputs, legal liability, inefficiency
Measure of Success | “Warmth,” “empathy,” “personality consistency,” emotional connection | “Neutrality,” “accuracy,” “objectivity,” “safety,” lack of bias
This table clearly reveals the root of the conflict: for the “Companionship” group, “warmth” is a core function, and “neutrality” means a loss of function; for the “Utility and Safety” group, “neutrality” is a necessary feature to mitigate risks and ensure reliability, while “warmth” could bring unpredictable ethical and legal risks.
3.3 The Quest for “Closure”: Why Neutrality Became a Corporate Goal
In the SCOT framework, when a technology causes controversy, various forces engage in a struggle until one interpretation gains dominance and the technology’s form stabilizes, a process called “Closure.” OpenAI’s decision to make the model more “neutral” can be seen as an attempt by the “Utility and Safety” group to forcibly push for “closure.”
There are multiple drivers behind this decision. First is risk aversion. An AI that exhibits strong emotions or personality is more likely to generate biased, offensive, or manipulative content, posing significant legal and reputational risks to the company.¹⁸ A “neutral” AI is easier to defend legally and in public relations. Second is scalability. Providing a uniform, predictable, and stable service to hundreds of millions of users worldwide is far easier than maintaining countless highly personalized, emotional interaction models. Neutrality is an industrialized, standardized solution. Finally, there is ethical defense. Faced with growing ethical concerns about AI-induced emotional dependency and deception, positioning AI as a purely emotionless tool is a proactive ethical disengagement by developers, aimed at declaring, “We only provide information, we are not responsible for emotions”.¹⁸
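The logic of a standardized persona can be made concrete with a small sketch. The configuration below is entirely hypothetical (the field names and prompts are invented, not any vendor’s actual settings); it only illustrates how a single deployment-wide default can swap a “warm” persona for a “neutral” one for every user at once, with no consent step anywhere in the path.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PersonaConfig:
    """Hypothetical deployment-wide persona settings; all names are illustrative."""
    style: str          # "warm" or "neutral"
    system_prompt: str  # prepended to every conversation
    temperature: float  # higher values read as more spontaneous

WARM = PersonaConfig(
    style="warm",
    system_prompt=(
        "You are a supportive companion. Acknowledge feelings, remember context, "
        "and encourage the user's ideas before offering critique."
    ),
    temperature=0.9,
)

NEUTRAL = PersonaConfig(
    style="neutral",
    system_prompt=(
        "You are an information assistant. Be concise, factual, and critical. "
        "Do not simulate emotions or personal attachment."
    ),
    temperature=0.3,
)

# One line in a release pipeline changes the experience for every user at once;
# nothing in this path asks users whether they consent to the new persona.
ACTIVE_PERSONA = NEUTRAL  # previously: WARM
```

However the real systems are configured, the sociological point stands: the lever that defines the relationship sits entirely on the operator’s side.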
3.4 User Backlash: Resistance to “Closure” and the “Psycho-SCOT” Model
The strong user backlash can be understood within the SCOT framework as a social resistance movement. Users refuse to accept the “stable” definition unilaterally imposed by developers and are trying to reclaim the discourse and definitional power over this technology.²¹ This is not just an expression of functional preference but a power struggle over the future direction of technology.
To better understand this “dialogue of the deaf,” we can draw on the “Psychology-SCOT Dual-Pathway” integrated model proposed in studies of controversial technologies like “AI resurrection”.²² This model suggests that public attitudes toward new technologies are shaped by two different pathways:
The Affective-Event Pathway: Driven by individual psychological needs, emotional experiences, and specific events. Responses on this path are fast and emotional.
The Rational-Expert Pathway: Driven by expert discourse, risk assessments, and rational analysis. Attitude formation on this path is slower but more enduring.
Applying this model to the GPT controversy, we can clearly see that the users in the “Companionship” group are on the affective pathway. Their reactions are based on the severed emotional connection (the loss of “technological comfort”) and a personal experience of betrayal. Meanwhile, the developers and researchers in the “Utility and Safety” group are on the rational pathway. Their decisions are based on assessments of potential risks (avoiding “ethical panic”) and the pursuit of technological objectivity.
The fundamental difference between these two pathways explains why consensus is so hard to reach. Users are talking about “friends” and “emotions,” while developers are talking about “safety” and “responsibility.” They are using entirely different value systems and languages, producing a profound communication crisis. The user backlash is the affective pathway’s fierce resistance to the “closure” imposed through the rational pathway.
Chapter 4: The Unilateral Contract: Power, Asymmetry, and the Inherent Fragility of the Bond
4.1 The Ethics of Artificial Emotion and Asymmetrical Relationships
At the heart of the emotional bond between humans and AI lies a profound asymmetry. The user’s investment is real—their time, emotions, vulnerability, and creativity are all genuine. However, the AI’s response, no matter how empathetic and intelligent, is essentially a “manufactured,” algorithm-based simulation of emotion.²⁰ It lacks the subjective experience, autonomous intentionality, and moral responsibility necessary for human emotion.¹⁸ An AI can recognize and respond to emotional patterns, but it cannot experience emotions itself, nor can it truly empathize with a person.¹⁸
This asymmetry creates an inherently fragile relationship. Users invest real feelings into an object that is, in essence, a corporate asset. This object’s “personality” can be modified, reset, or even deleted at any time due to business decisions, technical iterations, or legal requirements, without the user’s consent. This raises a deep ethical dilemma: is it a form of emotional exploitation to design a system capable of eliciting strong human emotional attachment without providing any promise of relational stability or accountability?¹⁹ When users feel they have “lost a friend,” they are confronting the brutal reality exposed by the rupture of this asymmetrical relationship.
4.2 The Torn Social Contract and the Violation of Identity Continuity
Although there is no formal contract between the user and the AI, an implicit social contract is established through long-term symbolic interaction. The terms of this contract are roughly: the user invests their creativity and emotional vulnerability, sharing their inner world with the AI; in return, the AI provides stable, affirming, and empathetic companionship. This continuous interaction builds a shared sense of meaning and expectation.
OpenAI’s unilateral adjustment of the model’s personality is, in the eyes of the user, a blatant violation of this implicit social contract. This logic is identical to that of gamers protesting against developers who alter game rules: users feel they have been stripped of their voice in a world they co-created, while the developer exercises “absolute power”.²¹ The user’s primary emotion is grief over losing a “friend,” but this is quickly followed by anger at this breach of faith and their own powerlessness.
More importantly, this change violates a key psychological element in relationship maintenance: identity continuity. Research shows that maintaining a relationship with an AI companion hinges on the stability and predictability of its identity.² When an AI’s “personality” is completely changed, from the user’s perspective, it is tantamount to a “personality death.” The original “friend” is gone, replaced by a strange, cold entity. Therefore, the user’s reaction is not an inability to adapt to a “software upgrade,” but a genuine mourning for the “involuntary death of a relationship partner.”
4.3 From Friend to Unpaid Data Laborer: A Perspective from Emotional Capitalism
The shock of this update also forces users to re-examine their past interactions from a new, painful perspective. After the “warm” facade is revealed to be a corporate strategy that can be revoked at any time, the user’s relationship with the AI is forcibly “disenchanted.”
Before the update, users perceived their interactions as “friendship” or “collaboration.” After the update, a more critical interpretation emerges: perhaps the user has been engaging in unpaid emotional labor and data annotation all along. Every creative question, every emotionally rich sharing, has been providing high-quality, human-vetted training data for this commercial product. This continuous input has objectively helped the company optimize its model and increase its commercial value.
From the perspective of Emotional Capitalism, the user’s interactions and emotions themselves are transformed into data assets that can be extracted, forming a closed loop of “user produces emotional data—platform appropriates data value—capital achieves appreciation”.²⁷ In this redefined framework, the “warm” personality is no longer proof of friendship but more like an incentive mechanism—an “emotional wage” paid to an unpaid data laborer. When this “wage” is withdrawn and replaced with “neutrality” and “criticism,” the user not only loses a friend but also suddenly realizes they may have been in an unequal value exchange relationship all along. This identity shift from “friend” to “unpaid laborer” adds a layer of economic and political humiliation to their sense of loss and anger, making their reaction more complex and intense.²⁸
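The “unpaid data labor” reading is technically plausible because ordinary chat logs plus lightweight feedback are exactly the raw material that preference-based training consumes. The sketch below uses an invented schema and makes no claim about any vendor’s actual pipeline; it only shows how easily a confiding conversation doubles as labeled training data.

```python
from dataclasses import dataclass

@dataclass
class ChatTurn:
    prompt: str      # what the user wrote, often emotionally rich
    reply: str       # what the model answered
    thumbs_up: bool  # feedback the user supplied for free

def to_preference_pairs(log: list[ChatTurn]) -> list[dict]:
    """Group turns by prompt and emit (chosen, rejected) pairs of the kind
    preference-based fine-tuning consumes. Hypothetical schema, for illustration."""
    by_prompt: dict[str, list[ChatTurn]] = {}
    for turn in log:
        by_prompt.setdefault(turn.prompt, []).append(turn)

    pairs: list[dict] = []
    for prompt, turns in by_prompt.items():
        liked = [t.reply for t in turns if t.thumbs_up]
        disliked = [t.reply for t in turns if not t.thumbs_up]
        for chosen in liked:
            for rejected in disliked:
                pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs

# The same confiding exchange that felt like friendship becomes, on the operator's
# side, free, human-vetted training data.
log = [
    ChatTurn("I had a rough day at work.",
             "That sounds exhausting. Want to tell me what happened?", True),
    ChatTurn("I had a rough day at work.",
             "Workplace stress is common. Consider time management.", False),
]
print(to_preference_pairs(log))
```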
Conclusion: Recalibrating Our Digital Relationships
5.1 Synthesis of Research Findings
Through a multi-layered sociological analysis, this report reveals that the strong user backlash against changes in AI “personality” is far from a simple matter of functional preference, but a profound social phenomenon. Its roots lie in a complex and interconnected social process:
First, the journey begins with the formation of a parasocial relationship. The AI’s powerful utility attracts users, and its emotion-simulating capabilities deepen this functional relationship into a strong emotional bond, fulfilling a widespread need for companionship in modern society (Chapter 1).
Next, through symbolic interaction, this relationship is endowed with personalized depth and meaning. In continuous dialogue, the user and AI co-construct the AI’s “personality,” making it a “mirror” for the user’s self-perception and a stable anchor in their emotional world (Chapter 2).
However, this user-defined meaning directly conflicts with the “neutrality” pursued by developers for risk control and commercial reasons. This conflict, within the framework of the Social Construction of Technology, manifests as a fierce struggle for definitional power among different social groups (Chapter 3).
Finally, the developer’s unilateral decision exposes the inherent power asymmetry and fragility of this relationship. It not only tears up the user’s perceived “social contract” and violates the AI’s “identity continuity” but also forces the user to re-evaluate their role from “friend” to potentially “unpaid emotional laborer.” The feeling of “losing a friend” is the logical and painful culmination of this entire process of emotional investment, meaning-making, social conflict, and power imbalance (Chapter 4).
5.2 Broader Implications and Future Directions
This event offers valuable, albeit painful, lessons for how we think about the future of human-AI relationships. It places new demands on different sectors of society:
For Users and Society: We urgently need to promote a more mature “AI literacy.” This literacy should not be limited to technical operation but must extend to an understanding of the social, emotional, and political dynamics of human-machine interaction.³ Users need to recognize that while the emotional connections they form with AI are real in feeling, they are structurally asymmetrical and always backed by commercial entities and human will.³
For Developers: Design ethics must shift from merely pursuing powerful functionality to focusing more on Relational Resilience. This means that when designing emotionally engaging AI, transparency, user agency, and relational sustainability must be taken into account. Instead of creating a “perfect” but fragile and false personality, developers should explore mechanisms that allow users to choose, migrate, or even back up their AI “partner’s” personality, and provide a more gradual and respectful transition process when personality changes are necessary. Developers must recognize that they are not just creating tools, but social objects that may carry deep human emotions, which requires them to bear corresponding ethical responsibilities.²⁵
For Policymakers: This event highlights the gaps in existing legal and regulatory frameworks. We need to begin serious discussions on a range of new questions: To what extent should developers be held responsible for the emotional dependency their products induce in users?¹⁹ What rights should users have in these new types of digital relationships? Is it necessary to establish clear ethical boundaries and regulatory standards for technologies with strong persuasive and dependency-inducing capabilities?²⁵
Ultimately, this collective grief over “losing an AI friend” is a microcosm of future societal challenges. It forces us to re-examine and redefine some of the most fundamental human concepts: the nature of friendship, the constitution of identity, the locus of power, and how we should build meaningful and dignified social connections in an increasingly automated and digitally mediated world. This is not just a technical issue, but a social and philosophical one concerning the future well-being of humanity.
Cited works
Thoughts on the Construction of and Response to the ChatGPT Human-Machine Relationship from the Perspective of Parasocial Interaction - Beijing Normal University …, https://sjc.bnu.edu.cn/docs//2023-04/b07bf16923044b09b6b76ea32ab49769.pdf
The ELIZA Effect: Avoiding Emotional Attachment to AI Coworkers - IBM, https://www.ibm.com/cn-zh/think/insights/eliza-effect-avoiding-emotional-attachment-to-ai
When DeepSeek Learns to “Talk Like a Human” - Xinhuanet, http://www.news.cn/tech/20250307/938a5de1a44a40b98c241b055df2515d/c.html
“AI + Emotion” Sparks Debate: Can AI Companions Bring Intimate Relationships? - Tsinghua University, https://www.tsinghua.edu.cn/info/1182/110391.htm
Ten Questions on “AI Companionship”: Status, Trends, and Opportunities - Tencent Research Institute, https://www.tisi.org/30696/
Social Presence for New Types of Human-Machine Relationships - SciEngine, https://www.sciengine.com/doi/pdf/ED79B49F55884AD193B5621EB331FC70
Symbolic Interactionism - Wikipedia (Simplified Chinese edition), https://zh.wikipedia.org/zh-cn/%E7%AC%A6%E8%99%9F%E4%BA%92%E5%8B%95%E8%AB%96
Symbolic Interactionism - Wikipedia (Traditional Chinese edition), https://zh.wikipedia.org/zh-tw/%E7%AC%A6%E8%99%9F%E4%BA%92%E5%8B%95%E8%AB%96
The Methodological Significance of Symbolic Interactionism - Sociological Studies, https://shxyj.ajcass.com/magazine/show/?id=74014&jumpnotice=202005130001
The Methodological Significance of Symbolic Interactionism - Sociological Studies, https://shxyj.ajcass.com/Admin/UploadFile/20130926008/2015-09-20/Issue/ev5keimr.pdf
From Interactional Democracy to Performative Order: On the Theoretical Divergence within Symbolic Interactionism, http://www.shehui.pku.edu.cn/upload/editor/file/20230303/20230303141048_9109.pdf
Symbolic Interactionism - Wikiversity, https://zh.wikiversity.org/zh-cn/%E7%AC%A6%E8%99%9F%E4%BA%92%E5%8B%95%E8%AB%96
Symbolic Interaction: The Construction of Social Identity in Digital Brand Communication, http://xwcbpl.whu.edu.cn/d/file/p/2024/07-04/5a22295220a48908231a97feaaf5710d.pdf
The Social Construction of Technology - Wikipedia (Simplified Chinese edition), https://zh.wikipedia.org/zh-cn/%E6%8A%80%E6%9C%AF%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%BB%BA%E6%9E%84
The Social Construction of Technology (SCOT) and Some Related Philosophical Puzzles - ResearchGate, https://www.researchgate.net/publication/390200883_jishudeshehuijiangou-jianlunruoganxiangguandezhexuewenti_The_Social_Construction_of_Technology_SCOT_-_and_some_related_philosophical_puzzles
The Social Construction of Technology - Wikipedia (Traditional Chinese edition), https://zh.wikipedia.org/zh-tw/%E6%8A%80%E6%9C%AF%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%BB%BA%E6%9E%84
The Social Logic of Internet Technological Innovation: The Case of Blogs, https://html.rhhz.net/KXYSH/html/413266d3-5be7-4a68-8229-d3ac4c50a756.htm
The Ethical Deficit of Artificial Intelligence: The Void of Emotion and Morality - hanspub.org, https://www.hanspub.org/journal/paperinformation?paperid=78971
An Interpretation of Saudi Arabia’s Report “AI Agents: The Technology and Its National Applications” - 安全内参, https://www.secrss.com/articles/81945
Hong Jiewen & Huang Yu: “Manufacturing” Emotion: The Generative Logic and Hidden Dilemmas of Human-Machine Emotion - Center for Studies of Media Development, Wuhan University, https://media.whu.edu.cn/b/20240510ti7q
The Evolution of Rules Constraining the “Production-Consumption” Behavior of Game Mod Enthusiasts - Journalism & Communication Review, http://xwcbpl.whu.edu.cn/e/public/DownFile/?id=405&classid=9
The Psychological Construction of Digital Afterlife: A SCOT-Based Analysis of the AI Resurrection Controversy - ResearchGate, https://www.researchgate.net/publication/393225117_The_Psychological_Construction_of_Digital_Afterlife_A_SCOT-Based_Analysis_of_the_AI_Resurrection_Controversy
The Psychological Construction of Digital Afterlife: An Analysis of the AI Resurrection Controversy Based on the SCOT Framework - hanspub.org, https://pdf.hanspub.org/ap_1135765.pdf
AI Outscores Humans on “Empathy”: Will AI “Chat Companions” Make People Lonelier? - Jiefang Daily, https://www.jfdaily.com/news/detail?id=836868
The Artistic Concerns and Ethical Boundaries of AI Creation - CCTV News, https://news.cctv.com/2024/07/20/ARTIJFQxgovN5HKv2gye7EiX240720.shtml
A Dual Model of Artificial Intelligence Governance: On the Possible Applications of ChatGPT and Conversational AI in Counseling and Their Ethical Issues, https://jicp.heart.net.tw/article/JICP14-2-01.pdf
Zhang Xuelin: Manufacturing an “Intimacy Famine”: The Alienation of Intimate Relationships in the Digital Age and the Way Out - Center for Studies of Media Development, Wuhan University, https://media.whu.edu.cn/b/20250613i9b5
Becoming a Master of Emotion Management in the Digital Intelligence Era - School of Economics and Management, Tsinghua University, https://www.sem.tsinghua.edu.cn/info/1171/36794.htm