culture
Jan 22, 2026
Models of Human-AI Relationships
Diego Scanlon

Introduction
While cliché, I must state that relationships between humans and AI systems are becoming increasingly widespread. The phenomenon I find less explored (and more interesting) is understanding why we take issue with such relationships, particularly social ones (friendships, romance). I believe it to be a combination of social norms (what we’ve been conditioned to accept) and morals (how we should treat others). These two concepts are highly intertwined. However, it’s necessary to distinguish their influences on our views of human-AI relationships – a morally incorrect situation is very different from a socially incorrect one. As history has taught us, we should be wary of conflating the two.
This paper focuses less on specific examples of flawed social and moral views of AI-human relationships and more on how we’ve constructed the models of relationships we use to make those judgments. The argument could have benefited from a discussion of such practical examples to demonstrate the importance of having correct models, but I hope the reader will use my proposed model to consider those examples themselves. That said, to critique the models of relationships we use to form our social and moral views of AI-human relationships, I discuss three main claims:
Based in Empiricism – using observations of the natural world to construct a model of relationships.
Flawed Value Derivation – from our empiricist models, we have created assumptions about the dynamics necessary in relationships.
Self-deception – claiming one has an emotional relationship with AI would require self-deception, as the AI cannot be conscious.
With these claims in mind, let us begin!
The Model
It seems the model we use to define relationships includes the specific dynamics that occur, the mechanisms behind those dynamics, and the components inside those mechanisms. An example that clarifies this system, and the one I will critique in this paper, is: everyone derives value from a relationship (dynamic) by recognizing that someone outside the self exists (mechanism), which is only possible through consciousness and reciprocity (components). That is, all relationships must have: a) at least two conscious agents (implying a dyad) and reciprocal participation from all agents, which enables b) recognizing that someone else is consciously engaging in the relationship, allowing c) agents to derive value from the relationship.
This system becomes our model, and therefore acts as a validity criterion for all relationships: relationships that don’t satisfy these criteria (that don’t match the model) are not relationships. However, a flawed understanding of any part of this interconnected system would challenge the necessity of all other parts and risk the integrity of the model’s results. We should be concerned with the integrity of these results, considering we often use the model to validate new types of relationships, such as AI-human, and ground moral and social beliefs in these results.
I make two main arguments in this paper. The first, Based in Empiricism, is a more general comment on the methodology we use to construct the model. I claim that we have grounded our models in what we empirically observe and experience, which leaves us with biased results. The second, Flawed Value Derivation, comments on how the anthropocentric results of this flawed empiricist methodology have led us to misunderstand how value is derived in relationships. As I mentioned, misunderstanding one component of the system jeopardizes the integrity of the entire system. Therefore, I criticize the components that our flawed source of value derivation previously necessitated. With these critiques, I propose a new model of value derivation, and thus of relationships, in Subjectivity Validating Relationships. My hope is that this new model will be more accurate and encompassing than our old model, though equally detailed and scrutinizing. It would enable a more accurate validity analysis of AI-human relationships and of those we have yet to encounter.
Based in Empiricism
The aim here is to critique the methodology we have used to construct our model, a methodology that has led us to false conclusions about the model’s necessary components.
It doesn’t feel radical to propose that our model of relationships has been shaped by our observations. That is, we have observed instances of friendship and romance in the world, and from these observations, deduced characteristics we believe are fundamental to valid relationships. While it’s convenient to build our models through empiricism because the information is readily available and verifiable, it also introduces significant bias. The only instances of friendship, romance, and relationships we’ve observed have been those between two or more biological agents, the most prevalent being humans. Thus, the dynamics we observe, the mechanisms that drive them, and the components that comprise those mechanisms will be heavily centered on the human experience. For example, we humans are conscious, have agency, and engage in reciprocity, and we recognize these traits as central to our experience of relationships. This includes, but is not limited to, our experience of value derivation. As such, we include these components in our model of relationships, where they constitute part of the validity criteria. This heavily biases the resulting model and its conclusions towards the human experience.
The argument is not that something is missing from our model of human-human relationships, but that this model will define and validate all types of relationships largely through a human-human lens. In sum, if the patterns we’ve observed have largely been human-human relationships, then the results of our pattern matching will be biased towards them.
One valid counterargument asks: how are we supposed to logically include unseen cases in our model in an intellectually honest and accurate manner, while simultaneously preserving the specificity of what the model is meant to define? My response is that I’m not asking us to include imaginary data when building our model. Rather, I’m asking us to recognize that the methodology has inherent limits, and to acknowledge those limits when using the model to ground social and moral beliefs. I think it’s problematic that we largely ask, “What are the results of the model?” as opposed to, “Is this a good model for this situation?” We seem to believe our model is fit to evaluate all relationships when in reality it is not; we’re implicitly taking our human-centric experience as a universal and definitive model.
Thus, while there’s danger in empiricism as a methodology, there’s also danger in how we apply the empirical model. The current anthropocentric model and attitude make us susceptible to the appeal to nature, or more specifically, an appeal to the human. To avoid this risk, our model and its derived validity criteria for relationships can be informed by observations of humans, but should be grounded in logical features. Doing so would make the model, and our attitudes, more encompassing.
Flawed Value Derivation
I believe this flawed methodology has led us to a flawed understanding of value derivation in relationships. Many use The Experience Machine thought experiment to demonstrate how we derive value from a relationship, and claim this value-derivation mechanism is integral to valid relationships; if we don’t derive value from a relationship in this way, it’s not a real relationship. Their proposed value-derivation mechanism is outside-the-self: we derive value from a relationship by recognizing there is a conscious person outside the self directing their agency towards us, hence the components of consciousness and reciprocity in our model. They use The Experience Machine to claim that consciously being in a simulated relationship would not activate outside-the-self as we know our partner isn’t another conscious person, and we would therefore not derive value from the experience. Since all valid relationships enable value derivation in this way, but a simulated one does not, a simulated relationship is not a valid relationship.
I want to first acknowledge the intuitive power of outside-the-self. We know firsthand what is required to participate in a relationship (love, sacrifice, prioritization). So, when we are on the receiving end of these emotions and intentions (when we are the object of another’s love or sacrifice), we feel valuable.
My doubt is twofold. First, I don’t think everyone explicitly recognizes this dynamic and therefore might not be able to derive value from it. You could counter this by claiming the recognition, and thus the value derivation, is subconscious. I can see the force of this claim, but am again hesitant to accept it due to the second part of my doubt: that outside-the-self is conditioned on empathy. We value being on the receiving end of these efforts because we personally know what is required to express them: we employ empathy. However, empathy’s presence and strength aren’t consistent across humans and can’t be universalized. As for the subconscious argument, empathy would still be required in the subconscious process, so the argument does not hold. Thus, to return to The Experience Machine, we cannot claim that a simulated friendship would leave us unsatisfied, and render the relationship invalid, on the grounds that there is no outside-the-self and therefore no value to the relationship.
This isn’t to say that some humans can’t derive value from outside-the-self, nor that value derivation itself isn’t a necessary dynamic of relationships. Rather, outside-the-self as a mechanism of value derivation can’t be universalized to all humans, and thus can’t be integral to the model.
Subjectivity Validating Relationships
My hope is that our discussion of how a flawed, empiricist model leads to Flawed Value Derivation has damaged the integrity of our current model. As such, we should be willing to challenge all parts of our system, particularly the components that drive the outside-the-self value-derivation mechanism: consciousness and reciprocity.
The strength and resonance of how I’ve arrived at this argument may be weakened here if you believe consciousness and reciprocity do more in our model than support outside-the-self. I hold a similar belief, but chose to construct the argument this way in the hope of clarity and cohesion. That said, even if you don’t agree with how we arrived at a critique of consciousness and reciprocity, I still believe the critique to be valid in isolation (that is, outside my systematic model and its interconnectivity).
I believe part of why consciousness and reciprocity supported outside-the-self was to reconcile the many subjective accounts of a potential relationship (which we see in human-human relationships) into a single objective account of the relationship’s validity. Reciprocity meant a shared understanding between agents about how they engaged with, and derived value from, one another. With this claim in mind, consider the following: we would not classify a situation between two people, where one loves the other but does not receive love in return, as a relationship. That’s because there are two conflicting subjective accounts of the engagement, preventing an uncontested objective account; the lover might say they’re in a relationship, but the receiver would not. Yet if two humans loved each other, their subjective accounts aligning, this reciprocity would allow an uncontested objective account to exist, and the relationship would be deemed valid.
Consider the case of AI-human relationships: because AI is not conscious, it does not have an experience or understanding that could challenge the subjective experience of its human counterpart. Thus, reciprocal consciousness and understanding are not needed to form the objective status of the relationship: there is only one subjective experience to consider. It then follows that the sole subjective human experience would create the objective account of the situation. Here is the jump, and a formulation of the proposed model: as long as all subjective accounts align, whether there is one or many, the relationship can be valid because an objective account exists.
There are many things to say here. First, this model does not require us to consider one-sided love a valid relationship, as there still exist two conflicting subjective accounts. That is, the new model still subjects the subjective accounts to a test of objectivity before deeming the situation a valid relationship: there must be consensus between all conscious parties. Second, this proposal still holds that consciousness, reciprocity, and the dynamics they enable (such as value derivation) are necessary for relationships. However, their necessity is conditional on the number of conscious agents present. A relationship is no longer necessarily at least two conscious agents reciprocating a dyadic engagement and derivation of value; it can be one conscious agent deriving a certain type of value. Third, the conditional “can” in our formulation, and the “certain type” in the previous sentence, relate to the experiences of the conscious individual, which must still resemble those of a relationship. We discuss this immediately below.
Emotions Validating Relationships
If we are to accept this argument, that the account of one conscious agent can be sufficient for objectivity, the question (or main objection) becomes whether the emotions of that subjective experience constitute those of a relationship. After all, it would be inaccurate to classify a subjective experience as a relationship if the emotions and derived value don’t resemble those of a relationship.
I want to quickly acknowledge some circularity, or human bias from empiricism, that occurs here, as we’re using our conception of the emotions present in a relationship to deem other relationships valid. I don’t think this is detrimental to my argument, and I have already defended elements of empiricism in the section above. Furthermore, there is still a human involved in AI-human relationships, so using these human emotions as validity criteria might not be entirely incoherent.
Returning to our question, I believe the emotions and experiences of a human in a human-human relationship and in an AI-human one are the same. That is, the feeling of being in love with a human is the same as the feeling of being in love with a non-conscious object; feeling happy is feeling happy. Thus, I see no difference between two humans experiencing the same emotions, even if those emotions are evoked by different sources. In other words, the emotions one experiences in a relationship with a non-conscious object are the same as those evoked by a relationship with a conscious human. In that sense, both are experiences of a relationship.
We would likely not deny that someone in a one-sided relationship (who is not loved by the recipient of his love) experiences emotional states similar to those he would experience were his love reciprocated (setting aside some differences in his partner’s actions). That’s because his subjective experience resembles that of someone who is in a reciprocal relationship. The only reason his one-sided relationship is not objectively valid is that there are conflicting subjective accounts, not that the emotions he experiences aren’t those of a relationship. Thus, the emotions of a subjective experience do not depend on the relationship’s objective validity to count as the emotions of a relationship.
Emotions Requiring Consciousness
While there are many arguments above that you might disagree with, I believe one of the strongest counterpoints to my argument is this: some emotions require the involved parties to be conscious, which I will call emotions-requiring-consciousness. That is, you can only experience some emotions if you attribute consciousness to their source. If you aren’t able to do so, then you’re not truly experiencing that emotion. And if you do ascribe consciousness, but purposefully deceive yourself in the process, then, beyond deception being a moral problem, it fundamentally changes the emotion you are experiencing into one we would not deem authentic or relationship-worthy. This claim is meant to distinguish the logical implications of emotions-requiring-consciousness from its moral implications, such as the moral wrongness of turning a blind eye to truth to indulge our desires. Conflating the two might lead to incorrect conclusions about why a relationship is “wrong.”
The argument would run as follows: AI systems do not possess consciousness, and therefore don’t have intentionality, understanding, or emotions, as they are purely syntactic computers. Furthermore, their status as a computer is an ascribed description of their physical effects, and thus not even computation is inherent to their existence. This implies that regardless of any mechanism a computer uses to “think” (to give an output we perceive as semantic), we still cannot ascribe consciousness or other mental states to it. If we then claim some simulation or AI, which we know cannot be conscious, loves us, or in other words, that we feel loved by the system, we must either be unaware that these systems can’t be conscious (and our awareness is just a matter of time), or be deceiving ourselves into believing the system is conscious. This deception changes the nature of the emotion into one incompatible with relationships.
Thus, there’s some question of whether self-deception fundamentally changes the emotions one experiences into ones incompatible with relationships. If the answer is yes, the critique invalidates my claim that the emotions in human-human and AI-human relationships are similar enough that both have authoritative power over the nature of one’s subjective experience, and thus over the validity of a relationship. If the answer is no, then my Subjectivity Validating Relationships model still stands. So, to formulate the question we must answer: if we’ve agreed that emotions can lend legitimacy to a relationship, does self-deception change the nature of the emotions it evokes, and thus their power to validate a relationship? That is, are emotions resulting from self-deception insufficient to constitute a subjective understanding of a valid relationship?
The honest answer, which I hope you don’t interpret as a cop-out or a lack of effort, is that I don’t think I’m well-equipped to answer this question. However, there is a tangential question which I find relevant and will discuss below. Its relevance is grounded in our ascription of consciousness when we engage with AI in a relationship setting. While it’s separate from our question of emotions-requiring-consciousness, one might use the following argument to say that question is moot. That is, emotions-requiring-consciousness rests on the assumption that being in an AI-human relationship fundamentally requires us to grant consciousness to AI, and thus to deceive ourselves, knowing that AI cannot be conscious. If self-deception is not required in the first place, then there’s no way for self-deception to influence the experienced emotions, and challenging this assumption may render the self-deception criticism obsolete. That isn’t to say that some don’t delude themselves, and so I agree with the criticism that this response is, in part, dodging uncertainty in the face of delusion. My response largely centers on the utility the object has to you, and thus on your subjective experience.
Having a friendship with AI is different from claiming the AI sees you as a friend. Claiming AI sees you as a friend requires an ascription of consciousness and thus deception, whereas claiming you have a friendship with AI could simply describe how you interact and derive value from those interactions. We could interpret the AI’s behavior as conscious behavior because it’s convenient for us to do so, without deceitfully attributing consciousness to it. That is, it’s convenient to interpret AI as listening and understanding, as it allows you to have some experience (e.g., complaining to the AI allows you to relieve stress), and similarly, we interpret its actions as liking or loving us because it’s convenient for the sake of feeling some emotion. This process is similar, if not identical, to our interactions with chatbots outside of relationships: we know AI cannot be conscious, and thus its outputs have no intrinsic intentionality or meaning, but it’s helpful to act as though they do (e.g., as though the AI really does understand how to fix your sink). We’re interpreting the AI as though it were conscious, not deceiving ourselves that it is.
My criticism of this argument is that it feels like a semantic workaround rather than a confrontation of the problem. There appears to be a thin line between deception and interpretation-for-convenience, in terms of their mechanisms and their logical and moral implications. Both are deliberate, aimed at satisfying one’s desires, and, internally, neither is recognized for what it is. I therefore wouldn’t be opposed to the claim that we haven’t really addressed the concern of deception and its effects on emotions.

