Social recommendations and the Eliza Effect

In a post on Algorithmic Authority, one of Adrian Chan’s key points is to displace the critique of the authority claim from the recommendation itself to the user’s acceptance or rejection of the recommendation. “Authority, in short, depends perhaps on the user, not on the algorithm, for it is only on the basis of the user’s acceptance that authority is realized. It is subjectively interpreted, not objectively held.” However, there are a number of problems with severing the communication from its reception.

The example at hand comes from Facebook’s flawed friend recommendations, which suggest that you friend or re-contact people, apparently based on an analysis of your social network and communication patterns. These recommendations are annoying because of visual design: they are featured prominently in the interface, and impossible to suppress without effortful, power-user techniques like Greasemonkey scripts. But the bigger problem is social design. Dennis Crowley, co-founder of Dodgeball and later Foursquare, described the classic flaw of social-network-based recommendation as the “ex-girlfriend” problem: when an algorithm detects a person you haven’t communicated with in a while, there is often some very good reason for the change in communication pattern.
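
To make the failure mode concrete, here is a minimal sketch of the naive “reconnect” heuristic in Python. The names, the data, and the 90-day threshold are invented for illustration; this is not Facebook’s actual algorithm, just the shape of the flaw.

```python
# A minimal sketch (not Facebook's real algorithm) of the naive
# "reconnect" heuristic: flag anyone you haven't messaged lately.
from datetime import datetime, timedelta

def lapsed_contacts(message_log, now, lapse=timedelta(days=90)):
    """Return contacts not messaged within `lapse`.

    message_log maps contact -> list of message timestamps.
    The flaw: every long silence is treated as an accidental lapse,
    but it may be a deliberate rupture (the "ex-girlfriend" problem).
    """
    suggestions = []
    for contact, timestamps in message_log.items():
        if timestamps and now - max(timestamps) > lapse:
            suggestions.append(contact)
    return suggestions

log = {
    "old_coworker": [datetime(2009, 11, 2)],    # a genuine lapse
    "ex_girlfriend": [datetime(2009, 10, 15)],  # deliberate silence
}
# Both contacts look identical to the heuristic:
print(lapsed_contacts(log, datetime(2010, 3, 1)))
```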

The flaw in social network based friend recommendations is related to Adrian’s recent critique of social network analysis. The social network map is not the territory: the visualization of lines of connection based on explicit communications leaves out critical information about the directionality and the content of the communication. A gap in communication may be an unintentional lapse in attention or a deliberate rupture; frequent communication may be a sign of closeness or of a flamewar.
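
A second sketch, again with invented data, shows how much the map loses: once interactions are collapsed into undirected edge weights, a close friendship and a flamewar produce identical edges, and any recommendation algorithm downstream can no longer tell them apart.

```python
# Illustrative data only; no real SNA library is assumed.
from collections import Counter

interactions = [
    # (sender, receiver, sentiment) -- sentiment stands in for the
    # content dimension that a bare communication graph throws away.
    ("alice", "bob", "+"), ("bob", "alice", "+"),
    ("alice", "bob", "+"), ("bob", "alice", "+"),    # closeness
    ("carol", "dave", "-"), ("dave", "carol", "-"),
    ("carol", "dave", "-"), ("dave", "carol", "-"),  # flamewar
]

# Collapse to undirected edge weights, as a naive network map does.
edge_weights = Counter(frozenset((s, r)) for s, r, _ in interactions)

# Both pairs reduce to an identical weight-4 edge; direction and
# sentiment -- the facts that distinguish friendship from feud --
# are gone before any recommendation algorithm even runs.
print(edge_weights)
```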

One problem with this misreading of social network analysis results is that the more personalized and personal the recommendations, the more likely they are to trigger the Eliza effect, which is “the tendency to unconsciously assume computer behaviors are analogous to human behaviors.” The more a computer impersonates a human, the more people will tend to anthropomorphize the computer, and have a strong emotional response to that computer which acts as a human. The converse can also come into play: a strong emotional response to a poorly personalized recommendation. The “uncanny valley” is the name of the disconcerting effect produced by computer simulations that are nearly, but not quite, human. People find simulations that are close to human much more annoying than simulations that are more cartoonlike.

It is risky to simply dismiss the effect of pervasive messages, even messages that are not acted upon. Marketers have long considered the psychological effects of communication; marketing messages and frames affect consciousness even if the listener takes no immediate action, even if the listener superficially ignores the message, even if the listener superficially disagrees.

You can’t unsee and you can’t unhear. This effect is most visible at the extremes; thus the disturbing effect of chatroulette, the random-navigation chat program that has attracted people in search of random conversation and entertainment, and plenty of flashers. If someone doesn’t want to see the private parts of random people, they should stay off chatroulette; clicking past the flasher doesn’t solve the problem, because you can’t unsee.

Sure, bad social system recommendations are merely annoying; they don’t make us take any action we don’t want to take. But just because we haven’t taken action doesn’t mean the recommendations have had no effect.

The holy grail of internet marketing has been to make recommendations that are powerful and compelling because they are personal, drawing on a wealth of information about the user’s personal behavior and actual social network. The lesson for social designers is that it is also possible to make recommendations that are not quite right, and that are more annoying to users precisely because they are more personal. Being personal is touchy; it requires care and caution, not overconfidence.

Adrian’s post on Algorithmic Authority has a broader scope, dealing with the larger sociological implications of the idea of algorithmic authority proposed by Clay Shirky, and refining some distinctions on the topic that I proposed here. If you haven’t read it, it’s worth your consideration.

10 thoughts on “Social recommendations and the Eliza Effect”

  1. Adina, thank you for a wonderfully rich, balanced and humanistic analysis. 🙂

    I guess I’ve lived in the uncanny valley for too long, but imvho, while it is not possible to undo an environmental influence, it is possible to gradually develop antidotes — also known as active coping — to unpredictable noise.

    In a sense, that might be part of the rite of passage toward “internet savviness”: The ability to properly read a genuine commercial leaflet in the midst of phishing expeditions; the maturation pattern of annoyance-turned-desensitization-turned-empathy toward trolling; etc.

    So these pervasive messages, while uncanny, may also provide a training ground of inner reconciliation, in this age of wide deployment of the Eliza effect, serving a multitude of values often undiscernible even to its designers. 🙂

  2. Adina,

    Thanks for your kind words, and for reading a post that is a bit of a beast to get through. I wrote it because I thought there were interesting distinctions to make in the concept of authority, starting with the difference between authoritative claims and institutional authority — claiming authority in statements vs. referring to the book of law, if you will. And because I thought the social web’s contribution to changes in our cultural references to authority (from institutional to social, as per Shirky) is worth thinking about.

    Insofar as algorithms play into authority, and pertaining to the example of socialized algos and their use in FB friend recommendations, there is a difference between the recommendation made and the user’s in/action as a response. And I wonder if there’s an updated Eliza effect that might cover the aspects of passive social observation you mention here.

    I would ground it in the linguistic mix of system messages and real human messages. Both are communication, but the former is system generated and the latter is authored. The former is non-intentional from a linguistic perspective; the latter is intentional. Rather than ascribe the anthropomorphization of UI/UX to “simulation,” we would instead argue that since the social web is human content plus software messaging, the post-Eliza effect obtains from the discursive mix. As when advertising is too closely tied to editorial in print/media.

    The uncanny then might result from a slippage of system messaging — inauthentic discourse in the linguist’s terms — blended or mixed into authentic (human) discourse. Uncanny goog ads in gmail, etc. One can then explain the experiential fx of the uncanny juxtaposition of true and false speech by means of language and communication. We don’t then have to use the anthropomorphization argument, which shifts the uncanny to the computer, away from the interpretive act of the user (reader).

    This is where the fourth wall, so to speak, comes into play with product placement and advertising in feed messages, etc. Twitter could long ago have enabled its own form of goog adwords as yellow commercial tweets — mixed in with the follower feed. But we would have opposed it. We’re not yet open to advertising on communication platforms (imagine ads prior to phone calls, or mixed in with voicemail; and Twitter is for many too much a communication tool for the advertising we permit of publishing media).

    When the fourth wall moves, and we are exposed to ads in communication platforms, the debate will center on in/authentic communication or speech. I think that’s where it is w/ the surface fx of system messages that now use social algos. And where, in some cases, it becomes a privacy issue also (so and so just bought tickets to ____).

  3. I like this piece, as it speaks to what many complain about: the disconnection between the who and the what (as well as the directions of the conversation and the granular components of common interest, the granular social network).

    The Eliza effect and the Uncanny Valley (Silicon Valley really needs to stop listening to itself) are good concepts, but they can be overstated. In user testing, rather human responses are often seen as clearer communication. The “thank you” in ATM transactions is the closure statement to a transaction; it is part of the common social scripts in cultures that indicate a set of interactions is done. Human or not, it has a place, and it is very clear in usability testing that staying close to these known social scripts is very helpful for clear communication and interaction.
