In a post on Algorithmic Authority, one of Adrian Chan’s key points is to shift the critique of the authority claim from the recommendation itself to the user’s acceptance or rejection of the recommendation. “Authority, in short, depends perhaps on the user, not on the algorithm, for it is only on the basis of the user’s acceptance that authority is realized. It is subjectively interpreted, not objectively held.” However, there are a number of problems with severing the communication from its reception.
The example at hand came from Facebook’s flawed friend recommendations, which suggest that you friend or re-contact people, apparently based on an analysis of your social network and communication patterns. These recommendations are annoying partly because of visual design: they are featured prominently in the interface and impossible to suppress without effortful, power-user techniques like Greasemonkey scripts. But the bigger problem is social design. Dennis Crowley, co-founder of Dodgeball and later Foursquare, described the classic flaw in social-network-based recommendation as the “ex-girlfriend” problem: when an algorithm detects a person you haven’t communicated with in a while, there is often some very good reason for the change in communication pattern.
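To make the flaw concrete, here is a minimal sketch of the kind of heuristic the ex-girlfriend problem describes, assuming a toy interaction log; the names, numbers, and the reconnect_suggestions function are invented for illustration, not a description of Facebook’s actual system.

```python
from datetime import datetime

# Hypothetical interaction log: (contact, total past messages, last contact date).
# All names and figures are invented for illustration.
history = [
    ("alice", 240, datetime(2009, 3, 1)),   # frequent contact, then a long silence
    ("bob",    12, datetime(2010, 1, 20)),  # occasional contact, recently active
    ("carol", 180, datetime(2010, 1, 25)),  # frequent contact, still active
]

def reconnect_suggestions(history, today, top_n=2):
    """Naive 'people you may want to reconnect with' heuristic:
    score = past message volume * days of silence.
    The heuristic cannot tell an accidental lapse from a deliberate
    rupture -- the 'ex-girlfriend' problem."""
    scored = []
    for name, messages, last_contact in history:
        days_silent = (today - last_contact).days
        scored.append((messages * days_silent, name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_n]]

print(reconnect_suggestions(history, datetime(2010, 2, 15)))
# ['alice', 'carol'] -- alice tops the list precisely because the
# communication stopped, which may be exactly why it stopped.
```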
The flaw in social-network-based friend recommendations is related to Adrian’s recent critique of social network analysis. The social network map is not the territory: the visualization of lines of connection based on explicit communications leaves out critical information about the directionality and content of the communication. A gap in communication may be an unintentional lapse in attention or a deliberate rupture; frequent communication may be a sign of closeness or of a flamewar.
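A toy example of what the map throws away: two very different relationships can collapse into the same undirected edge weight once direction and content are stripped out. The message logs and sentiment labels below are invented stand-ins for the content a network visualization omits.

```python
from collections import Counter

# Hypothetical message logs: (sender, recipient, sentiment).
close_friends = [
    ("dana", "evan", "warm"), ("evan", "dana", "warm"),
    ("dana", "evan", "warm"), ("evan", "dana", "warm"),
]
flamewar = [
    ("frank", "gina", "hostile"), ("frank", "gina", "hostile"),
    ("frank", "gina", "hostile"), ("frank", "gina", "hostile"),
]

def as_undirected_edges(log):
    """Collapse a message log into undirected edge weights --
    the usual social-network-map view of a relationship."""
    edges = Counter()
    for sender, recipient, _sentiment in log:  # direction and content dropped
        edges[frozenset((sender, recipient))] += 1
    return edges

print(as_undirected_edges(close_friends))
print(as_undirected_edges(flamewar))
# Both pairs show up as a single tie of weight 4: the mutual warmth and
# the one-sided hostility are indistinguishable on the map.
```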
One problem with this misreading of social network analysis results is that the more personalized and personal the recommendations, the more likely they are to trigger the Eliza effect, which is “the tendency to unconsciously assume computer behaviors are analogous to human behaviors.” The more a computer impersonates a human, the more people tend to anthropomorphize it and respond emotionally to it as though it were human. The converse reason for a strong emotional response to a poorly personalized recommendation can also come into play. The “uncanny valley” is the name for the disconcerting effect of computer simulations that are nearly, but not quite, human. People find simulations that are close to human much more unsettling than simulations that are more cartoonlike.
It is risky to simply dismiss the effect of pervasive messages, even messages that are not acted upon. Marketers have long understood the psychological effects of communication: marketing messages and frames affect consciousness even if the listener takes no immediate action, even if the listener superficially ignores the message, even if the listener superficially disagrees with it.
You can’t unsee and you can’t unhear. This effect is most visible at the extremes; hence the disturbing effect of Chatroulette, the random video chat site that has attracted people in search of random conversation and entertainment, along with plenty of flashers. If someone doesn’t want to see the private parts of random strangers, they should stay off Chatroulette; clicking past the flasher doesn’t solve the problem, because you can’t unsee.
Sure, bad social system recommendations are merely annoying; they don’t make us take any action we don’t want to take. But just because we haven’t taken action doesn’t mean the recommendations have had no effect.
The holy grail of internet marketing has been to make recommendations that are powerful and compelling because they are personal, drawing on a wealth of information about the user’s actual behavior and actual social network. The lesson for social designers is that it is possible to make recommendations that are not-quite-right, and that are more annoying to users precisely because they are more personal. Getting personal is touchy; it requires care and caution, not overconfidence.
Adrian’s post on Algorithmic Authority has a broader scope, dealing with the larger sociological implications of the idea of algorithmic authority proposed by Clay Shirky and refining some distinctions on the topic that I proposed here. If you haven’t read it, it’s worth your consideration.