Trust is contextual

As healthcare reform passes, I’m having a strong but respectful disagreement with a friend over Twitter on the nature and impact of this change. I know him through professional circles and I’d recommend him for jobs; and I’d trust him to pick me up on the highway if I were stuck in his town. But I wouldn’t trust him for advice in political matters. There are friends whose political judgement I trust and seek out, but whom I wouldn’t trust to take me to a concert or a movie I’d like. There are people whose advice I’d take personally but not economically, and vice versa. Trust is faceted. Trust is contextual.

Craig Newmark, who combats untrustworthy behavior every day on Craigslist, believes that distributed trust is the next big problem for the internet to solve. If Google, Facebook, and Amazon got together, they might be able to address this problem of untrustworthiness online. Jay Rosen, whom I trust and think is wise about many issues, agrees a distributed trust network is needed.

I am a huge fan of distributed solutions to many problems, but not this one. Trust is contextual. Even trust within a specific online service can’t be generalized. The other week I was scouring the music recommendations of a prolific Amazon reviewer with deep musical taste in areas I like. That same reviewer’s opinions about religion and politics are 180 degrees away from mine. He could buy me a recording sight unseen and I’d probably love it, but I couldn’t read most of his book recommendations without throwing them.

Bruce McVarish believes that the trusted circle will start with people we trust in real life. But even in a close personal circle, trust is contextual. Even among my family and closest friends, there are different people I would trust for different things.

Trust can be extended along specific and faceted lines. In the important area of transactional trustworthiness, eBay-style ratings are critical. A distributed solution for transactional trustworthiness could be quite useful. It would be handy to have distributed trust metrics that could be extended across services in a given domain – I’d love to follow the Amazon music recommender across his various music services on Last.fm and Spotify and so on. But trust in any one domain doesn’t extend to other domains. Someone might be a meticulously reliable seller of used books and electronics, but they might be a horrible filter for news, which is the area that Jay Rosen cares about; or they might be personally unkind, so they wouldn’t make my personal trust list.

There is no general distributed trust solution to be had. Trust is always contextual.

Postscripts

Update 1: a few more thoughts, in response to some questions on Twitter and in comments

Trust, here, is the inverse of reputation.

For the facets of trust where technology might tractably help, different facets call for different kinds of technological augmentation.

* In the area of transactional trust – do I trust you to deliver this used book on time in the condition you promised – trust may be represented as a number. If your trust score is 99/100 I’ll buy the book. If your trust score is 62/100 I won’t buy the book from you.
* But in the context of opinions and tastes – would I like your movie recommendations – the number is not so useful by itself, but only as an indicator about similarity of opinion. So, if an algorithm says that our tastes are 75% similar, then I may want to subscribe to your movie recommendations.
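The distinction above can be sketched in code. This is a minimal illustration, not any real service’s logic: the function names, thresholds (95 for buying, 0.75 for subscribing), and score scales are my own assumptions, chosen to echo the numbers in the examples.

```python
# Two different uses of a trust number, per the distinction above.

def should_buy_from(seller_score: float, threshold: float = 95.0) -> bool:
    """Transactional trust: the score alone drives a yes/no decision.
    Scores are assumed to be on a 0-100 scale, as in the post's example."""
    return seller_score >= threshold

def should_subscribe(taste_similarity: float, cutoff: float = 0.75) -> bool:
    """Opinion trust: the number is only an indicator of similarity of
    taste; crossing the cutoff suggests subscribing, not transacting."""
    return taste_similarity >= cutoff

print(should_buy_from(99.0))   # 99/100 clears the bar: buy the book
print(should_buy_from(62.0))   # 62/100 does not: don't buy
print(should_subscribe(0.75))  # tastes 75% similar: worth subscribing
```

The point of keeping these as two separate functions is the post’s point: the same kind of number means different things in different contexts, so a single general-purpose “trust score” conflates decisions that should stay apart.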

So, for transactional trust, a distributed trust solution might aggregate a reputation score. For opinion trust, a distributed trust solution might calculate a score based on actions in multiple services, but then aggregate the actions (like movie recommendations) across services. Last.fm’s music similarity scale works along these lines already, based on aggregating listening information. Last.fm allows users to scrobble their listening across services, shows how similar your taste is to others, and then allows you to explore the listening of these others.
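A toy version of this cross-service aggregation might look as follows. This is a hypothetical sketch, not Last.fm’s actual algorithm: the service names in the sample data are illustrative, and Jaccard similarity over artist sets is just one simple choice of similarity measure.

```python
# Cross-service opinion trust, loosely in the spirit of scrobbling:
# merge one user's listening across services, then score taste
# similarity between two users.

def aggregate_listens(per_service: dict[str, set[str]]) -> set[str]:
    """Merge one user's listened-to artists across services."""
    merged: set[str] = set()
    for artists in per_service.values():
        merged |= artists
    return merged

def taste_similarity(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: shared artists / all artists either has heard."""
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

me = aggregate_listens({"lastfm": {"Eno", "Reich"},
                        "spotify": {"Reich", "Glass"}})
you = aggregate_listens({"spotify": {"Reich", "Glass", "Adams"}})
print(taste_similarity(me, you))  # 2 shared of 4 total -> 0.5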

Also, to anticipate another question, in social media, recommendations and other such trust-building actions are social gestures, not just quantitative ones. I recommend a movie to show off my own taste, to be generous to friends, to amuse, to give a gift, to express my similarity or difference with a public, and so on. And within this social dynamic, it would be helpful to be able to identify people who have enough similarity to want to share these gestures; to aggregate the identification across services, and to aggregate the recs across services.

Update 2 in response to comments from Thomas Vander Wal and Charles Green.

It’s important to consider that the set of circumstances where technology can help with trust problems is really constrained. Both Vander Wal and Green spend much of their time helping people communicate better in groups – in these situations, the responses are largely about people, customs, culture, values, leadership, facilitation, tummeling. Tools are a small part of the response, compared to the human aspects. Charles Green summarizes well: “The things that can be scaled up through numbers on the internet are important, but limited.”

From the perspective of technologists, numbers and tools are hammers, and social concerns may look like nails. Technologists tend to overestimate the set of problems that can be addressed by technology. Part of the job of those of us working in social software and social media is to be analytical enough, and humble enough, to identify the things that are tractable with metrics and tools, and those things that need to be handled by people regardless of tools.

Even the frame of “problems to solve” that technologists bring is part of the problem, sometimes. Even with good will, people are always different, with different perspectives and interests. Buber, Levinas (and other purveyors of wisdom) remind us that people being different is a fundamental condition of life and an opportunity, not a problem. Sometimes there is no “solution” because “problem” is often the wrong frame.

68 thoughts on “Trust is contextual”

  1. Thomas Vander Wal enumerates a number of aspects that people often have in mind when they use the word: “Respect, comfort, dependable, valued, honest, consistent, believable, loved, treasured, etc.”

    When I hear these words used near conversations about social software, the first thing that comes to mind is that perhaps there are issues that are out of scope for software to address. When people and communities have conflicting understandings, interests, behaviors and values, communication is needed – people may use rooms or telephones or software, but the main thing is the communication, not the tools used for it.

    Charles Green summarizes well: “The things that can be scaled up through numbers on the internet are important, but limited.”

    From the perspective of technologists, numbers and tools are hammers, and social concerns may look like nails. Technologists overestimate the set of problems that can be addressed by technology. Part of the job of those of us working in social software and social media is to be analytical enough, and humble enough, to identify the things that are tractable with metrics and tools, and those things that need to be handled by people regardless of tools.

  2. Trust is an assessment. It can be grounded or ungrounded, not true or false. To make a well-grounded assessment of trustworthiness, the requirements are clear, but not often pursued with vigor or rigor. The underlying desire to trust (or inverse tendency to distrust) leads often to early action.

    Trust is an assessment of four characteristics:

    1. Competence: can this person do what they say they will do?

    2. Capacity: does this person have the time and resources to do what they say they will do?

    3. Sincerity: is this person committed to doing what they say they will do?

    4. Relationship: will this person address any issue that arises in connection with what they say they will do?

    Making a well-grounded assessment of trustworthiness is accomplished by following these steps (which really should be taught in grade school):

    1. Why are you making this assessment? (Be clear what is at stake.)

    2. What is the specific domain of action? (Be precise, avoid collapsing domains.)

    3. What standards of assessment to apply? (Identify standards accepted by the community of competent practitioners in this specific domain of action.)

    4. Seek recurrent evidence of competent action. (What is the record of past performance? “Ratings” referred to in the original post are relevant here.)

    5. Consider contrary evidence of incompetent action. (Identify the frequency, recency and circumstances.)

    6. Remember trust is only speculation about future performance. (The best practitioners do mess up. Take full responsibility for trusting someone else.)

    7. Make specific Requests, Promises, Offers within this domain of action.

    On the Internet, where “gossip trumps truth”, technological support for making well-grounded assessments of trustworthiness is evolving and welcome.
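The commenter’s framework lends itself to a small data-structure sketch. This is a hypothetical illustration of the four characteristics and the domain scoping, with names and an all-or-nothing rule of my own choosing, not an implementation anyone has proposed.

```python
# Trust as a domain-scoped assessment of four characteristics.
from dataclasses import dataclass

@dataclass
class TrustAssessment:
    domain: str          # e.g. "selling used books", never trust-in-general
    competence: bool     # can they do what they say they will do?
    capacity: bool       # do they have the time and resources?
    sincerity: bool      # are they committed to doing it?
    relationship: bool   # will they address issues that arise?

    def grounded(self) -> bool:
        """Treat the assessment as well-grounded only when all four
        characteristics hold, and only within this domain of action."""
        return all([self.competence, self.capacity,
                    self.sincerity, self.relationship])

bookseller = TrustAssessment("selling used books", True, True, True, True)
print(bookseller.grounded())  # True, but only for this domain
```

Because the domain is part of the record, a `True` here says nothing about any other domain, which is the post’s thesis restated as a type.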
