Facebook OpenID is a really big deal

The news that Facebook will allow users to log into Facebook with a third-party OpenID may sound like a technical detail, but it is a really big deal. One of the crippling aspects of the Facebook Connect strategy is that it welded user authentication to every other Facebook service available to developers. And it gave Facebook the ability to aggregate individuals’ authenticated actions across the web. This bargain leaves everyone dealing with Facebook feeling used – developers give up far too much, and so do end users.

By supporting OpenID, Facebook decouples authentication from the provision of services. This gives more power to developers, and more power to end users. And, by enabling a more win-win relationship, it increases the chance that Facebook will take advantage of its position at the core of the big social graph to offer immensely valuable network infomediary services.

Facebook open? Not until they fix the privacy model

Facebook has just taken two important steps away from being a walled garden by opening the API to stream data and by supporting OpenID. These things are very good. As someone who’s complained about the walled garden model, I think these are steps in the right direction. But these steps won’t get Facebook very far until and unless they fix the privacy model.

Facebook is simultaneously too private and not private enough. This gets in the way of using it for information sharing AND for private information.

Facebook’s model for information about people is symmetric and mostly private. (Pages are a limited exception.) I can only see information about you and from you if we mutually declare each other to be friends. This puts a brake on the discovery of new people and new information. If you post an interesting link on Twitter, I can navigate to your stream of tweets and choose to follow you. If you comment on my friend’s link in Facebook, I can’t see enough about you to determine whether I want to know more. Even if I could, “friending” is a different social gesture – I won’t friend you because I don’t know you. The mostly-private nature means that search is useless except to find people you already know.

ReadWriteWeb explains how this dramatically limits the utility of the newly open API:

Unfortunately, the data that developers are able to work with is severely limited. They will simply be able to make a call for a user to Facebook and get back the friends’ streams that this particular user has the permission to see. … Terms of Service will prohibit eyes outside of a user’s Facebook friends from seeing the massive amounts of friend-limited data. In other words, this is permission to build more interfaces for Facebook. That’s cool, but that’s not really what the world needs – more interfaces for giving Facebook love.

Meanwhile, Facebook’s model is not private enough. Facebook has been trying to be private but viral, and that makes it really hard to be private. Facebook actions leave trails all over the place. For example, if I comment on my friend’s link, others can see my comment. Facebook does have some granular controls over categories of friends and what is exposed to them, but these controls are not easy to use. The API makes it possible for developers to disclose information about your activities in unintended contexts, which may open new opportunities for privacy violation.

Until Facebook fixes its privacy model so that what’s open is open and what’s private is private, supporting open standards doesn’t make Facebook usefully open, and may make privacy issues worse.

Empathy for Amazon (but not that much)

When the #amazonfail kerfuffle hit over Easter weekend, I wanted to give Amazon the benefit of the doubt, at least a little bit. It was clearly outrageous that books with gay and lesbian themes were classified as “adult” and removed from main search results, including kids’ books like “Heather Has Two Mommies”. But the tweets and posts assuming organized homophobia on Amazon’s part were premature. The braindead customer service response simply citing policy didn’t convince me that malice was behind it either. Customer service people are trained to respond to questions using existing documentation, which is often correct and sometimes lacking in common sense.

If you’ve worked at an organization, you know that things go wrong, sometimes badly. When this happens, people need to figure out what went wrong and coordinate a response. When the rain of wrathful posts was falling on #amazonfail, I was imagining Amazon folk being pulled from family dinners around the world to investigate what happened and figure out how to respond, including fixing the problem and communicating with the people affected. Socialtext is much smaller than Amazon, but this process is painfully familiar.

Amazon’s response fell short of what it could have been. Their first public response was that it was a “glitch.” This may be technically true, but it doesn’t acknowledge the genuine and valid outrage that a powerful service like Amazon was marginalizing a group of people that faces real discrimination. Even if it was a technical accident, the right response was “I’m sorry that this glitch had the effect of suppressing books by GLBT authors; we have no intent of discriminating, we support gay rights, and we will fix this as soon as humanly possible.” Their next response was posting a form letter to the comments section of a few blogs. What they should have done instead was have spokespeople talking like human beings. That’s hard to do. It’s easier to post a form letter; it’s much harder to be human and nondefensive in the face of customer outrage. Amazon missed an opportunity to respond in a human way, and earn back the respect of angry customers with interest.

Watching the GetSatisfaction crew handle the complaints about their policy on non-company-sponsored pages, improving their service in response to the criticism, and watching Rashmi at Slideshare handle customer anger at a misunderstood April Fool’s prank provided inspiring examples of companies really engaging in a professional and human manner with angry customers.

Twitter is the new headline: how blogging and Twitter are complementary

A couple of weeks ago, Jay Rosen asked whether this was the dumbest newspaper column about Twitter ever. A game-critic blogger at the New Orleans paper makes fun of Twitter by attempting to write his review of an Xbox game in 140-character increments. The reason this is idiotic is that the author misses the complementary relationship between Twitter and blogging. You don’t write the review itself on Twitter. You write a normal essay, and then share the link on Twitter with a catchy phrase.

The conventional lament is that Twitter is killing blogging, since bloggers are now spending their time and sharing their ideas on Twitter. As Robin Hamman observed last fall in this Headshift post, Twitter (and Facebook) are siphoning off a lot of the energy from personal diary blogging – the proverbial sandwich post – or simple link sharing. Bloggers observe that they post less frequently because they tweet ideas more often.

While Twitter may be siphoning blog energy from very short posts, Twitter also increases interest in more substantive blog posts and discussion around blog ideas. An increasing amount of blog traffic is driven from Twitter and Facebook status updates (good stats welcome). Through link posting and retweets, the social network is used to share and spread interesting posts and call attention to good bloggers. Essentially, Twitter is the new headline. Blogger Louis Gray takes this a bit too far, I think, when he recommends that bloggers change their headlines into catchy Twitteresque phrases for SEO purposes. A good blog title is catchy enough to be interesting, and explicit enough to make sense in search results months later. A good Twitter callout is catchy, makes sense in the current social context, and doesn’t need to be as explicit. There’s no reason to make all blog titles into Twitter callouts.

Reactions and conversation about blog post ideas take place on Twitter, in Facebook status, and on Friendfeed. Journalism professor Jay Rosen has worked out a phased process for developing ideas: Twitter for mindcasting short thoughts and links, Friendfeed for assembling links and ideas together with discussion, and the blog for long-form essays. Update: Science blogger BoraZ writes about a similar social journalistic workflow, carrying the process all the way through composing articles and books. Christian Crumlish has actually used the workflow from Twitter through book composition, with a wiki as the tool for book editing and feedback, for O’Reilly’s Designing Social Interfaces.

The relationship between social messaging and blogging can be particularly handy in the workplace, where social messaging is used to call attention to, and discuss, timely and relevant work-related posts and updates. The ease of sharing and discussion motivates people to write useful things, because they will be shared, discussed and used.

In summary, Twitter and blogs are highly complementary. The role of Twitter isn’t to limit thoughts to what can be expressed in 140 characters or less; it’s to call attention to longer-form writing, and to discuss the ideas through the social network.

The Yiddish Policeman’s Union

Michael Chabon’s The Yiddish Policeman’s Union was excellent Passover reading, for reasons I’ll explain. The book is set in a counterfactual present where secular Yiddish culture wasn’t crushed by the Holocaust. Instead, it migrated to a gritty frontier district in Alaska with Yiddish-speaking cops, crooks, lowlifes, idling chess-players, dissolute klezmorim, pork-loving secularists and insular Hasidim. According to a review at the Yiddish Book Center website, “the entire project actually began life as an essay from the late 1990s about a phrasebook called Say It in Yiddish, which seemed to Chabon to be a guidebook to a land that has never existed, where one needs to know how to say ‘What is the flight number?’ and ‘I will call a policeman’ in mameloshn.”

In the book’s fictional world, Sitka, Alaska becomes the refuge of millions of Jews after the Holocaust, when the Zionist settlement in Israel was crushed. The refuge was temporary, the 60-year agreement is about to expire, and most Jews are facing deportation once more. The irreverent homicide-detective hero is estranged from his ex-wife and nurses his alienation in a world’s-fair shot glass of slivovitz. The villains of the piece are a secretive Hasidic sect who operate an organized crime ring and conspire to bring about redemption with a violent messianic plot. The book explores classic Jewish themes of exile and redemption from a thoroughly secular, antimessianic perspective, making for tasty Passover reading.

The book was heavily advertised as the adventure of a literary author in the wilds of genre fiction. I was concerned that I’d find it over-written (for example, I hated Everything is Illuminated). But Chabon did a fine job of translating Chandler. His figurative language is apt. A few examples culled from a NYMag review: Landsman’s ex-wife “accepts a compliment as if it’s a can of soda that she suspects him of having shaken.” A pretentious, overly formal journalist speaks Yiddish “like a sausage recipe with footnotes.” An awkward father-son hug “looked like the side chair was embracing the couch.” The neo-Yiddish slang is entertaining – a cellphone is a shoyfer and a handgun is a sholem.

I had only two quibbles. The book shares a characteristic of many mystery-thrillers – the plot climax is convoluted and cartoonlike; I stop caring, skim, and then read back to parse what is supposed to have happened. I care about the resolution of the characters and themes, but the plot to destroy the world, whatever. I’m glad to see that the book is getting a film treatment by the Coen brothers, and you can see the wannabe movie scenes in the vehicle chases and underground escapes.

The other quibble is a bit of political correctness. Spoiler alert, if you haven’t read the book yet and care….
….
….
….
….
….
….
There are two key characters who die; one is a closeted gay man, and the other a macha bush pilot described as “lesbian in everything but sexual preference.” I thought we were 30-40 years past the date when people with sexuality off the center of the bell curve needed to meet a tragic end. It was entertaining that the woulda-been messiah was a gay ex-Hasid junkie who tied off with tefillin, but I wished that he had found a nice boyfriend somewhere along the line.

I strongly recommend the book. Chabon has written a fun translation of Chandler into “Jewish”; he does a great job with language, setting and atmosphere, a decent job with character and theme, and adapts the traditional plot in an entertaining manner. If you haven’t read it yet, enjoy.

How Twitter creates serendipity

Josh Porter makes a good observation: “a big difference between Facebook & Twitter is serendipity. Stuff ‘happens’ all the time on Twitter. Not really so much on FB.” Twitter’s serendipity is an outcome of its design. Twitter’s asymmetrical, mostly-public, searchable network creates serendipity. Facebook’s mostly-private, symmetrical network doesn’t.

Twitter generates serendipity with visible mentions and searches in your extended network. You can see replies from people you aren’t following. This allows you to expand your contacts and knowledge beyond people you already know. When someone asks an interesting question, you can do a search and watch the answers and responses unfold, bringing you to references and people you didn’t know before. By contrast, Facebook’s mostly-closed, symmetrical network makes it hard by design to see outside of your social network.
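
To make the structural difference concrete, here’s a minimal sketch in Python. The names and visibility rules are hypothetical simplifications for illustration, not either service’s actual data model:

```python
# Minimal sketch of asymmetric vs. symmetric network visibility.
# All names are hypothetical; the rules are deliberately simplified.

# Twitter-style: following is a one-way edge that needs no consent,
# and streams are mostly public.
follows = {
    "alice": {"bokardo"},      # alice follows Josh Porter
    "bokardo": {"carol"},      # Josh follows carol; alice does not
}

# Facebook-style: friendship is a mutual edge, and content is mostly
# visible only inside that edge.
friendships = {frozenset({"alice", "bob"})}

def visible_on_twitter(viewer, author):
    """Mostly public: anyone can read a public stream, follower or not."""
    return True

def visible_on_facebook(viewer, author):
    """Mostly private: visibility requires a mutual friendship."""
    return frozenset({viewer, author}) in friendships

# alice sees carol's reply in Josh's thread, even though she doesn't
# follow carol, and can start following her with no consent step.
print(visible_on_twitter("alice", "carol"))    # True
follows["alice"].add("carol")                  # one-way follow, done
# On the symmetric model, carol stays invisible until mutual friending.
print(visible_on_facebook("alice", "carol"))   # False
```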

Handles and hashtags also help with serendipity. Handles are unique, so you can do a search for @bokardo and see the stream of references to Josh Porter far more easily than if you searched for “Josh Porter”. This is a major advantage of Twitter over Facebook and LinkedIn, where a search for a common name yields so many results that it’s nearly impossible to find the person you want. Hashtags make it easy to create a topic by social convention and follow the thread. It is doubtful that Twitter intended handles to be useful for search and serendipity – they just used a convention that’s ubiquitous in consumer web services. Twitter doesn’t even have explicit support for hashtags – they arose as a social convention in the community. But as search has become an integral part of the Twitter experience, handles and hashtags help.
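
For the curious, here’s roughly what a handle or hashtag pivot looks like against Twitter’s search API of this era. The search.twitter.com endpoint and the response fields are from memory, and the hashtag is a made-up example, so treat this as an illustrative sketch:

```python
# Sketch of a handle/hashtag "pivot search" against Twitter's search API
# circa 2009. Endpoint and response shape are recalled from memory and
# may differ from the live service; the hashtag is hypothetical.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def pivot_search(query):
    """Fetch and print recent tweets matching a handle or hashtag."""
    url = "http://search.twitter.com/search.json?" + urlencode({"q": query})
    with urlopen(url) as response:
        results = json.load(response).get("results", [])
    for tweet in results:
        print("%s: %s" % (tweet["from_user"], tweet["text"]))

pivot_search("@bokardo")   # the stream of references to Josh Porter
pivot_search("#rcc09")     # follow a topic coined by social convention
```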

My favorite thing about Twitter serendipity is that “pivot search” on people and tags kicks in when you get actively engaged in a topic. Most design patterns intended to support serendipity run a query for you and deliver “recommended results” using some algorithm: an article about bank bailouts comes with several suggested articles on the same topic. But when you’re reading, you may or may not want to read more of the same. Personally, I’m more likely to follow the hand-picked links the author has chosen within the context of the article. The human mind is a better filter than the algorithm.

By contrast, when a person or topic is interesting on Twitter, you can easily pivot on the person or topic and explore. A Twitter hashtag search is likely to be interesting – more interesting than a generic tag search – because a tag points to an active conversation created in a social context, rather than an abstract topic. When you get interested in something, you can easily pursue it and discover interesting results. This “pivot search” design pattern may be ideally suited for infovores like me, and too implicit for people with other styles, but I really love it. It would be interesting to find out how many others use Twitter for pivot searches in this way.

In sum, several properties of Twitter’s design – asymmetry, mostly-public visibility, searchability, easy pivots – foster serendipity. Some of them were probably deliberate choices by Twitter’s designers; others may be sweet side effects. As part of the evolutionary experiment in social software, they provide great lessons to learn from.

Geithner-Summers plan and social decay

There’s a nasty hidden cost to the Geithner-Summers plan to buy distressed assets for more than they’re worth. A commenter on the Balloon Juice blog points out that by keeping mortgage assets on the books for more than they are worth, the plan gives the owners of foreclosed properties an incentive not to sell them. “If a mortgage is worth $400K and the house sells for $200K, the Title Holders would have to write down that $200K loss immediately. But, keeping that house abandoned and unsold means they don’t have to write down any losses.” Empty homes attract vagrants and copper-strippers, and cause neighborhood blight.

The obvious cost of the PPIP is the taxpayer ripoff. The Public-Private Investment Program, the plan from Obama’s financial team of Tim Geithner and Larry Summers, has investors take bad assets off of banks’ books for more than they’re worth, leveraged by taxpayer dollars. If the assets aren’t worth the inflated prices, taxpayers bear the loss. If the assets go up, taxpayers get only half the profit. The hidden cost is creeping social decay, caused by squelching the market in the real houses beneath the mountain of fantasy investments.
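
To see how lopsided the deal is, here’s a stylized payoff calculation. The capital structure (small matched equity slices plus a large taxpayer-guaranteed loan) follows press descriptions of the plan, but the specific numbers are hypothetical:

```python
# Stylized PPIP payoff sketch. The numbers are hypothetical; the structure
# (private and public equity split the upside, taxpayers backstop the debt)
# is a simplification of press accounts of the plan.
PRICE = 100.0            # inflated price paid for the asset
PRIVATE_EQUITY = 7.0     # private investor's stake
PUBLIC_EQUITY = 7.0      # Treasury's matching stake
GUARANTEED_DEBT = 86.0   # FDIC-guaranteed, i.e. taxpayer-backed, loan

def payoffs(asset_value):
    """Return (private investor, taxpayer) gain or loss at a given value."""
    equity = max(asset_value - GUARANTEED_DEBT, 0.0)
    loan_shortfall = min(asset_value - GUARANTEED_DEBT, 0.0)
    private = equity / 2 - PRIVATE_EQUITY
    taxpayer = equity / 2 - PUBLIC_EQUITY + loan_shortfall
    return private, taxpayer

print(payoffs(120.0))   # asset recovers: (10.0, 10.0) - profit split evenly
print(payoffs(50.0))    # asset craters: (-7.0, -43.0) - taxpayers eat the loss
```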

To arbitrage this market failure, nonprofits have been creating schemes to house the newly homeless in abandoned properties (the topic that started the Cole thread): http://www.nytimes.com/2009/04/10/us/10squatter.html?partner=rss&emc=rss

Hashtags for LocalTweeps: Geography is social

A few days ago, the LocalTweeps service reached my Twitter social network. To sign up for LocalTweeps, you tell it your zipcode and it broadcasts your signup on Twitter. LocalTweeps hopes to become a local directory with information organized by zipcode. This could be handy, but it doesn’t yet take advantage of an important aspect of geography, where the internet has a unique advantage over traditional directories: geography is social and contextual.

Where am I? The downtown neighborhood of Menlo Park, on the Peninsula, in the Bay Area, in Northern California, and so on, zooming outward. We use these different markers depending on context. The neighborhood is important for convenience and neighborly socializing. The Bay Area is big, so the regions matter when considering the travel radius for an event. The relevant geographical category sometimes coincides with a political jurisdiction (e.g. San Mateo County), and sometimes it doesn’t. That’s why it would be cool to be able to use tags, not just zipcodes, to identify events and places. A barbeque at a local park would be tagged with the neighborhood. An event at a venue would be tagged with a local region. Broader organizing would refer to larger regions, e.g. “Central Valley.”

In a medium with limited physical space, it makes sense to use a single criterion like zipcode to categorize locations and events. But on the internet, there’s no reason for that limit. People can, do, and will select subjective geographical categories based on context.

A couple of years ago, I attended a meeting hosted by the unlamented hyperlocal startup Backfence. Attendees at the Palo Alto meeting were frustrated because the service would not let them post news in neighboring Menlo Park, even though there are close ties between the towns: people are likely to live in one town and work in the other, and to shop and do cultural things the next town over.

So here’s the recommendation for LocalTweeps and other internet geography services: free your taxonomy. Let people tag events, and designate them according to what’s socially relevant. The address (and zipcode) will identify where an event is on the map. The tag will identify where it is in people’s cultural context.
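
A minimal sketch of what that freed taxonomy might look like as data, with hypothetical event names and tags:

```python
# Hypothetical sketch: each event keeps one objective locator (zipcode)
# plus any number of subjective, socially chosen geography tags.
from collections import defaultdict

events = [
    {"name": "Park barbeque", "zip": "94025",
     "tags": {"downtown-menlo-park", "peninsula"}},
    {"name": "Klezmer show", "zip": "94301",
     "tags": {"palo-alto", "peninsula", "bay-area"}},
]

# Index by tag, so people can browse by whichever region is socially
# relevant, rather than by zipcode alone.
events_by_tag = defaultdict(list)
for event in events:
    for tag in event["tags"]:
        events_by_tag[tag].append(event["name"])

print(events_by_tag["peninsula"])   # both events: the travel-radius view
print(events_by_tag["palo-alto"])   # just one: the neighborhood view
```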

Twitter, Facebook, and the unselfish API

In a ReadWriteWeb piece about the reverse network effect, Bernard Lunn writes that one of the ways social networking services can wear out their welcome is by making their user base feel exploited. Intrusive ads, aggressive marketing, or onerous terms of service can create dissatisfaction and eventual exodus. The RWW article has the end-user base of the service in mind, but I suspect the same dynamic pertains to the developer community. With that lens, it’s interesting to consider the very different ways that Twitter and Facebook handle APIs and integration.

Twitter’s API is unselfish. Using the straightforward REST API, developers can and do write clients, search tools, mapping tools, recommendation tools, analytics, personal organizers – a wide range of extensions. Twitter doesn’t do anything to constrain developers other than a rate limit. The lightest-weight sort of integration is RSS, and Twitter generates RSS feeds for queries and streams, making it trivially easy to disseminate data. The availability of applications helps build the Twitter user base, because the apps make Twitter more useful. Twitter’s business model is up in the air, but whether it moves toward paid accounts for power users, corporate accounts, or advertising, there will continue to be plenty of room for complementary apps.
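
As an illustration of just how light that lightest-weight integration is, here’s a sketch that consumes the per-user and per-query feeds. The feed URLs are recalled from memory of the era’s API, so treat them as assumptions; it uses the third-party feedparser library:

```python
# Sketch of RSS/Atom integration with Twitter circa 2009. The feed URLs
# are from memory of the era's API and may be defunct; illustrative only.
import feedparser  # third-party library: pip install feedparser

# Every public stream and every search query had its own feed.
user_feed = feedparser.parse(
    "http://twitter.com/statuses/user_timeline/bokardo.rss")
search_feed = feedparser.parse(
    "http://search.twitter.com/search.atom?q=%23hashtag")

# The tweet text rides in each item's title, so dissemination is trivial.
for entry in user_feed.entries[:5]:
    print(entry.title)
print(len(search_feed.entries), "tweets match the hashtag")
```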

Facebook’s API is built to serve Facebook more than developers. The original API constrained developers to exposing a limited user interface within Facebook’s strict design. The functionality encouraged the creation of apps that expanded the Facebook user base, because users were nudged to spam their friends. Given the limits of Facebook, applications tended to be shallow. The most successful app developers needed to focus relentlessly on novelty, because users would get bored with yesterday’s toy. Still, application developers put up with the limits because Facebook gave them access to oodles of users.

Then, with the move toward a Twitter-style user interface and the strategic shift toward Facebook Connect, Facebook hid and de-emphasized apps in the user interface. App providers, and users who were starting to like using Facebook for richer engagement, were out of luck. Facebook Connect looks on the surface like it might provide developers with more breathing room. A developer can build a fully fledged application or community site, and take advantage of Facebook Connect, which lets users bring their social network to the site.

But this is a deal with the devil. The problem is that when sites use Facebook Connect, they have minimal connection to their user base. An application or community site wants to set the policies for how the site communicates with its community, and how community members talk to each other. With Facebook Connect, those rules belong to Facebook. What’s worse, the member database is critical for a site to make money through ads, sales, donations, or services. With FB Connect, all your member database are belong to them. Another sign of Facebook’s weakness at supporting external sites is the lack of RSS feeds for public data like Pages. For RSS, Facebook is designed as a black hole: content can be sucked into Facebook, and can’t get out. Facebook’s goal with APIs and integration is self-interested. They want to own the social graph, the user data, and the content; developers are sharecroppers on Facebook’s land.

I can see why a short-lived site might want to use FB Connect as a shortcut. For an established site, the viral aspect of Facebook may make Connect worth a try. But for a site that wants to build community and business value over the long haul, FB Connect is parasitic. Google’s Friend Connect has some less toxic properties – it uses standards for single sign-on, portable contacts, and portable lifestream data. The problem with Friend Connect is that it doesn’t really have a social network behind it. When there is a more open method with good social properties, applications and communities will go there.

Twitter’s unselfish API strategy will enable it to grow its community and provide win/win opportunities for developers. Facebook’s selfish strategy looks on the surface like it will help Facebook’s business success, but it risks running aground on RWW’s exploitation principle – exploit your developers, and they will leave when they get a chance.

Netizen ghosts, or what makes the internet “real”

It reads like a Cory Doctorow satire, but it’s true. Bruce Sterling, the eminent science fiction author, and Jasmina Tesanovic, his wife of four years, received an INS notification of pending deportation for Jasmina. A globetrotting couple who organize most of their lives online, they don’t jointly own a house, didn’t go in for traditional paraphernalia like wedding china, and have separate bank accounts. Where would one find evidence of their lives together? Flickr photos, YouTube videos, a BoingBoing wedding announcement. Bruce needed to make a special Wired Magazine plea for people who know them personally to write the INS before April 15 and testify that they are in fact married. I’ve met Bruce, but don’t know them well enough for that INS form; if you do know them personally, please stop reading this right now, tell the INS that they’re for real, and then come back.

Once the bureaucratic nightmare for Bruce and Jasmina is fixed, what remains interesting is the difference of opinion about what counts as “evidence” and “real.” The INS is still stuck with an old-fashioned definition of evidence, even though courtrooms have been using email as evidence for a while. The US Federal Rules of Civil Procedure were updated in 2006 with detailed guidelines on how to use email and other electronic information in court.

The epistemological conflict doesn’t just pertain to the dusty bureaucrats at the INS. Even Wikipedia has trouble with online sources, as can be seen in this dispute about whether to keep a Wikipedia page on RecentChangesCamp. The event, a regular gathering for a distributed tribe of wiki-keepers, is well documented in blog posts, online photos, a Twitter stream and so on. But what eventually persuaded the Wikipedia editors was an article in Portland, Oregon’s print business paper. The most chilling aspect of the Wikipedia policy is that blogs are not considered notable. In other words, evidence in the endangered Boston Globe counts, and evidence in the prospering and clearly journalistic Talking Points Memo apparently doesn’t. Another problematic piece of Wikipedia’s policy is the requirement for secondary sources. An event like TransparencyCamp or EqualityCamp is documented by numerous attendees, but unless the San Francisco Chronicle sends a reporter, EqualityCamp doesn’t exist. Attacked by curmudgeons as “unreliable”, Wikipedia ironically places excessive credence in offline sources. As more traditional papers go extinct, and more reporting is provided by online media and peer media, what on earth will Wikipedia do to prove that things are real?

The answer, of course, is that stronger norms will develop about what makes internet evidence valid. Of course there are many internet sources that are bogus, just as there are forged documents and lies. But there are also plenty of techniques for evaluating the authenticity and reliability of electronic sources. We use them in a common-sense manner every day when reading email, evaluating blog comments, and rejecting fraudsters and spammers.

Surely, there are other government agencies that have developed guidelines the INS could use to update its policies. If you know of any, here is the contact information for Janet Napolitano’s office at the Department of Homeland Security. Do any Wikipedia community members know of efforts to update the notability policy to accept Talking Points Memo and primary event coverage by numerous blogs and other online sources as evidence of notability?

The Bruce and Jasmina INS jam and the RecentChangesCamp kerfuffle show that policy rules and norms haven’t yet caught up with internet reality.