Geithner-Summers plan and social decay

There’s a nasty hidden cost of the Geithner-Summers plan to buy distressed assets for more than they’re worth. A commenter on the Balloon Juice blog points out that because mortgage assets can be kept on the books at more than they are worth, the owners of foreclosed properties have an incentive not to sell them. “If a mortgage is worth $400K and the house sells for $200K, the Title Holders would have to write down that $200K loss immediately. But, keeping that house abandoned and unsold means they don’t have to write down any losses.” The homes sit vacant, attract vagrants and copper-strippers, and cause neighborhood blight.

The obvious cost of the PPIP is a taxpayer ripoff. The Public-Private Investment Partnership plan from Obama’s financial team of Tim Geithner and Larry Summers has investors take bad assets off banks’ books for more than they’re worth, leveraged by taxpayer dollars. If the assets aren’t worth the inflated prices, taxpayers bear the loss. If the assets go up, taxpayers get only half the profit. The hidden cost is creeping social decay caused by squelching the market in the real houses beneath the mountain of fantasy investments.

To arbitrage this market failure, nonprofits have been creating schemes to house the newly homeless in abandoned properties (the topic that started the Cole thread).

Hashtags for LocalTweeps: Geography is social

A few days ago, the LocalTweeps service reached my Twitter social network. To sign up for LocalTweeps, you tell it your zipcode and it broadcasts your signup on Twitter. LocalTweeps hopes to become a local directory with information organized by zipcode. This could be handy, but it doesn’t yet take advantage of an important aspect of geography, one where the internet has a unique advantage over traditional directories. Geography is social and contextual.

Where am I? The downtown neighborhood of Menlo Park, on the Peninsula, in the Bay Area, in Northern California, and so on zooming outward. We use these different markers depending on context. Neighborhood is important for convenience and neighborhoodly socializing. The Bay Area is big, so the regions are important when considering the travel radius for an event. The relevant geographical category sometimes coincides with a political jurisdiction (e.g. San Mateo County), and sometimes it doesn’t. That’s why it would be cool to be able to use tags, not just zipcodes, to identify events and places. A barbecue at a local park would be tagged with the neighborhood. An event at a venue would be tagged with a local region. Broader organizing would refer to larger regions, e.g. “Central Valley.”

In a medium with limited physical space, it makes sense to use a single criterion like zipcode to categorize locations and events. But on the internet, there’s no reason to limit yourself to one. People can, do, and will select subjective geographical categories based on context.

A couple of years ago, I attended a meeting hosted by the unlamented hyperlocal startup, Backfence. Attendees at the Palo Alto meeting were frustrated because the service would not let them post news in neighboring Menlo Park, even though there are close ties between the towns: people are likely to live in one town and work in the other, and to shop and do cultural things in the next town over.

So the recommendation for LocalTweeps and other internet geography services: free your taxonomy. Let people tag events, and designate them according to what’s socially relevant. The address (and zipcode) will identify where it is on the map. And the tag will identify where it is in people’s cultural context.
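To make the "free your taxonomy" idea concrete, here is a minimal sketch in Python. The event names, zipcodes, and tag labels are invented for illustration; the point is simply that each record keeps a zipcode for the map plus free-form tags for social context, and lookups can go by either.

```python
# Minimal sketch: events keep a zipcode for the map, plus
# free-form geographic tags for cultural context.
events = [
    {"name": "Park barbecue", "zip": "94025", "tags": ["downtown-menlo-park"]},
    {"name": "Tech meetup", "zip": "94301", "tags": ["peninsula", "bay-area"]},
    {"name": "Organizing call", "zip": "95814", "tags": ["central-valley"]},
]

def events_tagged(tag):
    """Return event names whose tags include the given geographic label."""
    return [e["name"] for e in events if tag in e["tags"]]

# Lookup by socially relevant region, independent of zipcode boundaries.
print(events_tagged("peninsula"))
```

Nothing about this requires a fixed hierarchy; a tag like "peninsula" can coexist with "bay-area" on the same event, which is exactly what a zipcode-only directory can't express.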

Twitter, Facebook, and the unselfish API

In his ReadWriteWeb piece about the reverse network effect, Bernard Lunn writes that one of the ways social networking services can wear out their welcome is by making their user base feel exploited. Intrusive ads, aggressive marketing, or onerous terms of service can create dissatisfaction and eventual exodus. The RWW article has the end user base of the service in mind, but I suspect the same dynamic pertains to the developer community. With that lens, it’s interesting to consider the very different ways that Twitter and Facebook handle APIs and integration.

Twitter’s API is unselfish. Using the straightforward REST API, developers can and do write clients, search tools, mapping tools, recommendation tools, analytics, personal organizing – a wide range of extensions. Twitter doesn’t do anything to constrain developers other than a rate limit. The lightest-weight sort of integration is RSS, and Twitter generates RSS feeds for queries and streams, making it trivially easy to disseminate data. The availability of applications helps build the Twitter user base because the applications make Twitter more useful. Twitter’s business model is up in the air; but whether it moves toward paid accounts for power users, corporate users, or advertising, there will continue to be plenty of room for complementary apps.
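The RSS point is worth underlining: a consumer needs nothing beyond a stock XML parser. A sketch in Python, using a made-up two-item feed string in place of fetching a live feed URL over HTTP:

```python
import xml.etree.ElementTree as ET

# Stand-in for the RSS a service like Twitter generates for a query or
# stream; a real consumer would fetch this over HTTP instead.
sample_rss = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>sample stream</title>
  <item><title>first update</title><link>http://example.com/1</link></item>
  <item><title>second update</title><link>http://example.com/2</link></item>
</channel></rss>"""

root = ET.fromstring(sample_rss)
# Pull the text of each item's title -- the whole "integration".
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)
```

That's the entire barrier to entry for republishing or remixing the data, which is why RSS availability matters so much for an ecosystem.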

Facebook’s API is built to serve Facebook more than developers. The original API constrained developers to exposing a limited user interface within Facebook’s strict design. The functionality encouraged the creation of apps that expanded the Facebook user base because users were encouraged to spam their friends. Given the limits of Facebook, applications tended to be shallow. The most successful app developers needed to relentlessly focus on novelty because users would get bored with yesterday’s toy. Still, application developers put up with the limits because Facebook gave them access to oodles of users.

Then, with the move toward the Twitter-style user interface and the strategic shift toward Facebook Connect, Facebook hid and de-emphasized apps in the user interface. App providers, and users who were starting to like using Facebook for richer engagement, were out of luck. Facebook Connect looks on the surface like it might provide developers with more breathing room. A developer can build a fully fledged application or community site, and take advantage of Facebook Connect, which lets users bring their social network to the site.

But this is a deal with the devil. The problem is that when sites use Facebook Connect, they have minimal connection to their user base. An application or community site wants to set the policies whereby the site communicates with the community, and the community talks to each other. With Facebook Connect, those rules belong to Facebook. What’s worse, the member database is critical for a site to make money through ads, sales, donations, or services. With FB Connect, all your member database are belong to them. Another sign of Facebook’s weakness at supporting external sites can be seen in the lack of RSS feeds for public data like Pages. Facebook treats RSS as a black hole: content can be sucked into Facebook, but can’t get out. Facebook’s goal with APIs and integration is self-interested. They want to own the social graph, the user data, and the content; developers are sharecroppers on Facebook’s land.

I can see why a short-lived temporary site might want to use FB Connect as a shortcut. For an established site, the viral aspect of Facebook may make Connect worth a try. But for a site that wants to build community and business value over the long haul, FB Connect is parasitic. Google’s Friend Connect has some less toxic properties – they are using standards for single signon, portable contacts, and portable lifestream data. The problem with Friend Connect is that Google doesn’t really have a social network. When there is a more open method with good social properties, applications and communities will go there.

Twitter’s unselfish API strategy will enable it to grow its community and provide win/win opportunities for developers. Facebook’s selfish strategy looks on the surface like it will help Facebook’s business success, but it risks running aground on RWW’s exploitation principle – exploit your developers and they will leave when they get a chance.

Netizen ghosts, or what makes the internet “real”

It reads like a Cory Doctorow satire, but it’s true. Bruce Sterling, the eminent science fiction author, and his wife of four years, Jasmina Tesanovic, received an INS notification of pending deportation for Jasmina. A globetrotting couple who organize most of their lives online, they don’t jointly own a house, didn’t go for traditional paraphernalia like wedding china, and have separate bank accounts. Where would one find evidence of their lives together? Flickr photos, YouTube videos, a BoingBoing wedding announcement. Bruce needed to make a special Wired Magazine plea for people who know them personally to write the INS before April 15 and testify that they are in fact married. I’ve met Bruce, but don’t know them well enough for that INS form; if you do know them personally, please stop reading this right now, tell the INS that they’re for real, and then come back.

Beyond the bureaucratic nightmare for Bruce and Jasmina, what’s interesting is the difference of opinion about what counts as “evidence” and “real.” The INS is still stuck with an old-fashioned definition of evidence, even though courtrooms have been using email as evidence for a while. The US Federal Rules of Civil Procedure were updated in 2006 with detailed guidelines on how to use email and other electronic information in court.

The epistemological conflict doesn’t just pertain to the dusty bureaucrats at INS. Even Wikipedia has trouble with online sources, as can be seen in this dispute about whether to keep a Wikipedia page on RecentChangesCamp. The event, a regular gathering for a distributed tribe of wiki-keepers, is well-documented in blog posts, online photos, a Twitter stream and so on. But what eventually persuaded the Wikipedia editors was an article in the Portland, Oregon newsprint business paper. The most chilling aspect of the Wikipedia policy is that blogs are not considered notable. In other words, evidence in the endangered Boston Globe counts, and evidence in the prospering and clearly journalistic Talking Points Memo apparently doesn’t. Another problematic piece of Wikipedia’s policy is the requirement for secondary sources. An event like TransparencyCamp or EqualityCamp is documented by numerous attendees. But unless the San Francisco Chronicle sends a reporter, EqualityCamp doesn’t exist. Attacked by curmudgeons as “unreliable”, Wikipedia ironically places excessive credence in offline sources. As more traditional papers go extinct, and more reporting is provided by online media and peer media, what on earth will Wikipedia do to prove that things are real?

The answer, of course, is that stronger norms will develop about what makes internet evidence valid. Of course there are many internet sources that are bogus, just as there are forged documents and lies. But there are also plenty of techniques for evaluating the authenticity and reliability of electronic sources. We use them in a common-sense manner every day when reading email, evaluating blog comments, and rejecting the fraudsters and spammers.

Surely, there are other government agencies that have developed guidelines that the INS could use to update its policies. If you know of any, here is the contact information for Janet Napolitano’s office at the Department of Homeland Security. Do any Wikipedia community members know of efforts to update the notability policy to take Talking Points Memo and primary event coverage by numerous blogs and other online sources as evidence of notability?

The Bruce and Jasmina INS jam and the RecentChangesCamp kerfuffle show that policy rules and norms haven’t yet caught up with internet reality.

Drive Less Challenge

When I was a kid, I loved cycling over the hill to buy milk at the supermarket and bring it back in a basket. When I read Jane Jacobs in college, it articulated what I had felt as a kid about the value of neighborhoods scaled for people, where you can stroll and chat with your neighbors, with “third places” where people recognize each other. So I sought out that experience. When I lived in Boston, I loved living walking distance from the supermarket, coffeeshops, hardware store and gym.

In recent years, as information about global warming and limits to the oil supply has become mainstream, the ability to organize everyday life for less driving has become not just a preference, but a necessity to bring energy use to levels that can be sustained. When I moved to California, I deliberately sought somewhere to live that was close to daily errands and the train, where I didn’t need to car-commute to work. Then, I challenged myself. What would it take to drive less? Slowly, I built up a repertoire of skills. I got bike baskets and can use a bike for most errands. I learned how to take a bicycle onto Caltrain, for practical access to many places in San Francisco and the Peninsula. I got better gear for biking in the rain (but still choose to drive when it’s pouring out).

I joined the Menlo Park Green Ribbon Citizens’ Committee to think globally and act locally. In California, driving is the biggest source of greenhouse gas emissions. So the biggest opportunity for transformation is to drive less. Now, there are some things that just aren’t practical to do without a car. Getting from Menlo Park to the East Bay. Buying furniture or appliances. But there are plenty of trips that are practical and good without a car. It just takes a little bit of learning and incentive to get over the hump and do it.

So I’m putting together the Drive Less Challenge. This is an opportunity to use some neighborhood positive social pressure to help people get over the inertia of daily life and take a few practical actions to do less driving alone. The challenge starts on Earth Day, April 22, and runs for a week. We’re working with local businesses, schools, and neighborhood groups to get the word out. The scale is Menlo Park this year, to make it easy to manage with an all-volunteer team. (If you’re not in Menlo Park you can still participate; your prizes will be recognition and the knowledge that you’re taking a step toward sustainability.) There are plenty of systemic changes that would make it easier to drive less, but most people have “low hanging fruit” opportunities to make small tweaks in daily life that would add up to meaningful change right now. It’s time to challenge ourselves and challenge our neighbors.

I’m coordinating the project with an awesome team of Menlo Park volunteers, on a minimal budget and weekend and evening time. I’m still doing some final tweaks on the “gameplay” and we’re busy getting the word out. If you’re interested and have questions and suggestions, drop a note in the comments or hail me as alevin on Twitter.

How asymmetry scales

Bokardo predicts that Facebook will go asymmetric. He calls out two key reasons why: asymmetric networks are a good fit for anyone with micro-fame, not just organizations, brands, and bands; and asymmetric networks help people manage their attention – you don’t need to pay attention to every update from everyone following you.

There are a couple of other key reasons why asymmetric networks scale better. In Twitter there are a number of ways where asymmetry in a public network provides good returns to scale, as noted in yesterday’s post on premature predictions of peak Twitter:
* Retweets get you information that was first posted by someone outside your network
* Searches let you find information outside your network
* Visible replies, like the lovely feature in TweetDeck that shows when someone mentions you even if you’re not following them, allow you to hail and engage people in conversation, and have others start conversations with you, even if you’re not following.

These features mean that the more people who join the network, the more interesting information will be amplified through it, and the more potentially interesting people you may discover. The level of context is fairly high – you can see what someone else has been Twittering, and see if they are interesting and relevant to you. And the level of obligation is low (you can follow someone without giving them the burden of accepting or rejecting you).

In Facebook, I can see when someone that I don’t know has commented on the update of someone I do know, but then I need to friend a stranger in order to learn more about them. Facebook’s mostly-symmetrical, mostly closed network makes it hard to learn new things and meet new people outside your existing network.

So, the reasons for asymmetry aren’t just about supporting fame, but enabling discovery with low social expense.

Peak Twitter?

There are several arguments going around predicting Peak Twitter. The discussion raises a number of interesting questions about social media and scale.

In Twitter is peaking, Steve Rubel describes the risks to Twitter as social trendiness and increasing messiness.

Too popular. Social networks seem to have a property in common with nightclubs, bars, and restaurants – they are popular for a while. Then the throng moves on. The digerati were on Orkut for a few minutes, before moving on to Facebook and Twitter. Popularity depends on community – Facebook and MySpace are bigger in the US, Bebo is big in Europe, Orkut is big in Latin America.

Rubel hypothesizes that the trend pattern is similar to other pop culture trends, where hipsters create a trend, and then flee when the mainstream arrives. Rubel writes, “Just six months ago, the list of the top 100 users on Twitter read like a who’s who of geeks. That’s what made it a draw, for many, initially. Now, however, the list looks like People or US Magazine. Twitter is losing its geek cred as celebs flock to the service.” The difference is, a social network is a great many places, not one; the network is inhabited by millions of overlapping subcultures. Honestly, I haven’t heard of many of the pop culture celebrities who have recently joined Twitter, and the ones I’ve heard of, I don’t follow. I do follow some of my personal heroes, but they aren’t pop culture icons.

The argument that People magazine starlets and NBA players will crowd out niche communities is the same mass media vision that predicted a handful of pop-culture-centered websites would crowd out the rest of the web. There are 270 million people on Facebook, which is a great many more than, say, the 15 million people who visit Disney every year, and their subculture-centric Facebook experiences are different than the mass-produced Disney experiences.

Too big. The second argument is scale and disorganization. “Since replies are not threaded, celebs and corporations do not feel they have to respond to every Tweet.” This is a real challenge. Rubel rightly recognizes that tools are evolving to address the challenge. What’s missing is that personal needs are very different from organizational needs.

For personal use, the fact that Twitter is a flow is part of the charm. A twitter feed doesn’t carry the same perceived social obligation to keep up and respond as email or instant message. You can dip into the stream, step out, and come back later. For personal use, people need some better tools to manage their attention. Tweetdeck, which Rubel calls out as a good example, adds groups, search, and embryonic filtering into the basic experience.

The needs of non-celebrity individuals are different from the needs of corporations, politicians, and famous people. If your constituency has thousands to millions of people, you need very different tools to monitor the conversation than if you are following fifty or a hundred people. If you’re an individual, and you miss an update from a friend or an interesting news link, no big deal. If you are striving to use Twitter for constituent listening and feedback, you want to notice complaints, suggestions, and kudos. You probably want to have multiple people listening to the account, listening for different products or topics, and working on responses.

Dunbar limit. In ReadWriteWeb, Bernard Lunn makes the opposite point, that size doesn’t matter. “In a social network, the value for existing users of a new user joining the network plateaus once users have most of their own contacts in that network.” For mostly closed, symmetrical networks such as Facebook and LinkedIn, this is true. For mostly open, asymmetrical networks such as Twitter, this is mostly false, which Lunn mentions briefly. I suspect that people will cap their participation at some augmented Dunbar limit of the number of people they can follow with social attention and time. But in Twitter, retweets, searches, and visible replies mean that the more people who join the network, the more interesting information will be amplified through it, and the more potentially interesting people you may discover. When you have your existing contacts on the network, it is easy to make new contacts if you wish. The level of context is fairly high – you can see what someone else has been Twittering, and see if they are interesting and relevant to you. And the level of obligation is low (you can follow someone without giving them the burden of accepting or rejecting you).

Exploitation. In the ReadWriteWeb post, Lunn makes the insightful point that social networks can fail when their hosts start to violate the implied social contract with their communities in the interest of making money from their investments. “If these businesses get too eager to monetize to justify those valuations, they may create the reverse network effect.” When they move to monetize, hosts may move toward intrusive advertising, marketing, privacy violations, or other steps that benefit the site’s commercial interest and go against the interests of the users. I see the potential risks even more broadly than Lunn does. Intellectual property terms of service, and increased control over content and customization can violate the perceived community social contract as much as intrusive ads and marketing can. There is some inertia to switching, but in the absence of monopoly, annoyed communities do pick up and go with some regularity.

Parasitism. In Mourning the loss of Twitter, Ross Mayfield predicts that Twitter will fall prey to the spam and other antisocial behavior that crippled Usenet and email. Hopefully the Twitter ecosystem will evolve to meet the threats, and blacklist and social filtering tools will keep the parasites from killing the host.

Twitter is a fascinating experiment since the social scale dynamics of an asymmetrical, open network aren’t known. I suspect that the ecosystem will evolve social and topic filtering tools that will help it scale; time will tell. The platform strategy is helping already – third parties are building tools to search, manage, and respond to the twitter stream. And I hope that the Twitter management retains a good sense of environmental judgement and finds ways to make money that don’t feel exploitive to the community.

Database journalism – a different definition of “news” and “reader”

Politifact is an innovative journalism project built by Matt Waite, as a project of the St. Petersburg Times, inspired by Adrian Holovaty’s 2006 manifesto on “database journalism”. Waite and Holovaty both focus on the “shape” of the information presented by database journalism – stories that have a consistent set of data elements that can be gathered, presented, sliced, and re-used. This structure is foreign to traditional journalism which thinks of its form as the story, with title, date, byline, lede, body.

The Politifact site started by fact-checking politicians’ statements during the 2008 political campaign. Each statement is rated on a five-point scale: True, Mostly True, Half True, Barely True, or False. Today, the most compelling piece on the site is the “Obamameter” tracking the performance of the president against over 500 campaign promises. Examples include: No. 513: Reverse restrictions on stem cell research – Promise Kept; No. 464: Reduce energy consumption in federal buildings – In the Works; and No. 446: Enact windfall profits tax for oil companies – Stalled.
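The "shape" Waite and Holovaty describe can be sketched as records with a consistent set of fields. Here's a minimal sketch in Python; the field names are my own guess at a bare-bones schema, not Politifact's actual data model, and the records echo the examples above.

```python
from collections import Counter

# Hypothetical minimal schema for a tracked promise: a number, a text,
# and a status drawn from a small controlled vocabulary.
promises = [
    {"no": 513, "text": "Reverse restrictions on stem cell research",
     "status": "Promise Kept"},
    {"no": 464, "text": "Reduce energy consumption in federal buildings",
     "status": "In the Works"},
    {"no": 446, "text": "Enact windfall profits tax for oil companies",
     "status": "Stalled"},
]

# Because every record shares the same fields, the collection can be
# sliced and summarized in ways a free-form story cannot.
by_status = Counter(p["status"] for p in promises)
print(by_status)
```

A story has to be read; a record set like this can be counted, filtered by status, and re-presented as a scoreboard, which is what the Obamameter does.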

The shape of the data is part of the picture. It’s certainly the biggest day-to-day difference if you’re composing news or tools for news. But I don’t think it’s the lede. What’s different here is a different conception of what’s “new” and what a “reader” does.

What’s new. Traditional journalism is based on a “man bites dog” algorithm. What’s newsworthy is the dramatic reversal of expectations. Slow, gradual changes are not newsworthy. Large static patterns are not newsworthy. I suspect that this is part genre and part technology. The technology limitation is space: there isn’t room to publish many stats in a newsprint paper, and minimal affordances for navigation.

The emphasis on concise and dramatic “news” leaves our society vulnerable to “frogboiling”, the urban legend in which the frog in gradually heated water gets accustomed to the change, doesn’t jump out, and boils to death. The decline of the North Atlantic cod fishery or the Sacramento-San Joaquin delta are not newsworthy until the cod and the salmon are gone. Wage stagnation isn’t newsworthy until the middle class is gone. Tens of thousands dead on US highways each year isn’t newsworthy, though a traffic jam caused by a fatal accident is news. Many eyes hunting through financial data may find dramatic scandals, to be sure. With database journalism, perilous or hopeful trends and conditions can become worthy of storytelling and comment.

What a reader does. The rise of the internet has made reader participation a much greater part of news than the limited “letter to the editor” section. Dan Gillmor, former editor of the “ur-blog” Good Morning Silicon Valley, liked to say “my readers are smarter than me” because of the high-quality corrections and tips he’d get from his readers. Database journalism takes the trend a few steps further. Where a traditional news reader consumes the news, a database user interacts with it, looking for information and patterns. The “news” itself may be found by readers doing queries and analysis of the database, such as the database of Prop 8 contributions published by the San Francisco Chronicle.

So database journalism isn’t just about having some fields that are different from “title” and “body”. It’s about different conceptions of time, space, and participation.

Facebook as a Twitter Wannabee

The new Facebook UI has become a stream of Twitter-like updates. The pattern builds on the addictive conversational nature of Twitter, but cripples some of the key ways that Facebook was different from Twitter. What made Facebook better than the earlier generations of YASNs is that it not only let you declare your friends but also do things with your friends – share applications with them, share events, create groups, organize. The new Facebook hides the affordances for apps, events, and groups.

By hiding the affordances for application functionality, are they making a really big bet on Facebook Connect? Are they hoping that third-party services with an independent web presence will integrate into the stream by delegating their member database to Facebook? This could be. The weakness of this strategy is that third-party services have no loyalty to Facebook and would just as well use some other technology. People just want to do things with their friends, with the least barrier to getting started.

Also, Facebook has FriendFeed-like discussion around assets, which is nice. The threaded comment UI is intuitive. It’s very helpful when you’re actually talking about an asset like a bookmark. But it lacks the transparency, discovery, and immediacy of Twitter conversations. With Twitter conversation, you can see someone replying to someone else, and find interesting new people. With Twitter conversation plus search, you can see someone asking a question and then follow the answers.

Also, Twitter conversation is present-focused in a good way. Facebook conversations are anchored to the original remark that happened to start the conversation. So if someone said something interesting four hours ago, you have to scroll back to find it. Which you probably won’t. With Twitter, if the conversation is ongoing, you’ll still hear it.

In summary: the Twitter mode for Facebook does give it some of the addictive quality of Twitter but in imitating Twitter, Facebook has sacrificed too much of what makes Facebook valuable. And in attempting to imitate Twitter, Facebook has missed some of the social dynamics that make Twitter good.

Really social bookmarking

I’d love to see a “really social bookmarking” service. Delicious has lots of bookmarks, and shows which bookmarks are popular, but it’s hard to figure out who people are because most people use pseudonyms. Magnolia (RIP) had much more social presence but was small. Twitter is timely and social but amnesic. Friendfeed has people and multiple services, but you can’t navigate it by content type (links) and topic (tags).

I’d love to see bookmarks through a multi-service friend of a friend network, browsable by topic, prioritized by number of links. That would be a great way to find classic information and good curators.
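The prioritization I have in mind could work roughly like this sketch in Python. Everything here is invented for illustration: it assumes bookmark data has already been merged from multiple services into per-person link lists, and the names, links, and friend graph are made up.

```python
from collections import Counter

# Hypothetical merged bookmark data: person -> links they saved,
# pulled from whatever services they each use.
bookmarks = {
    "alice": ["http://example.com/classic", "http://example.com/new"],
    "bob":   ["http://example.com/classic"],
    "carol": ["http://example.com/classic", "http://example.com/niche"],
}
friends_of = {"me": ["alice", "bob"], "alice": ["carol"]}

def network(person, depth=2):
    """Collect the friend-of-a-friend network out to the given depth."""
    seen, frontier = set(), {person}
    for _ in range(depth):
        frontier = {f for p in frontier for f in friends_of.get(p, [])} - seen
        seen |= frontier
    return seen

# Rank links by how many people in the network saved them: the links
# most of your extended network agrees on float to the top.
counts = Counter(link for p in network("me")
                 for link in bookmarks.get(p, []))
print(counts.most_common(3))
```

The count-by-network ranking is what surfaces "classic" information: a link saved by three people across your friend-of-a-friend graph outranks something only one friend found, and the people who saved it are the candidate curators.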