The Kindle debacle, DRM and SaaS

A blogger who’s a library professional gives more wonderful examples of subscribers losing access to digital content. But these examples conflate the issues of DRM with those of “software as a service” contracts.

  • Due to an oversight, a bill for an e-book service was paid one day after the due date. As a result, access to about 1000 titles was denied for the entire calendar month.
  • The Library subscribed to an e-journal for a few years, then cancelled the subscription. The publisher removed access to the entire journal; the Library could no longer access even the volumes that it had paid for.
  • An e-book publisher went out of business; the Library lost access to hundreds of titles at once.
  • Sometimes, technical or connection problems occur that make hundreds of titles (they are usually bought in packages) temporarily unavailable.

However, these examples conflate several related issues with digital content – DRM and the software-as-a-service model. Part of the problem with the Amazon 1984 incident is DRM and the associated metadata. When I purchase mp3s from Amazon, I can back them up and play them in different players. If the file is a generic file without source metadata or locking capability, Amazon couldn’t take the files back if it tried.

This issue is related to, but different from, purchasing content as a service. A better analogy to the library’s digital subscriptions is the “web songs” from Lala. The online music service offers music in three formats: CDs, mp3s, and “web songs”, which are available for 10 cents and can be streamed only. I expect that web songs will not outlast Lala’s corporate lifespan, and might go away at any moment. Digital subscriptions are more like web songs. They are perpetually dependent on the existence of the provider and the terms of service.

The final issue is contract terms. There is a long tradition of contracts that give you temporary access to things you don’t own. This is called “renting” or “leasing”. The library contracts are clearly rental contracts, where the agreement is that service will continue as long as the library pays its bill. Rental contracts, of course, can apply to physical objects. You can lease a car, and the leasing company expects the car back when the lease is done, or if you stop paying your bill. If the lease terms are “in perpetuity”, this practically means “for the lifetime of the provider”. And even perpetual lease terms typically allow the provider to change the terms, too.

With “content as a service”, buyers need to be especially careful when the service holds content they themselves contributed. When I upload content to a photo sharing service, for example, I explicitly want the right to get my content back at any time.

The issues of DRM and SaaS go together, in that it’s easier with DRM to turn contracts that seem like purchase at first glance into conditional rentals. This is what Amazon appears to have done with the Kindle. When a contract is explicitly a rental contract, the subscriber should expect to be tied to the lifetime and the changing discretion of the provider. When a contract is for downloaded, non-DRM content, it’s at least possible to create a traditional agreement of sale.

When I’ll get a Kindle

The current Kindle debacle, in which Amazon deleted copies of George Orwell’s 1984 from the devices of people who thought they bought the book, highlights one of the three main reasons that I haven’t bought a Kindle yet.

DRM is first. The 1984 scenario shows in dramatic fashion that when you “buy” a DRM’d product, you don’t own it. DRM fragments the bundle of rights you have when you buy a paper book – you can lend it, sell it, read it out loud, take it wherever you go. DRM enables the provider to set the terms, for example restricting the use of an audio feature that reads the book out loud. DRM defeats one of the main reasons that I buy books to begin with – the ability to share books with friends. The 1984 example shows the limitation clearly. In a DRM world, users do not have rights to stuff that historical experience leads them to mistakenly think they own. I’m not at all kidding about staying away from DRM; I avoided digital music until the industry walked away from DRM as the norm.

The second reason is social. If I’m getting books online, I want to be able to choose to share them. The internet makes it possible to create wonderful social applications for reading books together, commenting, annotating, creating clubs and discussion groups, discovering other people’s collections. LibraryThing goes a little way in that direction. The Kindle experience is isolated – it’s even less social than physical books, which you can at least lend to a friend. It’s less social than the bookshelves that disclose the history of your reading interests to your friends. After (and only after) the DRM is gone, good social capabilities – paired with a service for sharing – would make a Kindle-like device much more compelling.

The third reason is inventory. One of the big benefits of digital music is that publishers have digitized a large portion of their back catalogs. This means that one can search for and acquire a wide variety of music, ranging from the hyper-popular to the moderately obscure. Almost everything I want to listen to, with a small number of exceptions, is available digitally. This isn’t the case for books. Kindle inventory is clustered at two ends of the spectrum. New popular books are all on Kindle. Old, classic, public domain works are on Kindle. But there’s a large number of moderately obscure, somewhat older books that aren’t. And this sort of book represents a good portion of the books I buy. The last two books I bought: a biography of an author, published in the early ’90s. A music tutorial (thanks Tracy for the recommendation). Turning around to look at my bookcase – “Merchants of Desire”, a superb history of retail and mass merchandising, published in ’94. More Work for Mother, Ruth Schwartz Cowan’s brilliant classic on the history of household technology, published in the mid-’80s. Not on Kindle. Until the large majority of books I want to buy are on Kindle, it’s not so helpful for me.

So, can I see having a Kindle-like device? Eventually. After DRM is gone – and I believe it eventually will be, just as it’s gone for music. When the social experience is better than reading printed books. And when the majority of books I want to read, including a couple of decades of back catalog, are online. These things will happen eventually, and I can wait.

Update. A blogger who’s a library professional gives wonderful examples of losing access to digital content. But these examples conflate the issues with DRM and with Software as a Service content. I unpack these issues in a separate post.

Update 2. The same reasoning applies to when I’ll get an iPad. The iPad will have a greater variety of content. But I’m not much of a gamer, and while it seems like a nice platform for graphic novels, it’s an expensive investment for reading comic books. I’m not as opposed to consumption devices as, say, Cory Doctorow – I’ll get one when DRM is gone and when the experience can be social.

Music critic curmudgeon tells blogs & twitter to get off his lawn

The familiar complaints of old media curmudgeons bemoaning the rise of the unwashed, pajama-clad blogger tribe have now reached the rarefied domain of music criticism, with a much-forwarded, entertaining rant about how blogs and Twitter are ruining music.

Christopher Weingarten, a critic at the Village Voice and other publications, runs through every curmudgeonly cliche in the book, raising arguments that Jay Rosen and other internet-age thinkers have swatted down for a decade: bloggers in pajamas, the echo chamber, 140-character essays, nostalgia for savviness, all of it. Critiquing Weingarten’s arguments is like shooting fish in a barrel (in the words of some original internet ranters). I kind of hate to contribute to the negative energy, but Weingarten’s rant is getting an undue level of cheering given its retro content. So here goes.

Bloggers in pajamas
Weingarten’s first complaint is that swarms of bloggers came from nowhere to do for free, and with less quality, what music critics used to do for money. This is the “bloggers in pajamas” argument: thousands of people posting rumors and blather on the internet from their parents’ basements. Sure, the internet enables people to post junk, but it has also provided a platform for new projects and voices – Josh Marshall’s Talking Points Memo, Marcy Wheeler, a superb investigative analyst who blogs at Firedoglake, strong local voices such as West Seattle Blog, and more. The fact that it’s easy to publish doesn’t negate or prevent powerful new voices from arising.

The echo chamber
One of the early critiques of the blogosphere was that the internet would give rise to an echo chamber, where people would listen only to the voices that reinforced their preconceptions. There’s a similar concern that on the internet, people self-segregate into groups for hiphop, reggaeton, or viking metal, and then don’t cross the boundaries. The thing is, that hasn’t turned out to be true with respect to news and politics. A Pew Internet and American Life study in 2004 found that “Wired Americans are more aware than non-internet users of all kinds of arguments, even those that challenge their preferred candidates and issue positions.”

With online music, my personal experience is that the social network helps extend preferences as well as reinforce them. Plus, I don’t see why it’s a bad thing to go to a reggaeton expert for reviews of reggaeton music. It is delightful to search the internet and find people who know about the topic they are discussing. All too often, general-purpose mainstream critics write reviews of musicians and types of music that they don’t know well and/or don’t like much.

Fans are fans!
A more interesting critique is that people who aren’t professional critics write like fans. In music blogs, “You can find out about new bands without cranky snarky stuff.” The jaded tone of the professional critic is a music-world analog to the news journalism “church of the savvy” as described by Jay Rosen. In an attempt to be “objective”, news journalists adopt a savvy, cynical attitude that can keep them from seeing the real story – for example, when “horse race” coverage predominates over actually covering the differing records and policies of politicians. Internet-style journalists don’t pretend to be dispassionate and free of opinion. They disclose their beliefs and desires, and are more credible for it.

Now, simple-minded music fandom is not very interesting. Look at YouTube or shoutbox comments and you can see fans saying unedifying things like “awesome song!” and “best solo evar!”. Educated fandom, on the other hand, involves discussing the sound, emotion, influences, and performances – from the perspective of someone who continues to be excited and moved by the music. It’s interesting that when musicians talk about their heroes, mentors, and who they’re listening to, they sound like fans, not like jaded critics.

Weingarten’s allegation that there has been a loss of venues to explain *why* a piece of music is good or bad is nonsense – “google: band review” will often find informed and insightful reviews and opinions about pretty obscure acts. What is actually missing is better tools and venues for fans to have intelligent discussion. Currently, the intelligent discussion seems to be fragmented across harder-to-find online forums.

Loss of elite status
Music criticism was dominated by a handful of elite voices back when you needed an expensive printing press or radio license or TV channel to get the word out, just as opinion columnists like Tom Friedman and David Brooks used to have more exclusive status. These days, there’s no longer an exclusive club of arbiters. I understand why Weingarten cares that his elite status is devalued, but not why anyone else should care. There was also nostalgia when the rise of printing enabled members of the hoi polloi to read and write. From the view of history, there is very little sorrow for the monks’ monopoly.

Crowd sourcing killed punk rock
The reason to lament the loss of the elite, says Weingarten, is that “people have awful taste.” If opinions about music are left up to people who aren’t professional critics, then the only thing left will be mediocrity. The thing is, the internet isn’t just “people”; it’s a ton of individuals with widely varying tastes, backgrounds, and expressive skill. The beauty is that on the internet, you are not forced to pay attention to people you think are mediocre or dull. On Twitter you choose whom to follow. You choose which blogs to read, based on your evaluation of their taste. Unlike the mass media world, you’re not stuck with a handful of magazines and radio stations.

Not only that, the argument he makes applies even more strongly to the mass market, hit-based model that’s being replaced. “All this music that rises to the middle – boring, bland white people with guitars.” Remember the good old days of Clear Channel radio? You couldn’t possibly get any more bland than that. It was the mass market model that drove extreme homogenization of music, and it’s the “long tail” on the internet that is facilitating the recovery of music with audiences smaller than mega-popular.

Down with Guitars!
To prove his point about the value of being jaded and opinionated, Weingarten makes a point of trashing “guitar bands”. Now, I have to admit that I’ve never been particularly fashionable. Clearly I missed the memo to purge guitars from my iTunes, and I can’t say I regret it. This probably puts me into one of the many categories of listeners that he disdains. (To be slightly less snarky: there is plenty of boring music with guitars, synthesizers, fiddles, horns, you name the instrument used in popular music. Picking on an instrument as the epitome of dull seems philistine to me.)

Shakespeare in 140 characters
If you can’t beat them, join them – Weingarten is taking his music criticism to Twitter. There, Weingarten subscribes to the absurd fallacy that writers now need to compress their writing into 140-character chunks. Following this fallacy, Weingarten is spending this year writing 1000 reviews of albums on Twitter in 140 characters or less. Social media savvy folk know that Twitter is the new headline — when you have something extended to say, you don’t write 100 tweets, you write an essay and post a link to it from Twitter.

Compressing his reviews to 140 characters limits Weingarten to the tone of savviness and snark that bedevils the critic tribe. Recent examples of snark:
473: Major Lazer/Guns Don’t Kill People… Lazers Do: Bug-style dancehall dumbed down for people that wear scarves in the summer.#4
472: Cheap Trick/The Latest: There’s more to power-pop than just hooks.#3

Let’s say out of those 1000 recordings he likes 50. I’d much rather he write longer posts on the 50 and link to them. Unless there’s some really interesting reason he doesn’t like something, I don’t want to read it.

Discovery and aggregation
So, what to do now that new bands are being discovered by people on blogs and Twitter? One of the roles that critics can continue to play is to aggregate information discovered around the web. This, too, is displeasing to Weingarten, who looks back fondly on the time when critics helped spot bands.

Web-savvy journalists from Dan Gillmor to Josh Marshall and others take happy advantage of the state of affairs where, in Gillmor’s words, “My readers are smarter than I am”. They realize that their readers include people with information and expertise, and they rely on their broad community for tips and fact-checks. If Weingarten respected his audience more, he might be happier about picking up information from readers.

Weingarten’s rant applies to music criticism the full range of fallacious, self-interested arguments made by old media journalists lamenting the decline of their once-privileged position. The arguments are even inconsistent — the internet is somehow leading to bland homogenization and narrow specialization at the same time; critics on the internet supposedly don’t bother to explain “why”, and yet his response is 140-character reviews.

There are real challenges and opportunities in the new world of social-media-influenced music. I don’t see Christopher Weingarten articulating compelling problem definitions or solutions. In a world where everyone is trying to understand and adapt to new conditions, I don’t want to be too hard on Weingarten. It would be easier to be more generous if his rant didn’t take aim at the listening public and many of its subcultures. Attacking fans instead of adapting only deepened the music industry’s woes. In music distribution, initiatives like Trent Reznor’s outreach to fans are working a lot better than strategies that attack fans. Hopefully, as more people engage and innovate, we’ll see the music commentary equivalent of this superb presentation by Michael Masnick on Trent Reznor’s innovations in music distribution.

Updated last paragraph to sound less hard on Weingarten and harsher on fan-bashing.

Social and conceptual models for Google Wave

Over the last decade, wikis, blogs, social networks, social messaging, social sharing apps, Google Docs, and other tools have been providing lighter weight, faster vehicles for collaboration and communication than the old lumbering battleships, office documents and email. Now Google’s Wave is a depth charge aimed at the battleships. Google Wave is based on a powerful technical concept, using a realtime chat protocol and stream model as the foundation for communication and collaboration applications. For these reasons, Google deserves a lot of credit for pushing innovation, rather than simply cloning the old models using servers in different closets.

Fundamentally, Google Wave is technology-driven innovation. And Google Wave raises some pretty large questions about the cognitive and social models that people will need to understand and use Wave-based tools.

Conceptual model

The first big set of questions relate to the conceptual model. Wave attempts to mash up email threads, documents, and streaming communication. Each of these is familiar and not that hard to understand. The combination seems a bit mind-bending.

Email and forums are clunky in many ways, but they mirror conversational exchanges in an understandable way. Albert says something, and Betty replies. However, when replies are interspersed between paragraphs, and the conversation digresses, it can get difficult to follow. Wave uses a collaborative document-like model to make the changes visible in real time. This is cool and clever. It also needs a rich combination of social conventions and features to keep from becoming completely incomprehensible. Communities using wikis rely on rich social conventions and gardening tools to dispense with the need for inflexible pre-defined workflows. Wave is a toolset with even more flexibility than a wiki, and with even more interactive content. This poses even greater challenges in helping people understand how to use it and be productive.

The model of time has perhaps the greatest potential for confusion. In an email or forum thread, the latest contribution appears at the top of the thread. In a document, including a collaboratively edited document, there is a “face” to the document that appears as a working model of a final version. In a chat room, the latest comments appear at the bottom of the screen. In a rich “Wave”, it’s harder to tell which items in the wave are newer, older, more or less definitive, without scrolling through the whole process from the beginning. It is easy to imagine getting seasick.

Another conceptual innovation is “replaying” a wave. In the conventional model, there are known techniques to reflect the current state of understanding. When there are comments interspersed between paragraphs in email/forum threads, it can be difficult for newcomers to get the gist of what has occurred. But there is a time-honored way to bring people up to speed – summarize the conversation to date. The summary has a social purpose, too: it steers the discussion toward a state of current understanding. A document or PowerPoint presentation can look deceptively finished, and close off potentially warranted conversation. A document is an artifact that reflects the end of a collaborative process. But a document can also be summarized and skimmed.

The presenters kvelled, and the audience cheered, when the demonstration showed new participants using “playback” to recap a wave to date. But this seems like the world’s most inefficient way to get up to speed – to understand the end result of a conversation, you need to spend nearly as much time as the initial participants did in getting to that point. A streaming audio/video/screencast presentation, or a realtime chat, can be quite rich, and can be played back, but it isn’t skimmable or summarizable. It’s not clear that introducing that model to summarizable documents and threads is a great thing.

My biggest area of doubt about the Google demo in particular is that in some ways the hybrid combines the worst traits of its parents. Does the result have hybrid vigor or mutant weakness? What mental models are needed to understand this psychedelic blend of realtime, threaded, and document content?

Missing social model

The second set of questions relates to the social model. The Google Wave demo raised a large number of questions about social models for wave-based tools. The demo seemed to use a fairly primitive concept – an individual’s address book that lets that person add a new person to an email thread.

As someone involved in designing social models for tools used by organizations, I see this model as an intuitive way to start, but it does not go very far. First of all, who has the ability to add people to the conversation? Is it everyone, or only the person who created it? Can invitation be delegated? Can a person add himself or herself? Do these permissions vary by wave? What about existing groups and networks? In social sharing tools like Facebook, sharing a message or object shares it with one’s social network (or a defined subset). On Twitter, sharing is easily visible to followers, and visible with a little more effort to everyone. In organizations, there are pre-defined groups (say, the marketing team) that one might want to share with. The differences between these models make a vast difference in how the tools are used and what they are good for.
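To make the contrast concrete, here is a minimal sketch (the policy names and structure are my own invention for illustration, not Wave’s actual design) of how different sharing models answer the “who can add people” question:

```python
from enum import Enum, auto

class SharePolicy(Enum):
    CREATOR_ONLY = auto()     # only the wave's creator may add people
    ANY_PARTICIPANT = auto()  # anyone already on the wave may add people
    GROUP = auto()            # membership follows a pre-defined group

class Wave:
    def __init__(self, creator, policy, group=None):
        self.creator = creator
        self.policy = policy
        self.participants = {creator}
        self.group = group or set()

    def can_add(self, actor):
        """Can `actor` add a new participant under this wave's policy?"""
        if self.policy is SharePolicy.CREATOR_ONLY:
            return actor == self.creator
        if self.policy is SharePolicy.ANY_PARTICIPANT:
            return actor in self.participants
        # GROUP: any member of the pre-defined group may extend the wave
        return actor in self.group

    def add(self, actor, newcomer):
        if not self.can_add(actor):
            raise PermissionError(f"{actor} may not add participants")
        self.participants.add(newcomer)

# The address-book model from the demo behaves roughly like ANY_PARTICIPANT:
w = Wave("albert", SharePolicy.ANY_PARTICIPANT)
w.add("albert", "betty")
w.add("betty", "carol")  # Betty can invite too, since she's a participant
```

Each policy produces a very different social dynamic: the demo’s address-book model resembles ANY_PARTICIPANT, while organizational tools often need something like GROUP, and the answers to the delegation and self-invitation questions above would each add another branch.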

Another issue is social scale. Adding people and making interspersed comments could be intuitive in small groups, but could easily get confusing or chaotic in large groups. Long ago, Robert’s Rules of Order were invented to let large groups of people debate contentious topics in an orderly way. Group blogs and forums have developed reputation and rating tools to address the signal-to-noise ratio in large groups. What sorts of rules, tools, and processes will be needed for socially effective communication and collaboration in larger groups when Wave is used in the world?

What the world saw in May was merely a demo. The Google team was up front about the state of affairs. They weren’t doing FUD-style theater claiming to have already created a completed application to scare competitors and stop other developers in their tracks. They were describing a prototype application built on a new platform, and encouraging developers to explore and extend the concepts they demonstrated.

Next exploratory steps

The reality of openness has not yet lived up to the promise. In order to join the developer program, you need to tell Google exactly what you plan to build with their new platform. Which is rather hard to say when you haven’t had the chance to play with it yet. Google is also promising to open source the technology. Open source works well when there’s a community engaged with the technology and contributing. It will be interesting to see if Google can be successful in turning its as-yet-private code and process into something that others participate in.

In order for the social practices and designs to be worked out, people need to be using the technology. Google needs to get this technology out of the lab and into the hands of users and developers so people can start to figure out how and whether the conceptual and social model issues can be addressed.

But it’s early days. As someone wisely observed on Jerry Michalski’s Yi-Tan call, an online audio salon that addresses emerging technology topics, it took three years for Twitter to get to critical mass, and Twitter has an extremely simple usage model and a trivially easy model for extensibility. Google Wave isn’t even out in the world yet, and it is a lot harder to grok for users and developers. One of my favorite quotes is from Paul Saffo: “never mistake a clear view for a short distance.” Like hypertext, the concepts embedded in Google Wave could take decades to make their way into common usage. As with hypertext, there may be many years of tools that instantiate concepts of real-time blending before achieving mainstream adoption. Google’s tools and apps may or may not be the catalyst that gets us there.

In the mean time, this is pretty deep food for thought about how and where to integrate real-time communication and collaboration into regular work and life. Much praise is due to Google and the Wave teams for pushing the boundaries instead of cloning familiar models.

Of course Twitter is conversation

A couple of weeks ago, Mark Drapeau wrote a post alleging that Twitter is not a tool for conversation, but for broadcast. It’s a provocative point, and it is clearly false. Twitter isn’t a very good medium for extended conversation – but it’s obviously used for both conversation and broadcast.

The article uses statistics about the number of posts per Twitter account to infer that most Twitter activity is publishing. This isn’t a good interpretation of the facts for a couple of reasons. The low number of posts per account is almost surely evidence of a high rate of “dabbler” use. People sign up for Twitter, look around, and go away. The data about number of posts per account doesn’t say anything about people who are active on Twitter but use it primarily to consume content produced by others. There isn’t any evidence about the relative ratio of reading vs writing.

The second misreading relies on the Pareto principle – the highest volume of posts comes from a few people. This is true but irrelevant. Let’s say CNN has a service that publishes 100 updates per day on news stories. And two people have a conversation consisting of 5 posts each. These are two different, valid use cases. The existence of high-volume broadcast messages doesn’t somehow negate the fact that some people are talking to each other.

Direct evidence that Twitter is conversation can be seen in Tweet Tweet Retweet, a research paper by danah boyd and fellow researchers studying the use of Twitter. According to the paper, “36% of tweets mention a user in the form ‘@user’; 86% of tweets with @user begin with @user and are presumably a directed @reply.” The data is based on “a random sample of 720,000 tweets captured at 5-minute intervals from the public timeline over the period 1/26/09-6/13/09 using the Twitter API. This sample includes tweets from 437,708 unique users.” Another study of over 1 million tweets shows the same pattern – 39% of tweets have an @user mention and 19% contain questions. (Thanks, Juan Carlos Muriente.)
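As a rough illustration of the counting behind those numbers (the sample tweets and the crude “@” check below are mine, not the researchers’ methodology), a few lines of Python can classify tweets the same way:

```python
def mention_stats(tweets):
    """Classify tweets by whether they mention a user, and whether the
    mention starts the tweet (presumably a directed @reply)."""
    mentions = [t for t in tweets if "@" in t]
    replies = [t for t in mentions if t.startswith("@")]
    def pct(part, whole):
        return 100.0 * len(part) / len(whole) if whole else 0.0
    return pct(mentions, tweets), pct(replies, mentions)

sample = [
    "@danah great paper!",        # begins with @user: a directed reply
    "reading a paper by @danah",  # mentions a user, but not a reply
    "lunch time",                 # no mention at all
    "@zephoria agreed",           # another directed reply
]
mention_pct, reply_pct = mention_stats(sample)
# 3 of 4 tweets mention someone; 2 of those 3 begin with @user
```

A real analysis would use a proper pattern such as `@\w+` and the reply metadata the Twitter API exposes, but the shape of the computation is the same as in the paper’s percentages.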

That looks like conclusive proof of the conversational use of Twitter. It certainly dovetails with my own experience. In the last week, I’ve had conversations on distributed social networks, music, and Bay Area public transit. In these conversations I learned new information, met new people, shared ideas, and set the stage for follow-on activity. Twitter works for conversation, and the open nature of Twitter sparks conversations that might not occur otherwise. It is true that Twitter is not a good medium for in-depth, extended conversations. Messages are restricted to 140 characters. There isn’t visible threading (although thread info is kept in the data, allowing for threaded views such as Tweetboard). The richest conversations sparked by Twitter often take place on FriendFeed, where replies are threaded.

Twitter is good for short, fun, and/or productive conversations that bring in often-unexpected relevant people through the social network. Deeper conversation and deeper collaboration need to segue into other modes. The next frontier for development, being pushed in different ways by Google Wave, Citability, and other tools and concepts, will be means of connecting shorter, real-time conversations with more in-depth conversation and collaboration.

WordPress MU, BuddyPress, and distributed community

Over the July 4th weekend I did a test install of WordPress MU and BuddyPress. There are several community projects that I’m involved with that could use this sort of technology, and I wanted to explore how far these new tools go toward it. The answer, I think, is not quite that far yet.

WordPress MU allows you to create a multi-blog site (for example, a blog hosting service, or multiple blogs covering local food in different communities). BuddyPress lets you set up a social network with profiles, a “shoutbox-like” feature, activity streams, and groups. In theory, this could let you connect a social network of social networks. In theory, the “open stack” of standards would enable independent sites to hook into the network, too. But we’re not there yet.

Here’s the vision that would mirror the structure of existing communities in the world. Take, say, the SF Bay Area environmental community. There is a large, loosely connected overall community, but no way to get a big picture of what’s going on. Individuals have their closest ties to a number of smaller groups: their town, subject matter area, political group, affinity groups. I’m using the environmental community as an example, but I see this model everywhere – in politics, music, sports, many places people get together.

So, imagine:
* a main site that aggregated posts, calendar events, and a view of the overall people network, giving an overview of the community.
* “chapter” sites that have their own posts, discussions, calendar items, and social ties
* independent sites, with existing urls and applications, that register with the central community and have their news, calendar events, and activities aggregated into the main site view.
* each “chapter” and independent site has substantial power to communicate with its group of users (unlike the Facebook model)
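The aggregation piece of this vision is mostly plumbing: each chapter already publishes an RSS feed, and the main site merges them into one newest-first stream. Here is a minimal sketch using only the Python standard library (the feed contents and chapter names are hypothetical; a real site would fetch each chapter’s feed over HTTP):

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def items_from_rss(xml_text, source):
    """Pull (date, title, source) tuples out of one RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        when = parsedate_to_datetime(item.findtext("pubDate"))
        yield (when, item.findtext("title"), source)

def aggregate(feeds):
    """Merge several chapter feeds into one newest-first river of news."""
    merged = [entry for source, xml_text in feeds.items()
              for entry in items_from_rss(xml_text, source)]
    return sorted(merged, key=lambda e: e[0], reverse=True)

# Two hypothetical chapter feeds, inlined so the sketch is self-contained.
east = """<rss><channel><item>
  <title>Creek cleanup recap</title>
  <pubDate>Mon, 06 Jul 2009 10:00:00 +0000</pubDate>
</item></channel></rss>"""
west = """<rss><channel><item>
  <title>Solar co-op meeting</title>
  <pubDate>Tue, 07 Jul 2009 09:00:00 +0000</pubDate>
</item></channel></rss>"""

river = aggregate({"east-bay": east, "west-bay": west})
```

A production aggregator would handle Atom as well as RSS, cache fetches, and merge calendar events too, but the core of the main-site view is this simple merge.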

An individual has a single login for the main site and its chapters. Oauth is used to bridge authentication for people whose primary identity is kept at an independent site.

The OpenStack conversation is currently focused on solving authentication technical and usability problems. These are needed and useful. But authentication is just convenience. We’re saving people from typing another username and password.

Distributed communities are about killer applications – about doing powerful, bottom up community organizing and political campaigns, about building hyper-local news sites with a sense of community that reflects how people affiliate and feel, about enabling networks of people who engage with music, sports, gardening, some sort of culture. I’m really eager to see progress at the functional end of the stack – the standards and sample apps that actually let you bridge and aggregate social networks.

I wrote a bit about this topic earlier here, focusing on the distributed profile aspect.