New Models for the Book Industry

MBS/CMCL/IPRIA Seminar on Book Publishing. 9 Feb 2011

Traditional book publishers have been increasingly challenged by e-books and other digital technologies. We decided to organize a public seminar with industry participants to learn about new opportunities in this area.

A common theme among our speakers was the growing fault line between those who create content and those who distribute it. From the point of view of content creators, digital technology is not a bad thing: it presents new ways to reach customers. To a firm like Lonely Planet, printed books, e-books and apps are alternative and useful delivery mechanisms. This heterogeneity is a good thing, since each delivery mechanism has its strengths and weaknesses. For example, a map-based application on your mobile phone may be useful for navigating the streets of Melbourne, while a printed travel book might be preferred if you are traveling the Australian outback (books are more durable than electronic devices, and they require no electrical power).

Authors are beginning to explore new pricing schemes. For example, several authors are trying to sell a larger volume of e-books at lower prices (around $2.99 – $3.99) instead of a small number of regular books at higher prices (say, $10). Other authors are trying “pay what you want” schemes. Our guest speaker Max Barry will be selling his next book as a real-time electronic serial, distributing it directly from his website in small chunks and for an attractive price ($6.95). It is too early to know which of these will work well and for whom, because the book industry has many different segments of customers with different needs. Furthermore, there are concerns with e-books around digital piracy. However, one of the speakers reminded us that for many authors, obscurity is worse than piracy. Besides, piracy has long been a threat even with printed books: recall the photocopy machine, which has existed for quite a while, as well as those suspiciously inexpensive textbooks printed on poor-quality paper and brought in from various developing countries. It seems to me, at least, that in the digital world, selling a large volume of e-books at a low price makes a lot of sense; a rough illustration is sketched below. In this context, the serialized e-book has an added advantage because it builds a repeated interaction between the reader and the author. Over time this may help create loyalty towards the author.
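To make that intuition concrete, here is a back-of-the-envelope sketch. The prices come from the discussion above; the royalty rates and sales volumes are entirely hypothetical assumptions chosen for illustration, not figures from any speaker.

```python
# Back-of-the-envelope comparison of author revenue under the two pricing
# strategies discussed above. The prices ($2.99 vs $10) come from the post;
# the royalty rates and sales volumes are purely hypothetical assumptions.

def author_revenue(price, royalty_rate, copies_sold):
    """Revenue to the author: price x royalty share x copies sold."""
    return price * royalty_rate * copies_sold

# Hypothetical scenario A: self-published e-book at a low price,
# higher royalty share, larger volume.
ebook = author_revenue(price=2.99, royalty_rate=0.70, copies_sold=20_000)

# Hypothetical scenario B: traditionally published book at a higher price,
# lower royalty share, smaller volume.
print_book = author_revenue(price=10.00, royalty_rate=0.15, copies_sold=3_000)

print(f"E-book scenario:     ${ebook:,.0f}")       # about $41,860
print(f"Print-book scenario: ${print_book:,.0f}")  # about $4,500
```

The point is simply that a lower price only pays off if it expands the volume (or the author's share of each sale) by enough to compensate, which is exactly the bet these authors are making.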

I see three areas of opportunity and these arise along the fault lines described above.

The first opportunity is with “apps”. It crossed my mind earlier this month that simply repackaging a book as an app gives the author tremendous freedom. With books, the author is stuck with publishing delays, parallel import laws and other legal impediments, not to mention the need to physically deliver products. With apps, all that is gone. Re-purpose a book as an app and it morphs into a software program, so different rules apply. If you go one step further and make the app exciting to use, you can counteract the myth that printed books are superior. Those who have tried The Elements on an iPad will find it hard to go back to a printed Periodic Table. Similarly, having compared the app and book versions of a photography guide, I much prefer learning from the app, which is more interactive and has built-in videos.

A second opportunity lies in offering new combinations of skills. In order to serialize his next novel, Max Barry combined his computer programming expertise with a passion for writing: he is essentially selling each subscriber a private RSS feed as a separate product. Most people do not have this combination of skills, especially the generation of authors who went to journalism school and did not acquire a technical background. An opportunity exists for people who can bridge this divide and provide new tools and services that help content authors craft their products and reach customers easily. For example, Graeme Connelly spoke to us about the new “espresso” printer at Melbourne University Bookstore, which produces small print runs that were uneconomical in the past. I believe this is only a starting point: we don’t yet have the equivalent of WordPress for creating books, with existing tools being either too complex or too amateurish.
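For readers curious what “a private RSS feed per subscriber” might look like mechanically, here is a minimal sketch. It is not Max Barry's actual implementation; the feed structure, the token-in-URL scheme, and all names and URLs are assumptions made purely for illustration.

```python
# A minimal sketch of delivering a serialized book as a personalised RSS feed,
# assuming each subscriber is identified by a secret token in their feed URL.
# All data, names and URLs below are hypothetical.

import uuid
from email.utils import formatdate
from xml.sax.saxutils import escape

# Hypothetical data: instalments of the serial released so far.
INSTALMENTS = [
    ("Part 1", "https://example.com/serial/part-1"),
    ("Part 2", "https://example.com/serial/part-2"),
]

# Hypothetical subscriber registry: secret token -> subscriber name.
SUBSCRIBERS = {uuid.uuid4().hex: "Alice"}

def build_feed(token: str) -> str:
    """Return an RSS 2.0 document for the subscriber identified by `token`."""
    name = SUBSCRIBERS.get(token)
    if name is None:
        raise KeyError("unknown subscriber token")
    items = "\n".join(
        f"    <item>\n"
        f"      <title>{escape(title)}</title>\n"
        f"      <link>{escape(url)}?token={token}</link>\n"
        f"      <pubDate>{formatdate()}</pubDate>\n"
        f"    </item>"
        for title, url in INSTALMENTS
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<rss version="2.0">\n'
        "  <channel>\n"
        f"    <title>Serialized novel for {escape(name)}</title>\n"
        "    <link>https://example.com/serial</link>\n"
        "    <description>Private instalment feed</description>\n"
        f"{items}\n"
        "  </channel>\n"
        "</rss>\n"
    )

if __name__ == "__main__":
    token = next(iter(SUBSCRIBERS))
    print(build_feed(token))
```

The design choice worth noticing is that the subscription itself is just a secret URL: anything that can serve XML over HTTP can deliver a serialized book, which is precisely the kind of bridge between writing and programming described above.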

The third opportunity lies in further disaggregating the value chain. I learned from the session that one of the benefits to authors of going with traditional book publishers is their expertise in editing. Publishers convert the messy raw material that is a manuscript into a curated experience that is proof-read, edited and checked. I suspect that the editing activity will split off into a distinct industry segment, just as has happened in other industries such as semiconductors, which was once vertically integrated but now has some firms focusing exclusively on system development and others on chip design or manufacturing. This is pure speculation on my part, but I don’t see why the editing process, valuable as it is, needs to remain tied to the manufacture and distribution of physical products.

It is hard to predict how things will work out and I don’t think the traditional book will completely disappear. This industry is definitely going to be interesting to watch over the next few years.

Videos now available for “Who Owns The News?” seminar


Last week MBS hosted a public seminar on “Who Owns the News?” exploring the impact of the internet on the news industry. The event was organized by IPRIA, CMCL and MBS CITE. It helped clarify the key issues and laid the groundwork for further discussion. I had fun and hope that the 110+ people who attended did too.

Sam Ricketson, Professor at Melbourne Law School, chaired the event and did a great job orchestrating the Q&A session. Mark Davison from Monash spoke about changes in copyright law and expressed concerns over the “Hot News” doctrine, an approach currently being proposed by news organizations in the US to prevent others from copying their content. Stephen King outlined the economic issues and has posted his very thoughtful comments at https://economics.com.au/?p=5909.

As the discussant, I described what I had learnt from Mark and Stephen and also tried to consider the various options faced by a CEO in this industry. My PDF slides are at http://works.bepress.com/kwanghui/18. While my comments might have been perceived as pessimistic by Stephen and others, I am actually quite optimistic about the future of the industry, but mainly for individuals and firms trying out innovative ways of gathering and delivering the news. I am, however, pessimistic about existing firms: if history has taught us anything, it is that many of them will struggle to adapt to such drastic changes.

The video recordings for “Who Owns the News?” are now available. I have posted them at http://vimeo.com/album/253549. Portions were removed to protect the identity of audience members. We thank the speakers for permission to share their insights online. Enjoy the show!

NTP Sues Apple, Google, Motorola, HTC, LG, Microsoft

Last year David Weston and I wrote a teaching case on how, in 2000, NTP sued Research in Motion (makers of the popular BlackBerry device) for infringing its patents covering the wireless delivery of email (free download from WIPO). Well, NTP is at it again, and has just sued a number of smartphone makers including Apple, Google, LG, Motorola, HTC and Microsoft. The Washington Post has a brief description of the patents. The earlier case ended with a $600+ million settlement, but that large amount was partly the result of (a) RIM being found to have willfully infringed NTP’s patents and to have attempted to deceive the court when presenting evidence of “prior art” in 2002, and (b) the very real threat, as the case escalated in 2005, of having its US operations shut down. A number of the original patent claims were subsequently revoked, but I imagine NTP is hoping that the larger base of email users these days will give it enough licensing revenue from each of the mobile operators.

If you haven’t heard of NTP, that is because the company is sometimes thought of as a patent troll and is not well loved. In my opinion, the lawsuit also highlights a more subtle problem with the patent system. When successful firms like RIM and Nokia choose to settle with companies like NTP, it gives NTP an incentive and the financial resources to attack a broader group of firms. A precedent is also set. It would be better if such firms fought back, e.g., by establishing prior art that invalidates such patents or by pushing back on the claims.

Video Podcast – IPRIA Seminar on Banning Tobacco Logos

Last week, IPRIA organized a public seminar on the banning of tobacco logos. I have just posted videos at http://vimeo.com/album/232376. Drop by for an interesting debate on private versus social costs, Government policy and WIPO/TRIPS. Details of the seminar and Powerpoint slides from each presenter are on the IPRIA website.

The Australian Government recently announced its intention to ban the use of artwork and logos in the branding of tobacco products, effective from 2012. In this seminar, four distinguished speakers consider the economic, legal, ethical and marketing implications of this decision: Professor Mark Davison (Law, Monash University), Professor John Freebairn (Economics, University of Melbourne), Associate Professor Angela Paladino (Marketing, University of Melbourne) and Mr Tim Wilson (Institute of Public Affairs).

How attractive is pricing for the proposed National Broadband Network?

Today the Government released a report by McKinsey and KPMG suggesting it could build a National Broadband Network, without Telstra, for about $43 billion. There are potentially strong benefits from widespread public access to the internet, even if these benefits are hard to quantify and may not all be realizable today, especially for the faster broadband speeds. One of the features highlighted in the new report is open access at a low price, around $30 wholesale for the cheapest tier, which would translate to about $50 retail. In an interview with ABC News Radio this afternoon, I was asked whether this really is an attractive price. By today’s standards, it does seem low. However, two important assumptions are being made: first, that there will be no cost blowouts beyond the mild scenarios outlined in the report (try not to think of Myki); second, that $50/month will still be attractive when the network is ready in about a decade. Let us not forget that even over the past few years prices have fallen dramatically. OECD data shows that a broadband plan in Australia costing $130/month in 2005 cost only $70/month in 2008. Prices are falling across the world and this trend is likely to continue: telecommunications technology (both wired and wireless) is experiencing rapid innovation. I’m not saying that the Government should not proceed, but that we should view these projections with a bit of caution.
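As a rough illustration of why the second assumption matters, here is the arithmetic implied by the OECD figures quoted above. The extrapolation is deliberately naive and purely illustrative; it assumes the observed rate of decline simply continues, which it may well not.

```python
# Back out the implied annual rate of broadband price decline from the OECD
# figures quoted in the post ($130/month in 2005 falling to $70/month in 2008),
# then naively extrapolate a decade ahead. Illustrative only: technology,
# quality and market structure will all change over that horizon.

start_price, end_price = 130.0, 70.0   # AUD per month, 2005 and 2008 (from the post)
years_observed = 3
annual_decline = 1 - (end_price / start_price) ** (1 / years_observed)
print(f"Implied annual price decline: {annual_decline:.1%}")   # about 19%

# What would today's $70/month plan cost in ~10 years if that rate persisted?
projected = end_price * (1 - annual_decline) ** 10
print(f"Projected price in ~10 years: ${projected:.0f}/month")  # roughly $9/month
```

If anything like that trend held, a $50/month entry-level plan could look expensive by the time the network is complete, which is the sense in which the projections deserve caution.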

A separate issue is whether Telstra is likely to partner with the Government on this project. They have to decide by June. While there are potential cost savings involved, I suspect a partnership is unrealistic. Leaving aside past personality issues and legal threats, the reality is that the two parties have different objectives. The Government wants to offer broad-based access at a low cost, including to non-metropolitan areas that are expensive to serve. Telstra would probably find it more profitable to offer fiber in metropolitan areas and at a higher price. Would it really want to go all the way to serving 93% of the population with fiber, as the Government intends? In the report, costs are a lot higher for serving the last 10%. This may matter to voters and politicians, but to Telstra the remaining 10-20% of the population may be adequately served by NextG coverage, or less. Plus there are Telstra’s existing copper lines to complicate matters further…

Eyjafjallajökull and substitutes for air travel

After teaching a class last night during which we discussed substitutes, I realized that the recent eruption of Eyjafjallajökull, while sad for all involved, presents a good teaching example. The exogenous elimination of air travel led predictably to a scramble for substitutes. Eurostar ran out of capacity and quadrupled its ticket prices; a black market also naturally emerged. Meanwhile bus companies, facing more rivals than Eurostar, kept their prices the same but temporarily boosted the number of buses they ran. Taxi drivers cashed in on customers, including John Cleese, who paid $5000 for his ride. I couldn’t help but reflect upon our trip to Japan last month, where we enjoyed riding the Shinkansen bullet train. The ride was quick and smooth, there were no long security lines or elaborate airport rituals, legroom was ample, and our electronic equipment did not have to be switched off during takeoff and landing. Air travel is overrated.

Japanese Shinkansen

Computer Worms are Getting Smarter

Our computational server was just hit by a worm that has also affected several other machines at our university. What’s remarkable is the rate and sophistication of innovation in this field (not that it’s a good thing). The worm that hit us is called Downad.ad, a recent member of the Conficker family. Early versions of this worm simply gave its mysterious authors remote access to an infected machine. Over time, however, the worm’s main task has changed: its primary job is now to infect machines, stay hidden and make itself difficult to eradicate. It does so by using sophisticated encryption techniques, blocking antivirus tools and software upgrades, and, most interestingly, by making deep changes to the operating system and to itself in order to remain obfuscated. Once lodged in the victim’s computer, it doesn’t actually harm its host but acts as a parasite, forming a node in a gigantic virtual supercomputer that enables other nasty bits of software to be downloaded and run in a distributed fashion. Amazingly, these bits of code are themselves encrypted and distributed using a very sophisticated system. After running the downloaded code, the infected machine sleeps for some time before repeating the cycle. I’m not a computer security expert, but the strategy seems very clever: the worm writers have essentially decided to create a General Purpose Technology that can be used in numerous ways. Now I wish they had popped up a window right inside Stata on our infected machine and offered me some of that computing power for number crunching.

Lara Bingle and the cost of privacy

My colleagues at the Law School have just written an interesting analysis of the Lara Bingle nude photo case. They think she doesn’t have a strong legal case based on either privacy law or defamation law. Lara appears to be earning a tidy sum from the publicity generated, so I suppose it’s not an entirely bad strategy. The Bingle incident is one of a growing number of clashes between the conflicting goals of privacy, copyright protection and freedom. It is tempting to blame the technology (cellphones, cameras, iPhones, etc.) and to suggest that people should not be allowed to take photographs or videos unless permitted. Countries like the UK and USA now have strict but vague rules on what you can photograph. The problem is that it is difficult to articulate these rules in a way that is generally acceptable. This creates high enforcement costs and generates unfortunate incidents in which people are stopped for doing seemingly legitimate things. Blanket bans do not work well and lead to a climate of censorship and fear.

Instead of focusing on the creation of images, a better solution is to concentrate on managing how images are used. Allow people to take photos and videos unfettered; there are so many photos and videos being taken these days that most will never see the light of day anyway. Meanwhile, establish clearer guidelines on what kinds of images may not be used for various applications: the arts, news, online blogs, commercial advertising and education (and in each case be clear whether permission is needed from those in the image). While this suggestion may not entirely solve the problem, it will at least take us part of the way there. Social and legal systems have some way to go before catching up with the reality of living in a media-rich world.

Is Secrecy Always A Good Thing? The Tale of Apple Aperture vs Adobe Lightroom

Apple is known for its penchant for secrecy. Products are developed as top-secret projects and unveiled to the public with great fanfare. This has brought the company tremendous benefit, for example during the dramatic launch of the iPhone by Steve Jobs (http://www.youtube.com/watch?v=vZYlhShD2oQ#t=2m20s). However, secrecy carries costs, and in some cases the costs outweigh the benefits. Yet Apple retains this approach across its whole range of products; secrecy is apparently “baked into the corporate culture” (http://www.nytimes.com/2009/06/23/technology/23apple.html). Consider Aperture 3.0, the newly updated photo-management product by Apple aimed at professional photographers. It was launched last week following Apple’s usual “secret till the last minute” approach. It is instructive to compare Aperture to Lightroom, a very similar product from Apple’s rival Adobe, which has taken a very different approach.

There have been two effects of the secrecy surrounding Apple’s Aperture 3.0. First, the direct effect of launching poorly tested software. Twitter and the Apple forums are full of complaints by anguished customers who have been unable to upgrade older photo libraries (e.g., http://discussions.apple.com/thread.jspa?threadID=2331026). No doubt there is a selection bias, and users with a trouble-free experience are less likely to visit these forums and complain. But this is hardly the “awesome” and polished experience expected from Apple, a company that uses “it just works” as a tagline. Among the reports are complaints from customers whose computers froze completely during the upgrade, from those who succeeded in upgrading but then found the software unstable, and from those who gave up but were unable to reinstall earlier versions. It is clear from these reports that Aperture 3 was insufficiently tested before being sold, especially against the real-world photo libraries in use by existing users.

A second effect of secrecy is that professionals have been increasingly adopting Adobe Lightroom. While the buzz of unveiling a new product may matter for consumer-oriented products like the iPhone or iPad, Aperture is aimed at professional photographers, design companies and media organisations. For this audience, surprise may be less important and even counterproductive. Instead, advance knowledge of upcoming features and a stable product at launch are probably what matter most: they allow clients to anticipate changes and plan for integration into existing workflows and business processes.

Adobe has taken a different approach with Lightroom. In October last year it launched the new version as a public beta, available for anyone to download and try for free (the beta expires automatically once the actual product launches). The public beta gives Adobe precious information from real-world customers on a massive scale. In addition, customers are able to experiment with features likely to be included in the final version, rather than being kept in the dark with no way to anticipate and plan their own businesses around Adobe’s roadmap. Lightroom has its share of detractors, but generally the response online has been positive. It is important to point out that Adobe is not an “open source” player: Lightroom is commercial software that is quite expensive, and the guts of the software are heavily protected. However, by being less secretive than Apple, Adobe is able to engage better with its customers. This applies not just to the public beta: in earlier versions of Lightroom, Adobe also took a more open stance towards allowing third-party plugins and user-created presets.

Looking more broadly, my sense is that Apple’s secrecy is costing it not just with Aperture but also with other recent product launches. For example, iPad developers are scrambling to develop software for the new device, which ships in about two months; apparently even Apple’s close allies were introduced to the iPad just weeks before it was publicly announced. Even Apple’s new Snow Leopard operating system had its share of bad surprises after it was launched, causing some cases of data corruption. To this day, none of my colleagues can print from it to the enterprise-quality printer down the hallway using the Safari web browser or the Preview tool without the software crashing. The lesson is that while secrecy may be useful for some products, firms (including Apple) should revisit whether they need to be secretive across all of their products.

Do share your thoughts and comments on our discussion board.

—- update on 17 March 2010

A quick update: after writing this article I received a surprising number of emails. Quite a few photographers and media professionals wrote to say they agreed with my perspective. A few disagreed, including some who pointed out that Adobe has had its share of problems too. A few people also wrote to complain that I am biased and “anti-Apple”; I contend this is untrue, given that I personally own a lot of Apple products.

A couple of people asked what the benefits of secrecy are. To give a quick answer, it generates greater consumer buzz when the product is launched (as mentioned above). In addition, secrecy is one of the mechanisms by which firms attempt to protect intellectual property (e.g., the oft-told story of Coca-Cola’s secret recipe). Keeping something secret may also help prevent competitors from hiring the relevant people to develop similar products, although this is debatable as it depends on how scarce the relevant skills are. I hope this helps give my article some balance. I’m not saying secrecy is bad in general, but that it should be used only when appropriate. It may be less effective for professional products than for consumer products, especially software, which involves network effects and benefits from a cohesive developer community.

A spokesperson from Apple wrote to say that a number of photographers did work with Apple on a beta prior to launch (though, as I understand it from people in the industry, this was a private beta and a non-disclosure agreement was involved). Apple also said that many of the issues have been addressed in a recent upgrade to the software, and they dispute the market share data used by John Nack which I linked to in my article. They made a few other points as well. I am sharing this so that their view is represented, and they are welcome to post a reply too; however, I don’t think it takes away from the main points of my article. Subsequent to my post, I learnt that Apple’s secrecy was also a concern raised by various photography blogs (e.g., http://photofocus.com/2010/02/17/aperture-3-0-very-cool-but-not-ready-for-prime-time/). Moreover, the extensive fixes made soon after Aperture’s release would not have been needed in the first place had the software been properly field-tested. Fundamentally, secrecy means missing out on engaging extensively with the professional community and developers prior to a product’s launch. That is the price to pay, and while in some cases it is worth paying, it is not always a net benefit.

Background Briefing: Internet Piracy

This week, ABC Radio National ran a Background Briefing on internet piracy. Going beyond arguing whether “downloading” is good or bad, the podcast discusses changes in copyright law over the centuries, explains why these tensions came about, and puts copyright infringement in a broader context. I like that they present a balanced view with both sides represented, that they trace where the myth of the starving artist came from, and that they distinguish the debate over illegal downloads from the one over remix culture. Relevant sound clips from remix artists (DJ Danger Mouse, Girl Talk, Steinski) and various radio/TV programs are included, as well as comments from IPRIA affiliate Kim Weatherall. The program could have been improved with a more in-depth discussion of how internet piracy fits with the future strategies of firms and other economic actors, as well as the possible impacts of changes in the law across various jurisdictions including Australia. But that might have made it less appealing to a general audience. Overall, an excellent podcast. Listen to the audio or read the transcript at http://www.abc.net.au/rn/backgroundbriefing/stories/2009/2726710.htm.