Assessing the benefits of the NBN

My colleague Richard Hayes is working on a project to analyze various methodologies that could be used for assessing the benefits of a national broadband network (a companion project exists on the cost side). Richard recently described key aspects of his project on ZDNet's Twisted Wire program.

The main thing I’ve learnt from that podcast is that an accurate and precise measure of the NBN’s benefits will be difficult to calculate. There are two constraints: the first is the availability of data, and the second is our difficulty in estimating externalities across economic sectors. For example, one approach would be to estimate a discrete choice model, asking people to choose between hypothetical bundles of broadband options. This would provide an estimate of their willingness to pay for specific characteristics. The approach would require data that does not currently exist, and even if such data were obtained (e.g., through surveys), it is unclear whether people can accurately assess their utility for broadband-related goods and services that do not yet exist. A broader approach involves using a Computable General Equilibrium model, which would yield an economy-wide estimate of the NBN’s impact on activity but is especially difficult to implement where there are many interdependencies (as with broadband). I also learnt from the podcast that some benefits are easier to quantify than others, especially benefits from applications already in use by large existing organizations.
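To make the discrete choice idea concrete, here is a minimal sketch in Python. It uses simulated survey data (all the numbers are hypothetical, not from any real study): respondents choose between two broadband bundles differing in speed and price, a simple binary logit is fitted to their choices, and willingness to pay for speed is recovered as the ratio of the coefficients. An actual study would use stated-preference data and a richer model such as a conditional or mixed logit.

```python
import numpy as np

# Illustrative sketch only: binary-logit discrete choice on simulated data.
rng = np.random.default_rng(0)
n = 5000

# Each respondent chooses between two hypothetical broadband bundles.
# We work with attribute differences: option A minus option B.
speed_diff = rng.uniform(-50, 50, n)   # speed difference in Mbps
price_diff = rng.uniform(-40, 40, n)   # monthly price difference in dollars

# "True" preferences used to simulate choices (hypothetical values):
# utility difference = 0.04 * speed - 0.05 * price
util_diff = 0.04 * speed_diff - 0.05 * price_diff
prob_a = 1 / (1 + np.exp(-util_diff))
chose_a = (rng.uniform(size=n) < prob_a).astype(float)

# Fit the logit by gradient ascent on the log-likelihood.
X = np.column_stack([speed_diff, price_diff])
beta = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.002 * X.T @ (chose_a - p) / n   # averaged gradient step

b_speed, b_price = beta
# Willingness to pay for one extra Mbps = marginal utility of speed
# divided by the marginal utility of money (the negated price coefficient).
wtp_per_mbps = b_speed / -b_price   # should land close to 0.04 / 0.05 = 0.8
print(f"estimated WTP per Mbps: ${wtp_per_mbps:.2f} per month")
```

The point of the exercise is the last line: once the coefficients are estimated, the ratio converts "utility units" into dollars, which is exactly the kind of number a benefits assessment needs.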

It’s not entirely clear what this implies. However, it seems to me we can learn from the parallel situation of how R&D projects are managed within large firms. Perhaps we should stop looking at the NBN as an all-or-nothing investment. It is probably not realistic to do a complete analysis and match incremental costs to incremental benefits ex-ante. However, by breaking the project up into stages (geographically or by some other criteria), one could postpone the decision of whether to do later stages until additional information is obtained. Consider the example of Google’s decision to build a fiber broadband network for communities in the US. It would be difficult for Google to value the overall benefits of this network ex-ante. But that hasn’t stopped it from trying out this “experiment” with a few communities initially, with the possibility of scaling up later. Shouldn’t we take a similar approach with the NBN?

E-books are overtaking printed books

Australia’s Radio National recently did a radio program on e-books at the Brisbane Writers Festival. Of the four panelists, only one actually owned an electronic book reader. A number of benefits of e-books were cited, including convenience of purchase, lower book prices (especially compared to the prices of printed books in Australia), and better access from rural locations. However, the overall impression was that printed books and traditional bookstores will continue to exist for some time. One panelist stated that printed books will still constitute 70% of the market within a decade. Another felt that bookshops will continue to exist because they are a nexus of social activity.

Let me be the first to say I love bookshops and have a large library of printed books. That said, these people clearly did not get the memo from Jeff Bezos: the number of e-books sold by Amazon has already overtaken hardcover books, and it will overtake paperbacks by next year. The recent launch of the iPad, multimedia e-books, and this week’s launch of the third-generation Kindle (only US$139) are going to accelerate the process. Having used both e-books and printed books for some time, all I can say is that many of the complaints people mentioned in the podcast have been addressed, or are being addressed, in the newer e-book readers. Change is happening faster than many people think. This week alone I bought 7 books on Kindle for a course I’m teaching, and I have no complaints.

One way to address the gap between perception and reality is to allow more customers to get their hands on an e-book reader, such as at retail outlets and other public places. From personal experience, people who complain about e-books are often surprised by how usable they are after I’ve put an actual device into their hands for the first time. I’ve also noticed that at a lot of places where e-book readers are sold, they are displayed all wrapped up or inside glass cabinets, rather than in a way that invites people to experience them. This is something e-book retailers such as Amazon and B&N should address, maybe taking a page out of Apple’s book and making the shopping experience much more hands-on.

Videos now available for “Who Owns The News?” seminar


Last week MBS hosted a public seminar on “Who Owns the News?” exploring the impact of the internet on the news industry. The event was organized by IPRIA, CMCL and MBS CITE. It served to clarify the key issues and laid the groundwork for a discussion of them. I had fun and hope that the 110+ people who attended it did too.

Sam Ricketson, Professor at Melbourne Law School, chaired the event and did a great job orchestrating the Q&A session. Mark Davison from Monash spoke about changes in copyright law and expressed concerns over the “Hot News” doctrine, an approach currently being proposed by news organizations in the US to prevent others from copying their content. Stephen King outlined the economic issues and has posted his very thoughtful comments at https://economics.com.au/?p=5909.

As the discussant, I described what I had learnt from Mark and Stephen and also tried to consider various options faced by a CEO in this industry. My pdf slides are at http://works.bepress.com/kwanghui/18. While my comments might have been perceived as pessimistic by Stephen and others, I am actually quite optimistic about the future of the industry, but mainly for individuals and firms trying out innovative ways of gathering and delivering the news. I am however pessimistic about existing firms: if history has taught us anything, it is that many of them will struggle to adapt to these drastic changes.

The video recordings for “Who Owns the News?” are now available. I have posted them at http://vimeo.com/album/253549. Portions were removed to protect the identity of audience members. We thank the speakers for permission to share their insights online. Enjoy the show!

NTP Sues Apple, Google, Motorola, HTC, LG, Microsoft

Last year David Weston and I wrote a teaching case on how, in 2000, NTP sued Research in Motion (maker of the popular BlackBerry device) for infringing its patents covering the wireless delivery of email (free download from WIPO). Well, NTP is at it again, and has just sued a number of firms that make smartphones, including Apple, Google, LG, Motorola, HTC and Microsoft. The Washington Post has a brief description of the patents. The earlier case ended with a $600+ million settlement, but that large amount was partly the result of (a) RIM being found to have willfully infringed NTP’s patents and to have attempted to deceive the court when presenting evidence of “prior art” in 2002, and (b) the very real threat RIM faced, as the case escalated, of having its US operations shut down in 2005. A number of the original patent claims were subsequently revoked, but I imagine NTP is hoping that the larger base of email users these days will give it enough licensing revenue from each of the mobile operators.

If you haven’t heard of NTP, that is because the company is sometimes thought of as a patent troll and is not well-loved. In my opinion, the lawsuit also highlights a more subtle problem with the patent system. When successful firms like RIM and Nokia choose to settle with companies like NTP, it gives NTP an incentive, and the financial resources, to then attack a broader group of other firms. A precedent is also set. It would be better if such firms fought back, e.g., by establishing prior art that invalidates such patents or by pushing back on the claims.

How attractive is pricing for the proposed National Broadband Network?

Today the Government released a report by McKinsey and KPMG suggesting it could build a National Broadband Network — without Telstra — for about $43 billion. There are potentially strong benefits of widespread public access to the internet, even if these benefits are hard to add up and may not be realizable today, especially for faster broadband speeds. One of the features highlighted in the new report is open access at a low price, around $30 wholesale for the cheapest tier, which would translate to about $50 retail. In an interview with ABC News Radio this afternoon, I was asked if this really is an attractive price. By today’s standards, it does seem low. However, there are two important assumptions being made: first, that there will be no cost blowouts beyond the mild scenarios outlined in the report (try not to think of Myki); and second, that $50/month will still be attractive when the network is ready in about a decade. Let us not forget that even over the past few years prices have fallen dramatically. OECD data shows that a broadband plan in Australia costing $130/month in 2005 cost only $70 per month in 2008. Prices are falling across the world and this trend is likely to continue: telecommunications technology (both wired and wireless) is experiencing rapid innovation. I’m not saying that the Government should not proceed, but that we should view these projections with a bit of caution.
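The cited OECD figures imply a steep compound rate of decline, which a quick back-of-the-envelope calculation makes concrete (the constant-rate extrapolation is of course only a rough sketch, not a forecast):

```python
# Back-of-envelope sketch of the price-decline argument.
# Figures from the OECD data cited: A$130/month in 2005, A$70/month in 2008.
p_2005, p_2008, years = 130.0, 70.0, 3

annual_change = (p_2008 / p_2005) ** (1 / years) - 1
print(f"implied annual price change: {annual_change:.1%}")   # about -18.6% per year

# If that trend continued, what would $50/month look like a decade out,
# expressed in "equivalent" plan prices? Purely illustrative.
equivalent = 50 * (1 + annual_change) ** 10
print(f"$50 today is comparable to about ${equivalent:.0f} in a decade")
```

In other words, if broadband prices kept falling at anything like the 2005–2008 rate, a $50 retail price a decade from now would sit well above the prevailing market, which is exactly why the assumption deserves caution.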

A separate issue is whether Telstra is likely to partner with the Government on this project. They have to decide by June. While there are potential cost savings involved, I suspect it is unrealistic. Leaving aside past personality issues and legal threats, the reality is that the two parties have different objectives. The government wants to offer broad-based access at a low cost, including to non-metropolitan areas that are expensive to serve. Telstra would probably find it profitable to offer fiber in metropolitan areas and at a higher price. Would it really want to go all the way up to serving 93% of the population with fiber as the Government intends? In the report, costs are a lot higher for serving the last 10%. This may matter to voters and politicians, but to Telstra the remaining 10-20% of the population may be adequately served by NextG coverage, or less. Plus there is the matter of Telstra’s existing copper lines to complicate matters…

Eyjafjallajökull and substitutes for air travel

After teaching a class last night during which we discussed substitutes, I realized that the recent eruption of Eyjafjallajökull, while sad for all involved, presents a good teaching example. The exogenous elimination of air travel led predictably to a scramble for substitutes. Eurostar ran out of capacity and quadrupled its ticket prices; a black market also naturally emerged. Meanwhile bus companies, facing more rivals than Eurostar, kept the same price but temporarily boosted the number of buses they ran. Taxi drivers cashed in on customers, including John Cleese, who paid $5000 for his ride. I couldn’t help but reflect upon our trip to Japan last month, where we enjoyed riding on the Shinkansen bullet train. The ride was quick and smooth, there were no long waits at security lines or elaborate rituals at airports, legroom was ample, and our electronic equipment did not have to be switched off during takeoff and landing. Air travel is overrated.

Japanese Shinkansen

Computer Worms are Getting Smarter

Our computational server was just hit by a worm that has also affected several other machines at our university. What’s remarkable is the rate and sophistication of innovation in this field (not that it’s a good thing). The worm that hit us is called Downad.ad, a recent member of a family known as Conficker. Early versions of this worm simply gave its mysterious authors remote access to an infected machine. However, over time the worm’s main task has changed: its primary job is now to infect machines, keep hidden and make itself difficult to eradicate. It does so by using sophisticated encryption techniques, blocking antivirus tools and software upgrades, and, most interestingly, by making deep changes to the operating system and to itself to remain obfuscated. Once lodged in the victim’s computer, it doesn’t actually harm its host but acts as a parasite, forming a node in a gigantic virtual supercomputer that enables other nasty bits of software to be downloaded and run in a distributed fashion. Amazingly these bits of code are themselves encrypted and distributed using a very sophisticated system. After running the downloaded code, the infected machine sleeps for some time before repeating the cycle. I’m not a computer security expert, but it seems to me that the strategy is very clever – basically the worm writers have decided to create a General Purpose Technology that can be used in numerous ways. Now I wish they had popped up a screen on our infected machine and offered me some of that computing power for number crunching in Stata.
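One well-documented trick in the Conficker family’s “keep hidden” repertoire is domain generation: every infected machine independently computes the same daily list of rendezvous domains, so there is no single server for defenders to block. The sketch below is a toy illustration of the idea only, not Conficker’s actual algorithm; every detail (the hash, the label length, the domain list) is invented for exposition.

```python
import datetime
import hashlib

# Toy domain-generation algorithm (DGA). NOT the real Conficker scheme;
# purely an illustration of date-seeded rendezvous domains.
def generate_domains(date: datetime.date, count: int = 5) -> list[str]:
    tlds = [".com", ".net", ".org", ".info", ".biz"]
    domains = []
    for i in range(count):
        # Seed a hash with the date and a counter, so every infected machine
        # derives the same daily list without any communication.
        seed = f"{date.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        # Map hash characters to lowercase letters to form a domain label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:10])
        domains.append(label + tlds[int(digest[10], 16) % len(tlds)])
    return domains

print(generate_domains(datetime.date(2010, 2, 1)))
```

The flip side is that defenders who reverse-engineer the generator can precompute tomorrow’s domains and block or pre-register them, which is broadly what the industry coalition tracking Conficker set out to do.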

LED Lighting in Japan


An innovation is all the rage in Japan… and yes, it’s even better than the iPad ;-). LED lighting is starting to reach the mainstream, and it is both efficient and pleasant. For example, we saw one being advertised on a train as a drop-in replacement for any 60-watt household lightbulb. Each LED tube consumes only 7.5 watts and lasts 40,000 hours, or about 4.5 years of continuous use. While elsewhere people talk about LED lighting, here in Japan regular families are starting to buy them for home use. With prices as low as AUD25 and often ranging AUD50-100, it is becoming an affordable option. The benefits are not just in energy efficiency. LED lights run cool and the color can be made to appear “natural”. One common complaint is that each LED unit produces only limited brightness, but it should be sufficient for most households; in any case you can use multiple units. New innovations are allowing for super-bright LEDs, and during the weekend we enjoyed the jaw-dropping experience of “night sakura”: several hundred fully flowering cherry blossoms gracefully lining the moat of the Imperial Palace. These were lit using LEDs, and I was amazed that each lighting unit was just about the size of a 7-inch frying pan but a couple of inches deep. Only two or three units were needed to light up each cherry tree. They were very bright, but in a manner that was pleasing to the eye and did not overpower the delicate texture of the cherry blossoms. The park claims to have reduced CO2 emissions by 90% to 0.2 tonnes by using LED instead of conventional lighting. I imagine LED lighting will become widespread pretty soon, not just in Japan but around the world too.
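The advertised figures are easy to sanity-check with a bit of arithmetic (the electricity tariff below is my own assumption for illustration, not from the advertisement):

```python
# Quick check of the LED lightbulb figures quoted above.
led_watts, incandescent_watts = 7.5, 60.0
lifetime_hours = 40_000

# 40,000 hours of continuous use, in years:
years_continuous = lifetime_hours / (24 * 365)
print(f"{years_continuous:.1f} years")   # ~4.6, matching the "about 4.5 years" claim

# Energy saved over the bulb's lifetime versus a 60 W incandescent,
# valued at an assumed tariff of A$0.20 per kWh.
kwh_saved = (incandescent_watts - led_watts) * lifetime_hours / 1000
print(f"{kwh_saved:.0f} kWh saved, worth about A${kwh_saved * 0.20:.0f}")
```

Even at a hypothetical A$0.20/kWh, the lifetime energy savings comfortably exceed the AUD50-100 purchase price, which helps explain why ordinary households are buying them.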

Lara Bingle and the cost of privacy

My colleagues at the Law School have just written an interesting analysis of the Lara Bingle nude photo case. They think she doesn’t have a strong legal case based on either privacy law or defamation law. Lara appears to be earning a tidy sum from the publicity generated, so I suppose it’s not an entirely bad strategy. The Bingle incident is one of an increasing number of clashes among the conflicting goals of maintaining privacy, copyright protection, and freedom. It is tempting to blame the technology (cellphones, cameras, iPhones, etc.), and to suggest that people should not be allowed to take photographs or videos unless permitted. Countries like the UK and USA now have strict but vague rules on what you can photograph. The problem is that it is difficult to articulate what these parameters should be in a way that is generally acceptable. This creates high enforcement costs and generates unfortunate incidents where people are stopped for doing seemingly legitimate things. Blanket bans do not work well and lead to a climate of censorship and fear. Instead of focusing on the creation of images, a better solution is to concentrate on managing how images are used. Allow people to take photos and videos unfettered. There are so many photos and videos being taken these days that most of them will never see the light of day anyway. Meanwhile, establish clearer guidelines on what kinds of images may not be used for various applications: the arts, news, online blogs, commercial advertising and education (also, in each case be clear whether permission is needed from those in the image). While this suggestion may not entirely solve the problem, it will at least take us partway there. Social and legal systems have some way to go before catching up with the reality of living in a media-rich world.

Is Secrecy Always A Good Thing? The Tale of Apple Aperture vs Adobe Lightroom

Apple is known for its penchant for secrecy. Products are developed as top-secret projects and unveiled to the public with great fanfare. This has brought it tremendous benefit, for example during the dramatic launch of the iPhone by Steve Jobs (http://www.youtube.com/watch?v=vZYlhShD2oQ#t=2m20s). However, secrecy carries costs, and in some cases the costs outweigh the benefits. Yet Apple retains this approach across a whole range of its products; secrecy is apparently “baked into the corporate culture” (http://www.nytimes.com/2009/06/23/technology/23apple.html). Consider Aperture 3.0, the newly updated photo-management product by Apple aimed at professional photographers. It was launched last week following Apple’s usual “secret till the last minute” approach. It is instructive to compare Aperture to Lightroom, a very similar product by Apple’s rival Adobe, which has taken a very different approach.

There have been two effects of the secrecy surrounding Apple’s Aperture 3.0. First, the direct effect of launching poorly-tested software. Twitter and the Apple forums are full of complaints by anguished customers who have been unable to upgrade older photo libraries (e.g., http://discussions.apple.com/thread.jspa?threadID=2331026). No doubt there is a selection bias and users with a trouble-free experience are less likely to visit these forums and complain. But this is hardly the “awesome” and polished experience that is expected from Apple, a company that uses “it just works” as a tagline. Among the reports are complaints by customers whose computers have totally frozen during the upgrade, those who succeeded in upgrading but then found it unstable, and those who gave up but were unable to reinstall earlier versions of that software. It is clear from these reports that Aperture 3 was insufficiently tested before being sold, especially against real-world photo libraries in use by existing users.

A second effect of secrecy is that professionals have been increasingly adopting Adobe Lightroom. While the buzz of unveiling a new product may matter for consumer-oriented products like the iPhone or iPad, Aperture is aimed at professional photographers, design companies and media organisations. For this audience, surprise may be less important and even counterproductive. Instead, advance knowledge of upcoming features and a stable product at launch time are probably more important. These allow clients to anticipate changes and plan for their integration into existing workflows and business processes.

In contrast to Apple, Adobe has taken a different approach with Lightroom. In October last year it launched the new version as a public beta, available for anyone to download and try for free (the software expires automatically at the launch of the actual product). The public beta gives Adobe precious information from real-world customers on a massive scale. In addition, customers are able to experiment with features likely to be included in the final version, rather than being kept in the dark with no way to anticipate and plan their own businesses around Adobe’s roadmap. Lightroom has its share of detractors, but generally the response online has been positive. The important thing to point out is that Adobe isn’t one of these “open source” players. Lightroom is commercial software that is quite expensive and the guts of the software are heavily protected. However, by being less secretive than Apple, Adobe is able to engage better with its customers. This applies not just to the public beta: in earlier versions of Lightroom, Adobe took a more open stance towards allowing third-party plugins and introducing user-created presets.

Looking more broadly, my sense is that Apple’s secrecy is costing it not just with Aperture but also with other recent product launches. For example, iPad developers are in a scramble to develop software for the new device, which ships in about two months. Apparently even Apple’s close allies were introduced to the iPad just weeks before it was publicly announced. Even Apple’s new Snow Leopard operating system had its share of bad surprises after it was launched, including some cases of data corruption. To this day, none of my colleagues are able to print from it to our enterprise-quality printer down the hallway using the Safari web browser or the Preview tool without causing the software to crash. The lesson to be learnt is that while secrecy may be useful for some products, firms (including Apple) should revisit whether they need to be secretive across all their products.

Do share your thoughts and comments on our discussion board.

Update on 17 March 2010

A quick update – after writing this article I received a surprising number of emails. Quite a few photographers and media professionals wrote to say they agreed with my perspective. A few disagreed, including some folks who said Adobe also had its share of problems. A few people also wrote to complain that I am biased and “anti-Apple”; I contend this is untrue, given that I personally own a lot of Apple products.

A couple of people asked what the benefits of secrecy are. To give a quick answer, it generates greater consumer buzz when the product is launched (as mentioned). In addition, secrecy is one of the mechanisms by which firms attempt to protect intellectual property (e.g., the oft-told story of Coca-Cola’s secret recipe). Keeping something secret may also help prevent competitors from hiring the relevant people to develop similar products, although this is controversial as it depends on how scarce the relevant skills are. I hope this helps give my article some balance. I’m not saying secrecy is bad in general, but that it should be used when appropriate. It may be somewhat less effective for professional than for consumer products, especially software, which involves network effects and benefits from a cohesive developer community.

A spokesperson from Apple wrote to me to say that a number of photographers did work with Apple on the beta prior to launch (but as I understand it from people in the industry, this was a private beta and a non-disclosure agreement was involved). Apple also said many of the issues have been addressed in a recent upgrade to the software, and they dispute the market share data used by John Nack which I linked to in my article. They also made a few other points. I am sharing this so that their view is represented, and they are welcome to post a reply too; however, I don’t think it takes away from the main points of my article. Subsequent to my post, I learnt that Apple’s secrecy was also a concern raised by various photography blogs (e.g., http://photofocus.com/2010/02/17/aperture-3-0-very-cool-but-not-ready-for-prime-time/). Moreover, the extensive fixes that were made soon after Aperture’s release would not have been needed in the first place had the software been properly field-tested. Fundamentally, secrecy means missing out on engaging with the professional community and developers in an extensive way prior to the product’s launch. That is the price to pay, and while in some cases this is worthwhile, in other cases it is not a net benefit.