Podcasts on innovation by Tucci, Cassiman & O’Sullivan, Lim

I recently uploaded video podcasts for a couple of events organized by Melbourne Business School and IPRIA:

  • Chris Tucci presented last week on “Creative Destruction and Intellectual Property: What’s an Incumbent to Do?” Part 1 covers the key concepts and Part 2 presents examples from his research.
  • Bruno Cassiman and Don O’Sullivan presented several months back on R&D strategy and executive compensation, respectively. Bruno’s talk was on how collaboration on research and development (through open innovation and science linkages) can dramatically affect R&D outcomes. Don spoke on how the structure of executive compensation relates to the valuation of intangible assets.

Thanks to each speaker for allowing us to share their presentations online.

In addition, I was recently featured in an interview on the University of Melbourne Up Close podcast. It is about the effect of acquisitions on inventor productivity and is based on my research with Rahul Kapoor.

Apple’s Final Cut Pro is part of a broader strategic change

Apple’s new Final Cut Pro is causing unhappiness. But it is only one part of two broader shifts: a move in Apple’s strategy towards consumers and a change in the demand for video.

Last week Apple launched the new version of its movie-editing software, Final Cut Pro X (FCPX). This led to a firestorm of criticism from professional video editors (see here, here, and here). Even Conan O’Brien decided to chip in :-). The main complaint is that it lacks sophisticated features used by broadcasters and video professionals that are available in earlier versions of the software. FCPX doesn’t even open projects built using earlier versions! On the positive side, it is slick and easy to use.

Apple is consolidating its strategy

I suspect that two things are happening. The first is that Apple is consolidating its strategy around mobile computing, iCloud and end customers. The price tag for the new FCPX is an indication (US$299.99 versus around $1000 for the earlier version). The move by Apple away from “professional” markets has been happening for some time now and across multiple products. It happened with Aperture, which is now basically an upgrade for those using iPhoto, and a nice one at that, but distinct from Lightroom+Photoshop. Earlier this year Apple decided to discontinue its professional Xserve rack-mounted server. This year’s MacBook Pro notebook was the first to receive several new high-end features (Thunderbolt and 6Gb/s SATA) that have yet to appear in Apple’s high-end Mac Pro desktop aimed at professionals. This is not a surprise, as Apple now sells 2.4 times more notebooks than desktops.

Focusing on the consumer market makes good business sense for Apple because (a) it fits their capabilities, which are about making complicated things simple to use, whereas a lot of professional software is by nature complicated and intricate; (b) they can cross-sell many more copies of the software to people upgrading from iMovie or iPhoto than they can to a niche audience of professionals; and (c) it fits well with their major strategic thrust on the iPhone/iPad/iCloud platform, which is consumer focused rather than enterprise focused.

Video consumption is changing

While video professionals are blaming Apple for not listening to their needs, there is a bigger trend at work here. Apple is responding to anticipated changes in the marketplace. Just as with the newspaper and book publishing industries, big changes are happening in video production and broadcasting. An increasing number of videos are being made by “advanced amateurs”, driven by the proliferation of inexpensive video cameras as well as new platforms for online video distribution. When I think about my own consumption of video, I am amazed how little television I watch anymore. I do watch the occasional movie, but an increasing amount of my video consumption is on YouTube, Vimeo and other sites, sent to me via Facebook, Twitter and email. Are these videos as well made as those by professional broadcasters? No. But are they good enough for the general public? Often, yes. For these people, the new Final Cut Pro X is, for the most part, a terrific tool.

Beyond traditional video, there are other interesting developments, such as Animoto, which takes the pain out of making simple music videos. There is software like Toontastic that lets you make animated skits, and apparently even the Gans family is now into it. Each minute of our free time spent watching these things is probably a minute less spent watching professionally produced video content.

I’m not saying this to defend Apple. As David Pogue pointed out, in the case of FCPX, Apple Blew It. Some of my friends are in this business and I can’t help feeling concerned for them. As one of them wrote to me, “the industry has gone nuts over this ‘upgrade’. it’s really bad and sad”. I think Apple should have launched FCPX as a different product instead of discontinuing Final Cut Pro. But the knee-jerk reaction among video professionals right now is leading them to be angry about a video editing tool. Fair enough. But they need to assess the bigger question: where will their industry be in 5 years, and where do they want to be in their careers?

“What really went wrong for Borders and Angus & Robertson” on The Conversation

Final Closing Down Sale at A&R

Today, Melbourne University, along with eight other university partners and the CSIRO, launches a new media venture, The Conversation. It will present independent commentary by academics and researchers. The Business+Economy section features my article “What really went wrong for Borders and Angus & Robertson.” Using publicly available data, I explore several explanations for the demise of REDgroup, the firm that owned and operated the Borders and A&R stores in this region. Drop by to read the article and share your thoughts at:
http://theconversation.edu.au/articles/what-really-went-wrong-for-borders-and-angus-and-robertson-301

Update on Gene Patents 2010

Here’s an excellent update on gene patents covering 2010: http://genepatents.info/2011/02/24/gene-patents-2010-update/. It is written by my student Rachel Goh, a fifth-year medical student at the University of Melbourne. She discusses the controversy surrounding the Myriad and Monsanto cases in the US and Europe, as well as legal decisions in Australia surrounding breast cancer tests and the Australian Senate review of gene patents. Of particular interest is her observation that we are moving increasingly towards “multi-genomic” tests, so the patenting of individual genetic sequences will cause greater problems for follow-on and systemic innovation. I see here a parallel to software patents and patent thickets, which have been said to have had similar effects. Rachel also included a thoughtful commentary along with her summary.

New Models for the Book Industry

MBS/CMCL/IPRIA Seminar on Book Publishing. 9 Feb 2011

Traditional book publishers have been increasingly challenged by e-books and other digital technologies. We decided to organize a public seminar with industry participants to learn about new opportunities in this area.

A common theme among our speakers was the growing fault lines between those who create content and those who distribute it. From the point of view of content creators, digital technology is not a bad thing: it presents new ways to reach customers. To a firm like Lonely Planet, printed books, e-books and apps are alternative and useful delivery mechanisms. This heterogeneity is a good thing, since each delivery mechanism has its strengths and weaknesses. For example, a map-based application on your mobile phone may be useful for navigating the streets of Melbourne, while a printed travel book might be preferred if you are traveling the Australian outback (books are more durable than electronic devices; they also require no electrical power).

Authors are beginning to explore new pricing schemes. For example, several authors are trying to sell a larger volume of e-books at lower prices (around $2.99–$3.99) instead of a smaller number of regular books at higher prices (say, $10). Other authors are trying “pay what you want” schemes. Our guest speaker Max Barry will be selling his next book as a real-time electronic serial, distributing it directly from his website in small chunks and for an attractive price ($6.95). It is too early to know which of these will work well and for whom, because the book industry has many different segments of customers with different needs. Furthermore, there are concerns with e-books around the issue of digital piracy. However, we were reminded by one of the speakers that for many authors, obscurity is worse than piracy. Besides, piracy has long been a threat even with printed books: you will of course remember the photocopy machine, which has existed for quite a while, as well as those suspiciously inexpensive textbooks printed on poor-quality paper brought in from various developing countries. It seems to me at least that in the digital world, selling a large volume of e-books at a low price makes a lot of sense. In this context, the serialized e-book has an added advantage because it builds a repeated interaction between the reader and the author. Over time this may help create loyalty towards the author.
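To put some rough numbers on that intuition, here is a back-of-the-envelope sketch in Python. Every price, royalty rate and sales volume below is a made-up assumption for illustration, not a figure from the seminar.

```python
# Hypothetical comparison of author earnings under two pricing strategies.
# Every figure here (prices, royalty rates, volumes) is an illustrative assumption.

def author_earnings(price, royalty_rate, copies_sold):
    """Earnings to the author from a single title."""
    return price * royalty_rate * copies_sold

# Assumed traditional print deal: higher cover price, lower royalty rate.
print_deal = author_earnings(price=10.00, royalty_rate=0.15, copies_sold=5_000)

# Assumed self-published e-book: low price, but a much higher royalty rate.
ebook_deal = author_earnings(price=2.99, royalty_rate=0.70, copies_sold=5_000)

print(f"Print deal : ${print_deal:,.0f}")
print(f"E-book deal: ${ebook_deal:,.0f}")

# How many e-book copies must sell to match the print deal's earnings?
breakeven_copies = print_deal / (2.99 * 0.70)
print(f"E-book copies needed to break even: {breakeven_copies:,.0f}")
```

Under these hypothetical assumptions, the author actually earns more per copy from the $2.99 e-book than from the $10 print book, so even a modest increase in volume leaves them ahead; the real question is whether the lower price delivers that volume.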

I see three areas of opportunity and these arise along the fault lines described above.

The first opportunity is with “apps”. It crossed my mind earlier this month that simply repackaging a book as an app gives the author tremendous freedom. With books, the author is stuck with publishing delays, parallel import laws and other legal impediments, not to mention the need to physically deliver products. With apps, all that is gone. Re-purpose a book as an app and it morphs into a software program, so different rules apply. If you go one step further and make the app exciting to use, you can counteract the myth that printed books are superior. Those who have tried The Elements on an iPad will find it hard to go back to a printed Periodic Table. Similarly, having compared both this app and the book version, I much prefer learning about photography using the app version, which is more interactive and has built-in videos.

A second opportunity lies in offering new combinations of skills. In order to serialize his next novel, Max Barry combined his computer programming expertise with a passion for writing: he is essentially selling each subscriber a private RSS feed as a separate product. Most people do not have this combination of skills, especially the generation of authors who went to journalism school and did not acquire a technical background. An opportunity exists for people who can bridge this divide and provide new tools and services that help content authors craft their products and reach customers easily. For example, Graeme Connelly spoke to us about the new “espresso printer” at Melbourne University Bookstore, which produces small print runs that were uneconomical in the past. I believe this is only a starting point; for example, we don’t yet have the equivalent of WordPress for creating books, with existing tools being either too complex or too amateurish.
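As an aside on the mechanics, here is a minimal sketch of how a serialized book could be delivered as a per-subscriber RSS feed. The chapter titles, URLs and token scheme are all hypothetical; this is not a description of how Max Barry actually built his system.

```python
# Minimal sketch: a serialized book delivered as an RSS feed, one item per
# installment. All titles, URLs and the subscriber-token scheme are hypothetical.
from datetime import datetime, timezone
from email.utils import format_datetime
from xml.etree import ElementTree as ET

installments = [
    ("Part 1: The Beginning", "https://example.com/serial/part-1"),
    ("Part 2: Things Escalate", "https://example.com/serial/part-2"),
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "A Serialized Novel (hypothetical)"
ET.SubElement(channel, "link").text = "https://example.com/serial"
ET.SubElement(channel, "description").text = "New installments delivered as they are written."

for title, url in installments:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = title
    # Embedding a per-subscriber token in the link is one simple way to make
    # each feed private to the person who paid for it.
    ET.SubElement(item, "link").text = url + "?token=SUBSCRIBER_TOKEN"
    ET.SubElement(item, "pubDate").text = format_datetime(datetime.now(timezone.utc))

print(ET.tostring(rss, encoding="unicode"))
```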

The third opportunity lies in further disaggregating the value chain. I learned from the session that one of the benefits to authors of going with traditional book publishers is their expertise in editing. Publishers convert the messy raw material that is a manuscript into a curated experience that is proofread, edited and checked. I suspect that the editing activity will split off into a distinct industry segment, just as has happened in other industries such as semiconductors, which used to be vertically integrated but where some firms now focus exclusively on system development and others on chip design or manufacturing. This is pure speculation on my part, but I don’t see why the editing process, while valuable, needs to remain tied to the manufacture and distribution of physical products for much longer.

It is hard to predict how things will work out and I don’t think the traditional book will completely disappear. This industry is definitely going to be interesting to watch over the next few years.

New paper on research funding

Fiona Murray and I have just completed a new paper entitled “Funding Scientific Knowledge: Selection, Disclosure and the Public-Private Portfolio” (abstract over the fold). It was part of the 50th Anniversary of the NBER’s Rate and Direction of Inventive Activity conference. The volume is coming out later in the year. For now, you can download the paper here.

Assessing the benefits of the NBN

My colleague Richard Hayes is working on a project to analyze various methodologies that could be used for assessing the benefits of a national broadband network (a companion project exists on the cost side). Richard recently described key aspects of his project on ZDNet’s Twisted Wire program.

The main thing I’ve learnt from that podcast is that an accurate and precise measure of the NBN’s benefits will be difficult to calculate. There are two constraints: the first is the availability of data, and the second is our difficulty in estimating externalities across economic sectors. For example, one approach would be to estimate a discrete choice model, asking people to choose between hypothetical bundles of broadband options. This would provide an estimate of their willingness to pay for specific characteristics. The approach would require data that does not currently exist, and even if such data were obtained (e.g., through surveys), it is unclear whether people can accurately assess their utility for broadband-related goods and services that do not yet exist. A broader approach involves using a Computable General Equilibrium model, which would yield an economy-wide estimate of the NBN’s impact on activity, but such models are especially difficult to implement where there are lots of interdependencies (as with broadband). I also learned from the podcast that some benefits are easier to quantify than others, especially those from applications already in use by large existing organizations.
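To make the discrete choice idea concrete, here is a minimal sketch of how willingness to pay might be estimated from stated-preference survey data using a simple binary logit. Everything in it, the attributes, the “true” coefficients and the simulated respondents, is a hypothetical illustration rather than the methodology Richard’s project uses or any actual NBN data.

```python
# Hypothetical sketch of the discrete choice / willingness-to-pay (WTP) idea.
# The survey data is simulated; the "true" preferences are assumptions, not NBN data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000                                   # hypothetical survey respondents
speed = rng.uniform(10, 100, n)             # offered download speed (Mbps)
price = rng.uniform(30, 120, n)             # monthly price (dollars)

# Assumed "true" utility: each extra Mbps is worth $0.50 per month (0.05 / 0.10).
utility = 0.05 * speed - 0.10 * price + rng.logistic(size=n)
chose_bundle = (utility > 0).astype(int)    # 1 = respondent says they would buy

X = np.column_stack([speed, price])
model = LogisticRegression(C=1e6).fit(X, chose_bundle)   # effectively unpenalized
b_speed, b_price = model.coef_[0]

# WTP per Mbps = marginal utility of speed divided by the marginal utility of money.
print(f"Estimated WTP: ${b_speed / -b_price:.2f} per Mbps per month")
```

With the assumed parameters the estimate should come out near $0.50 per Mbps per month; the hard part in practice, as the podcast makes clear, is obtaining real survey responses that mean anything for services people have never used.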

It’s not entirely clear what this implies. However, it seems to me that we can learn from how R&D projects are managed within large firms. Perhaps we should stop looking at the NBN as an all-or-nothing investment. It is probably not realistic to do a complete ex-ante analysis that matches incremental costs to incremental benefits. However, by breaking the project up into stages (geographically or by some other criterion), one could postpone the decision on whether to do later stages until additional information is obtained. Consider the example of Google’s decision to build a fiber broadband network for communities in the US. It would be difficult for Google to value the overall benefits of this network ex-ante. But that hasn’t stopped it from trying out this “experiment” with a few communities initially, with the possibility of scaling up later. Shouldn’t we take a similar approach with the NBN?
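Here is a toy numerical illustration of the option value of staging. All of the costs, payoffs and probabilities are made up for illustration; they are not estimates of the NBN’s actual economics.

```python
# Toy illustration of why staging an investment has option value.
# Every figure (costs, benefits, probabilities) is a made-up assumption.

p_high = 0.5                                  # assumed chance demand turns out high
pilot_cost, pilot_benefit = 5.0, 4.0          # $bn: a small first stage
remaining_cost = 35.0                         # $bn: cost of completing the rollout
extra_benefit_high, extra_benefit_low = 56.0, 16.0  # $bn: benefit of the later stages

# Option A: commit to the full network before demand is known.
ev_commit = (pilot_benefit - pilot_cost - remaining_cost
             + p_high * extra_benefit_high
             + (1 - p_high) * extra_benefit_low)

# Option B: build the pilot, observe demand, and only finish the rollout
# in the states of the world where it is worth doing.
ev_staged = (pilot_benefit - pilot_cost
             + p_high * max(extra_benefit_high - remaining_cost, 0.0)
             + (1 - p_high) * max(extra_benefit_low - remaining_cost, 0.0))

print(f"Expected value, commit up-front: {ev_commit:+.1f} $bn")
print(f"Expected value, staged rollout:  {ev_staged:+.1f} $bn")
```

Under these assumptions the staged approach is worth more simply because the later stages can be abandoned if the pilot reveals low demand; that abandonment option is exactly what an all-or-nothing commitment gives up.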

e-books are overtaking printed books

Australia’s Radio National recently did a radio program on e-books at the Brisbane Writers Festival. Of the four panelists, only one actually owned an electronic book reader. A number of benefits of e-books were cited, including convenience of purchase, lower book prices (especially compared to the prices of printed books in Australia), and better access from rural locations. However, the overall impression was that printed books and traditional bookstores will continue to exist for some time. One of the panelists stated that printed books will still constitute 70% of the market within a decade. Another panelist felt that bookshops will continue to exist because they are a nexus of social activity.

Let me be the first to say that I love bookshops and have a large library of printed books. That said, these people clearly did not get the memo from Jeff Bezos that the number of e-books sold by Amazon has already overtaken hardcover books, and will overtake paperbacks by next year. The recent launch of the iPad, multimedia e-books, and this week’s launch of the third-generation Kindle (only US$139) are going to accelerate the process. Having used both e-books and printed books for some time, all I can say is that many of the complaints people mentioned in the podcast have been addressed, or are being addressed, in the newer e-book readers. Change is happening faster than many people think. This week alone I bought 7 books on Kindle for a course I’m teaching, and I have no complaints.

One way to address the gap between perception and reality is to let more customers get their hands on an e-book reader, such as at retail outlets and other public places. From personal experience, people who complain about e-books are often surprised by how usable they are once I’ve put an actual device into their hands for the first time. I’ve also noticed that at a lot of places where e-book readers are sold, they are displayed all wrapped up or inside glass cabinets, rather than in a way that invites people to experience them. This is something e-book retailers such as Amazon and B&N should address, perhaps taking a page out of Apple’s book to make the shopping experience much more hands-on.

Videos now available for “Who Owns The News?” seminar


Last week MBS hosted a public seminar, “Who Owns the News?”, exploring the impact of the internet on the news industry. The event was organized by IPRIA, CMCL and MBS CITE. It served to clarify the key issues and laid the groundwork for a discussion of them. I had fun and hope that the 110+ people who attended did too.

Sam Ricketson, Professor at Melbourne Law School, chaired the event and did a great job orchestrating the Q&A session. Mark Davison from Monash spoke about changes in copyright law and expressed concerns over the “Hot News” doctrine, an approach currently being proposed by news organizations in the US to prevent others from copying their content. Stephen King outlined the economic issues and has posted his very thoughtful comments at https://economics.com.au/?p=5909.

As the discussant, I described what I had learnt from Mark and Stephen and also tried to consider the various options faced by a CEO in this industry. My pdf slides are at http://works.bepress.com/kwanghui/18. While my comments might have been perceived as pessimistic by Stephen and others, I am actually quite optimistic about the future of the industry, but mainly for individuals and firms trying out innovative ways of gathering and delivering the news. I am, however, pessimistic about existing firms: if history has taught us anything, it is that many of them will struggle to adapt to these drastic changes.

The video recordings for “Who Owns the News?” are now available. I have posted them at http://vimeo.com/album/253549. Portions were removed to protect the identity of audience members. We thank the speakers for permission to share their insights online. Enjoy the show!

NTP Sues Apple, Google, Motorola, HTC, LG, Microsoft

Last year David Weston and I wrote a teaching case on how, in 2000, NTP sued Research in Motion (maker of the popular BlackBerry device) for infringing its patents covering the wireless delivery of email (free download from WIPO). Well, NTP is at it again, and has just sued a number of firms that make smartphones, including Apple, Google, LG, Motorola, HTC and Microsoft. The Washington Post has a brief description of the patents. The earlier case ended with a $600+ million settlement, but that large amount was partly the result of (a) RIM being found to have willfully infringed NTP’s patents and to have attempted to deceive the court when presenting evidence of “prior art” in 2002, and (b) the very real threat, as the case escalated, of RIM having its US operations closed down in 2005. A number of the original patent claims were subsequently revoked, but I imagine that NTP is hoping that the larger base of email users these days will give it enough licensing revenue from each of the mobile operators.

If you haven’t heard of NTP, that is because the company is sometimes thought of as a patent troll and is not well loved. In my opinion, the lawsuit also highlights a more subtle problem with the patent system. When successful firms like RIM and Nokia choose to settle with companies like NTP, it gives NTP an incentive and the financial resources to then attack a broader group of other firms. A precedent is also set. It would be better if such firms fought back, e.g., by establishing prior art that invalidates such patents or by pushing back on the claims.