The more important aspects of the verdict are that it found Apple’s patents to be valid and that Samsung willfully and knowingly copied Apple.
Apple has won a massive victory in the latest round of its dispute with Samsung. Part of the case concerns patents, and part concerns “trade dress” (the look and feel of the iPhone).
The $1bn award sounds like a lot, but it isn’t really the most interesting part of the decision. The RIM/BlackBerry case was much narrower but produced a $600m+ decision some years back. The more important aspects of the verdict are that it found Apple’s patents to be valid and that Samsung knowingly copied Apple. The validity of Apple’s patents will probably allow it to earn a healthy stream of licensing revenue from other smartphone companies well into the future. It will also give a much-needed jolt to the rest of the industry to explore different technological trajectories and to develop smartphones that resemble the iPhone less closely. The willful nature of Samsung’s copying is, I believe, why the jury reached a surprisingly quick decision when many had expected a protracted case: once the jurors concluded that Samsung had willfully copied Apple, it was only a small step to find that Samsung infringed across a broad range of its products (see this chart at The Verge). Very bad news for Samsung.
Some people view this as part of Steve Jobs’s vendetta against Google, which created the Android operating system running on Samsung’s phones. While this may or may not be true, it is not the whole story. Android is quite versatile, and it is possible to build a diverse and novel ecosystem around it without copying the iPhone. One example is Sony, with its aesthetically elegant Xperia phone and Android-based Walkman. Another is Nikon, which has just released an Android-based camera that is only an iteration away from becoming an actual phone.
No doubt the Samsung/Apple ruling will be appealed, but it will inevitably shape the future of smartphones.
Blue Ocean strategies promise to break the tradeoff between costs and willingness to pay. But they don’t really. The tools offered by the blue ocean approach, such as the strategy canvas and the ERRC framework, are useful, but they are useful irrespective of whether your ocean is blue, red or some other colour.
Yesterday my MBA students and I discussed “Blue Ocean Strategy”, a popular book on strategic management by Kim and Mauborgne. A good thing about the book is that it encourages managers to be innovative and to pursue new markets rather than competing in highly competitive existing arenas, i.e., playing in “blue oceans” instead of “red oceans”. According to the authors, this way of thinking has worked well for companies like Cirque du Soleil, Nintendo and Casella, an Australian firm that has succeeded in selling easy-to-drink wine in the US. Managers are encouraged to use the Strategy Canvas as an organizing framework (see here for an example). The canvas prompts managers to ask whether their products and services are really distinct after all, and along what dimensions they actually differ from the competition.
So far so good. But the problem is that in their enthusiasm, Kim and Mauborgne go on to make the tantalizing claim that the blue ocean approach allows you to break the tradeoff between pursuing differentiation and low costs. This puts them at odds with many leading strategy textbooks, which argue that it is often difficult for firms to increase consumer “willingness to pay” (WTP) while simultaneously reducing costs, all else being equal. You usually have to spend money on R&D, marketing and better execution in order to increase WTP. The “blue ocean” claim leads to all sorts of confusion among MBA students.
Does the blue ocean approach actually offer a silver bullet? Unfortunately not. The truth lies in the details. For a blue ocean strategy to work, you aren’t just supposed to add new activities that increase willingness to pay. You are also supposed to look for opportunities to eliminate or reduce others in order to cut costs. This is presented as the “ERRC” framework (p. 35 of the book), which asks managers to raise and create new dimensions for their product or service while eliminating or reducing others. For example, Cirque du Soleil increased willingness to pay by introducing Broadway-style themes, artistic music and dance, and better stage lighting to its productions. Meanwhile it reduced costs by eliminating animal shows and star performers, both of which are major cost components for a circus.
From the above it should be apparent that you still face a tradeoff between costs and willingness to pay. But you are just avoiding it by removing some of the costly activities. In other words, it isn’t the case that all else is equal. If Cirque du Soleil were able to offer all the new features in addition to having animals and circus stars (but at no marginal cost), then it would be legitimate to make a claim that the cost-WTP tradeoff had been broken. But fundamentally this tradeoff remains, and while the exciting new features enabled Cirque du Soleil to differentiate themselves from ordinary circuses and to increase ticket prices, the removal of animal shows and star performers inevitably meant that some customers who valued those things were now less willing to pay for a show.
Overall, the strategy canvas and blue ocean approach are useful because they encourage managers to think outside the box when looking for new competitive opportunities. But personally I find the distinction between blue and red oceans somewhat forced, especially when you realize that a firm produces multiple products, and these are likely to fall along a spectrum ranging from red to blue and beyond. So while the Nintendo Wii was a blue ocean product, other Nintendo products at the time, such as the DS, clearly were not. In a fundamental sense, increasing WTP and reducing costs are complementary (Athey & Schmutzler, 1995). Hence, finding new and innovative opportunities to increase WTP and reduce costs is something a manager ought to do anyway, regardless of whether their ocean is blue, red, purple or some other colour.
I recently began using a new writing tool, iA Writer. It is one of a slew of new “minimalist” writing programs, alongside OmmWriter and WriteRoom. They help you focus on actually writing rather than tinkering with fonts, layouts, hyperlinks, grammar checkers and other distractions. I was led to search for a new writing tool by Redmond’s Law of Large Numbers, which states that a sufficiently large and complex document will definitely crash Microsoft Word. I have been revising a paper for a journal, and when Word began crashing every ten minutes, I realized I was totally distracted by having to restart my word processor and guess what changes had actually been saved. I was no longer focused on writing.
Initially I was skeptical, thinking a minimalist tool was nothing new, just a modern version of Vi/Emacs or any of the LaTeX editors I’ve used. But it turns out to be a different user experience after all. Even compared to those, iA Writer is distraction free. There is no way to underline or italicize text. There are no styles, hyperlinks, colors or fonts. There are no obscure Control/Alt/Esc commands to remember. There are, however, numbered headings, which are useful. The overall effect is that your mind stays focused on paragraph structure, flow and generating interesting content.
The experience isn’t like using Notepad (Windows) or TextEdit (Mac) either. One interesting feature of iA Writer — probably its only feature — is “focus mode”, which highlights the sentence currently being edited and fades everything else into grey. This keeps your attention squarely on clarifying exactly what you are trying to express in the current sentence. I like that a lot. Oh, and it does look great on screen, a bit like the typewriters of days gone by.
iA Writer syncs to Apple’s iCloud, so you can edit on your Mac, iPhone or iPad and not worry about backups. You can roll back to different versions using iCloud’s built-in features. If you use Windows, the options include Darkroom, Focuswriter and Writemonkey but I haven’t tried any of those.
Because of its lack of features, a minimalist writing tool isn’t for everything, and certainly not for equation-laden articles. But it is great for first drafts and for writing that is primarily text. I am currently keeping iA Writer as part of my workflow, using it to draft things, then pasting the results into a word processor or other application for layout and finishing. If you have used such a tool, do share your experiences (good and bad) below.
I’m finally getting back to blogging after spending a couple of months traveling and then catching up with work. This week I was invited to speak at a “guru forum” of managers and academics who work in information technology. Among the many issues discussed, two conflicting trends were identified. On the one hand, many corporate organizations are moving towards cloud services and all-in-one outsourced solutions (Oracle, SAP, IBM, …). On the other hand, individuals are moving towards a “bring your own” model, bringing their own computers, e-books, cellphones, iPads and other devices to their workplaces. With the advent of smartphones and social media platforms such as Facebook, computing is becoming more consumer-centric and primarily a means for social interaction, rather than just a tool for specific tasks like word processing and accounting.
These opposing trends create a disconnect at the workplace between the ability of firms to manage and control information (especially proprietary information) versus the desire to give employees flexibility and freedom in choosing the tools they really want to use. My view is that the trend towards consumer-centric computing will dominate the other paradigm. There is no turning back the preferences of modern information workers who grew up with their iPads, Android phones and Kindles. Companies should embrace rather than fight the trend.
How do we solve this “me versus you” problem, i.e., organizing information on multiple devices in a way that separates private information from work and other shared information, easily but manageably? Existing solutions are unsatisfactory because they do not adapt to the different and changing contexts individuals find themselves in. Companies like Apple, SAP and Oracle take a fully integrated approach, letting you run everything on their software and their own cloud solutions, treating each device as a client. Taken to the extreme, you can run entire virtual machines from your own device with everything hosted by the service provider, for example via Amazon EC2 or OnLive. Unfortunately this is often an all-or-nothing proposition, so while it creates separate contexts, operation across contexts is not seamless. You’re basically running separate computers (or syncing to separate clouds) from within your own device, and it is slow and clunky to inter-operate between them.
In contrast, other firms like Dropbox provide services that integrate into your existing applications and folders, but these end up being highly fragmented, especially when it comes to setting permissions and granting access. Each application and each collaborator needs to be authenticated, so coordination can be cumbersome. This week my colleague tried to set up a shared Dropbox folder for the faculty members at our school, and it was a lot more of a hassle than it needed to be, especially inviting each user and getting them to actually sign up to the Dropbox cloud.
The good news is that the solution of the “me versus you” problem is closer at hand than many might think. The architecture for such a solution already exists in products like Google Circles and VMware but is not yet pervasive. Here’s an example of what one such solution might look like. At present most operating systems support multiple workspaces, but for now they are all tied to the same set of permissions and applications. Well, imagine a future in which each workspace on your device is authenticated to different sets of applications and clouds. For example, your device could include a personal workspace that authenticates to Apple and Dropbox and which contains your personal files, apps and Facebook page. A second workspace could authenticate to your office, with the IT system at your office determining what apps and cloud services are made available and which of these you can transfer across workspaces. A third workspace could be one created by your friend so that when you visit her house, her workspace would appear along with some of the data and services from her home network that your friend is willing to share with you.
A small number of us already have something close to this setup running on our computers by using multiple virtual machines that are active simultaneously. But it isn’t the same thing. I’m thinking of something with much more integration than is available in existing virtual machines and with much less of the “heavy machinery” that is needed to support multiple operating systems on the same machine (the action is in the data and apps, not in the operating system itself anymore). I also have in mind something more dynamic, for example with the ability to seamlessly add or remove workspaces when the context around a person changes. In the example above, if your friend defines a workspace that is shared with you when you visit her, that workspace should actually exist in a virtual sense, and it should slide on and off your various devices in a consistent manner including your smartphone, iPad and notebook computer.
Granted, the ideal solution in my head might be a bit far-fetched, but I suspect it will become prevalent in the next several years. I don’t know what it would cost to implement and adopt. However, the fundamental issues are of great concern to industry practitioners such as those attending the IT Guru forum, so I suspect that over the next few years entrepreneurial firms will end up exploring solutions and frameworks along these lines.
Several months ago I wrote about a public forum we organized on the future of book publishing. Our panelists included Piers Pickard (Editorial Director at Lonely Planet), Graeme Connelly (CEO of Melbourne University Bookstore), Nathan Hollier (Manager at Monash University Press), Max Barry (independent author) and Emmett Stinson (Melbourne University lecturer in publishing and communications). Since then, dramatic changes have occurred. Lonely Planet has reorganised while moving aggressively into apps and digital publishing. Amazon has entered the publishing business, bypassing traditional publishers. Books are getting shorter, with efforts like Amazon’s Kindle Singles and TED Books being particularly interesting. Closer to home, Melbourne University Bookstore will be privatised soon. So, I decided to spend some time during the weekend editing the video from our public discussion. The podcast is now online. Please follow the link and watch it if you are interested in book publishing.
Bruno Cassiman and Don O’Sullivan presented several months back on R&D strategy and executive compensation, respectively. Bruno’s talk was on how collaboration on research and development (through open innovation and science linkages) can dramatically affect R&D outcomes. Don spoke on how the structure of executive compensation relates to the valuation of intangible assets.
Thanks to each speaker for allowing us to share their presentations online.
Mainstream Australian retailers need to improve: their online stores lag behind those overseas as well as those of Australia-based eBay traders.
A couple of weeks ago, I ordered two similar items online, one from a company in Sydney and another from Philadelphia. To avoid any trouble at home, let me be vague and just say that both of these items can probably be found on the same shelf in a physical camera store. The item from Philly arrived at my desk in just under a week. Furthermore, I received timely updates about my order, an email from the supplier when the package was shipped and a follow-up message after it arrived. Meanwhile, the order status from the Sydney store went through several stages and got stuck at “ready soon”. When I finally telephoned them, a customer service officer mumbled an excuse and said it would be shipped soon. The item finally arrived today, a total of 15 days from start to finish.
I have had similarly poor experiences ordering a variety of products online from mainstream Australian retailers. Their online presence is often just an afterthought, with high prices, weak product variety, clunky websites and unhelpful customer service. As a result of the strong Aussie dollar, shoppers are increasingly buying online and from overseas. Rather than complain about poor business conditions, Australian retailers need to buck up. I don’t believe it can’t be done, because there is a category of Australian retailers that is already as efficient as those overseas: eBay-based Australian stores. These are small and medium-sized businesses that use eBay as a storefront. One reason they are responsive is that when customers search eBay, overseas competitors’ offerings appear on the same page as theirs, so rivals are not even a mouse click away. A second reason is user feedback. Each time a transaction clears, buyers and sellers can leave feedback about each other, and eBay reports a breakdown of ratings over time (see sample image below). This creates an incentive to continually maintain good customer service so as to avoid a fall in reputation, and it generally seems to work pretty well. I would go further and argue that we should expect even better service from big-name Australian retailers than from these eBay-based stores, but we aren’t receiving anything close to it right now.
Apple’s new Final Cut Pro is causing unhappiness. But it reflects two broader changes: a shift in Apple’s strategy towards consumers and a wider change in the demand for video.
Last week Apple launched the new version of its movie-editing software, Final Cut Pro X (FCPX). This led to a firestorm of criticism from professional video editors (see here, here, and here). Even Conan O’Brien decided to chip in :-). The main complaint is that it lacks sophisticated features used by broadcasters and video professionals that were available in earlier versions of the software. FCPX doesn’t even open projects built with earlier versions! On the positive side, it is slick and easy to use.
Apple is consolidating its strategy
I suspect that two things are happening. The first is that Apple is consolidating its strategy around mobile computing, iCloud and end-customers. The price tag for the new FCPX is an indication (US$299.99 versus around $1,000 for the earlier version). Apple’s move away from “professional” markets has been happening for some time now and across multiple products. It happened with Aperture, which is now basically an upgrade for iPhoto users, and a nice one at that, but distinct from Lightroom+Photoshop. Earlier this year Apple decided to discontinue its professional Xserve rack-mounted server. This year’s MacBook Pro notebook was the first to receive several new high-end features (Thunderbolt and 6Gb/s SATA) that have yet to appear in Apple’s high-end Mac Pro desktop aimed at professionals. This is not a surprise, as Apple now sells 2.4 times more notebooks than desktops.
Focusing on the consumer market makes good business sense for Apple because (a) it fits with their capabilities, which are about making complicated things simple to use, whereas a lot of professional software is by nature complicated and intricate, (b) they can cross-sell many more copies of the software to people upgrading from iMovie or iPhoto than they can to a niche audience of professionals, and (c) it fits well with their major strategic thrust on the iPhone/iPad/iCloud platform, which is consumer focused rather than enterprise focused.
Video consumption is changing
While video professionals are blaming Apple for not listening to their needs, there is a bigger trend at work here: Apple is responding to anticipated changes in the marketplace. Just as with the newspaper and book publishing industries, big changes are happening in video production and broadcasting. An increasing number of videos are being made by “advanced amateurs”, driven by the proliferation of inexpensive video cameras as well as new platforms for online video distribution. When I think about my own consumption of video, I am amazed how little television I watch anymore. I do watch the occasional movie, but an increasing share of my video consumption is on YouTube, Vimeo and other sites, sent to me via Facebook, Twitter and email. Are these videos as well made as those by professional broadcasters? No. But are they good enough for the general public? Often, yes. For these users, the new Final Cut Pro X is, for the most part, a terrific tool.
Beyond traditional video, there are other interesting developments, such as Animoto, which takes the pain out of making simple music videos. There is software like Toontastic that lets you make animated skits, and apparently even the Gans family is now into it. Each minute of free time we spend watching these things is probably a minute less spent watching professionally produced video content.
I’m not saying this to defend Apple. As David Pogue pointed out, in the case of FCPX, Apple Blew It. Some of my friends are in this business and I can’t help feeling concerned for them. As one of them wrote to me, “the industry has gone nuts over this ‘upgrade’. it’s really bad and sad”. I think Apple should have launched FCPX as a different product, instead of discontinuing Final Cut Pro. But the knee-jerk reaction among video professionals right now is leading them to be angry about a video editing tool. Fair enough. But they need to assess the bigger question: where will their industry be in five years, and where do they want to be in their careers?
Here’s an excellent update on Gene Patents covering the year 2010: http://genepatents.info/2011/02/24/gene-patents-2010-update/. It is written by my student Rachel Goh, a 5th year medical student at the University of Melbourne. She discusses the controversy surrounding the Myriad and Monsanto cases in the US and Europe, as well as legal decisions in Australia surrounding breast cancer tests and the Australian Senate review on gene patents. Of particular interest is her observation that we are moving increasingly towards “multi-genomic” tests, so the patenting of individual genetic sequences will cause greater problems for follow-on and systemic innovation. I see here a parallel to software patents and patent thickets, which have been said to have had similar effects. Rachel also included a thoughtful commentary along with her summary.
Traditional book publishers have been increasingly challenged by e-books and other digital technologies. We decided to organize a public seminar with industry participants to learn about new opportunities in this area.
A common theme among our speakers was of the growing fault lines between those who create content and those who distribute it. From the point of view of content creators, digital technology is not a bad thing. It presents new ways to reach customers. To a firm like Lonely Planet, printed books, e-books and apps are alternative and useful delivery mechanisms. The heterogeneity is a good thing since each delivery mechanism has its strengths and weaknesses. For example a map-based application on your mobile phone may be useful for navigating the streets of Melbourne, while a printed travel book might be preferred if you are traveling the Australian outback (books are more durable than electronic devices; they also require no electrical power).
Authors are beginning to explore new pricing schemes. For example, several authors are trying to sell a larger volume of e-books at lower prices (around $2.99 – $3.99) instead of a smaller number of regular books at higher prices (say, $10). Other authors are trying “pay what you want” schemes. Our guest speaker Max Barry will be selling his next book as a real-time electronic serial, distributing it directly from his website in small chunks and for an attractive price ($6.95). It is too early to know which of these will work well, and for whom, because the book industry has many different customer segments with different needs. Furthermore, there are concerns around digital piracy of e-books. However, one of the speakers reminded us that for many authors, obscurity is worse than piracy. Besides, piracy has long been a threat even with printed books: recall the photocopy machine, which has existed for quite a while, as well as those suspiciously inexpensive textbooks printed on poor-quality paper and brought in from various developing countries. It seems to me at least that in the digital world, selling a large volume of e-books at a low price makes a lot of sense. In this context, the serialized e-book has an added advantage because it builds a repeated interaction between the reader and the author. Over time this may help create loyalty towards the author.
I see three areas of opportunity and these arise along the fault lines described above.
The first opportunity is with “apps”. It crossed my mind earlier this month that simply repackaging a book as an app gives the author tremendous freedom. With books, the author is stuck with publishing delays, parallel import laws and other legal impediments, not just the need to physically deliver products. With apps, all that is gone. Re-purpose a book as an app and it morphs into a software program, so different rules apply. If you go one step further and make the app exciting to use, you can counteract the myth that printed books are superior. Those who have tried The Elements on an iPad will find it hard to go back to a printed Periodic Table. Similarly, having compared both this app and the book version, I much prefer learning about photography using the app version which is more interactive and has built-in videos.
A second opportunity lies in offering new combinations of skills. In order to serialize his next novel, Max Barry combined his computer programming expertise with a passion for writing: he is essentially selling each subscriber a private RSS feed as a separate product. Most people do not have this combination of skills, especially the generation of authors who went to journalism school and did not acquire a technical background. An opportunity exists for people who can bridge this divide and provide new tools and services that help content authors craft their products and reach customers easily. For example, Graeme Connelly spoke to us about the new “espresso” printer at Melbourne University Bookstore, which produces small print runs that were uneconomical in the past. I believe this is only a starting point, e.g., we don’t yet have the equivalent of WordPress for creating books, with existing tools being either too complex or too amateurish.
The third opportunity lies in further disaggregating the value chain. I learned from the session that one of the benefits to authors of going with traditional book publishers is their expertise in editing. Publishers convert the messy raw material that is a manuscript into a curated experience that is proof-read, edited and checked. I suspect that the editing activity will split apart into a distinct industry segment, just as has happened in other industries such as semiconductors, which used to be vertically integrated but now has some firms focusing exclusively on system development and others on chip design or manufacturing. This is pure speculation on my part, but I don’t see why the editing process, while valuable, needs to remain tied to the manufacture and distribution of physical products.
It is hard to predict how things will work out and I don’t think the traditional book will completely disappear. This industry is definitely going to be interesting to watch over the next few years.