How to measure innovation: a quick guide for managers and leaders

Over the past couple of months, I have received multiple requests to explain how innovation can be measured. The Covid-19 pandemic has caused managers at many organisations to consider innovating for the first time, as established business models came under threat and firms began exploring new markets, products or services.

Here is a short note I thought I’d share for those interested. If you have thoughts, comments or resources to share, please post them in the comments section.

Measure diligently, but be careful what you measure.


Opportunities for innovation in Australia

The Australian startup ecosystem is growing too slowly, but existing firms are becoming more interested in innovation as a source of competitive advantage.

Students brainstorming during the MBS Innovation Bootcamp

Australia performed poorly in the recently published Global Startup Ecosystem Ranking 2015 (http://goo.gl/UXcGcO). Sydney fell 4 spots and now ranks 16th in the world, while Melbourne fell entirely out of the top 20, despite appearing on that chart in the previous version of the report three years ago. The study expresses concerns about the Australian ecosystem that echo those in other academic studies as well as in the Australian Government’s Innovation System Report (http://goo.gl/kvQZhK). The 2014 AIS report sums it up nicely: “Australia performs relatively poorly on ‘new to market’ innovation”.

Yet on the ground, interest in innovation and startups has never been stronger in Australia. Compared to five years ago, we now have many more ‘meetup’ groups in Melbourne and Sydney for founders and entrepreneurs, a variety of incubators and accelerators, and a number of innovation-oriented programs at leading universities including Melbourne, UTS, Swinburne and QUT. There is strong interest in courses on “design thinking” and “lean startups”. MBS runs an innovation bootcamp for MBA students, while the University of Melbourne now has an accelerator and is about to launch a new Masters in Entrepreneurship program. A growing number of entrepreneurs are contacting me to discuss new business models, market entry and how to protect their innovations. These initiatives will take time to bear fruit.

How do we reconcile the weak findings at the ecosystem level with growing interest on the ground? Part of what’s happening is that other startup ecosystems are maturing faster than Australia’s. Many ecosystems abroad continue to enjoy stronger government support, better access to venture capital and closer industry-university linkages. The most successful ecosystems (including Silicon Valley, New York, Los Angeles, Boston and Tel Aviv) have continued to develop and reinforce a coherent system connecting resources, talent, funding and market access. Here in Australia, each major city has good bits and pieces, and we have specific firms and sectors that are incredibly innovative. But that distribution is uneven, and the parties involved are not as seamlessly interconnected as they could be.

A second part of the explanation is due to the business environment in Australia. Given our small domestic market, many of our startup entrepreneurs will continue to sink at least one foot (if not both feet) into other ecosystems. This makes sense from the point of view of being close to market and expertise.

A big change however is the growing interest in innovation by existing firms. In recent years, incumbent firms in industries ranging from retail to energy, news and financial services have been jolted out of a comfortable (often monopolistic or duopolistic) existence due to the threat of entrants, both online and offline.

The embrace of innovation by Australian firms has taken a long time, partly due to the difficulty of changing the mindsets of the senior executives who run these organisations. However, it is clear that in a variety of industries across the globe the terms of competition have changed, and Australia is no exception. In conversations with senior managers at Australian organisations, I am discovering a growing interest in innovation strategy, business transformation, ‘design thinking’ and ‘business model innovation’. These conversations often begin with a reactive or defensive tone, reflecting a need to respond to market or technological threats. However, at some organisations the discussions have begun to advance beyond that stage: managers are starting to view innovation as an opportunity to reconsider existing ways of doing things, engage new stakeholders and develop new capabilities.

In the short run, I see a good opportunity in helping existing Australian firms learn to innovate and become more agile and competitive. In the longer run, it would be nice to see the startup ecosystem flourish in Australia, but that is something that will take time and sustained effort.

Note: I was invited to write this article for the Melbourne Business School student newsletter. It is reprinted above, slightly edited.

Should you activate fingerprint authentication on your new iPhone (or other mobile device)?

Bottom line: if you care about security you should avoid activating fingerprint authentication. Use an alphanumeric password in place of the 4-digit PIN and deal with the inconvenience. If you don’t care much about security but are careless about where you leave your phone or which networks you connect to, you should also probably skip it. For everyone else, it depends on your risk appetite. Good luck.

Yesterday, Apple launched two new iPhones. The flagship model, the 5s, is impressive and includes many new features, among them fingerprint-based authentication. It is part of a trend towards using biometrics on mobile devices, e.g., facial recognition on Android and voice recognition on the new Moto X.

The use of fingerprint authentication is not new (a family member has it on their Lenovo notebook), but deployment by Apple usually signals the onset of mainstream adoption. At present the iPhone offers it as an option, so you can still choose to use a traditional password instead. The main benefit of fingerprint technology is slightly faster unlocking than entering a PIN code. Apple’s implementation is also said to be accurate and fast, unlike some earlier consumer-oriented efforts. At present Apple allows its use for iTunes and App Store purchases, but one can imagine third-party applications are around the corner.

Before you activate this system, you should weigh several considerations. Online forums are abuzz about whether your fingerprint can be spoofed, whether the NSA might be spying on you, and whether you can be legally forced to unlock your device. In turn, Apple has tried to allay fears by stating that your fingerprint only exists in a “secure enclave” on the phone (strictly speaking, it stores an electronic description rather than an image of your actual finger). Beyond those debates, there are several issues that I believe need consideration:

1. It is hard to replace your fingerprint.
If your password is compromised, you can just revoke it and create a new one. Replacing your finger can probably be done, but it will involve a bit of pain. If you lose your phone and a hacker gets in, or if someone is able to remotely access your fingerprint data, the personal costs may be rather high. We also have no information about how cleanly (if at all) the data is erased when you sell or recycle your phone; can the data be extracted afterwards?

2. The fingerprint encryption scheme will be hacked.
This is not a possibility but a certainty. The only questions are how long it will take and whether you will get to hear about it. People are worried that the NSA is helping Apple keep a backup copy of the master encryption key (i.e., can you trust them to keep it secret, given that they lost thousands of documents to a junior contractor without knowing it?). But the problem is more fundamental than that: in order to make use of the encrypted data, your phone must contain the key. This is unlike a design in which a password is kept separate from your encrypted fingerprint data, or one in which a password (or some other security token) is required in addition to your fingerprint data. Keeping the decryption key on the device makes it vulnerable: with enough effort the key will be recovered, or some weakness in the encryption software will be found. If you think you have heard this story before, it’s because the same thing happened with DVDs. Any DVD player must contain the decryption key and the mechanism for using it, otherwise you wouldn’t be able to view the movie on the disc. When DVDs were launched, manufacturers thought their encryption was sufficient, but they were quickly proven wrong. The same thing happened with Blu-ray.
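The structural weakness can be sketched in a few lines of Python. This is a deliberately simplistic toy (repeating-key XOR standing in for a real cipher, with made-up storage fields) and emphatically not Apple's actual scheme; the point is only that when the key must live on the device next to the ciphertext, an attacker who can read the device's storage needs no cryptanalysis at all.

```python
# Toy illustration only: XOR "encryption" stands in for a real cipher,
# and the storage layout is invented for this example.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key is symmetric: the same function
    # both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The sensor must be able to decrypt the template to match fingerprints,
# so the key has to live on the device, right next to the ciphertext.
device_storage = {"key": b"on-device-key"}
template = b"hypothetical fingerprint template"
device_storage["blob"] = xor_cipher(template, device_storage["key"])

# An attacker with full read access to device storage just reuses the key:
recovered = xor_cipher(device_storage["blob"], device_storage["key"])
assert recovered == template
```

Real systems use proper ciphers and hardware protections, but the structural problem is the same one DVDs had: the secret needed to decrypt travels with the data it protects.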

3. A magnet for attack
Some are worried about the NSA, but they probably already have your fingerprints. The real threat is elsewhere: encryption schemes do get broken, and various encryption standards and implementations have been compromised, down to the level of the encryption libraries used to build software. Storing the data in encrypted form is thus only a deterrent. Apart from the NSA, you should worry about other, possibly more nefarious organisations and governments out there. The fact that we know it is possible to get at this data implies that others will try, either through the same means or by inventing new ones. Nathan Rosenberg calls these “inducement mechanisms” that focus the efforts of others; I have observed the same dynamic in my own fieldwork on semiconductors. All over the world next week, communities of hackers and spy organisations will probably be posting “do not disturb” signs on their doors and getting to work on this new challenge.

4. Large attack surface
The data on the fingerprint chip itself might be fairly secure, but iOS, like all operating systems, is complex and has been compromised before. Every year we hear of interesting exploits at events like Black Hat. There is no such thing as a completely safe program, especially one as elaborate as a modern operating system. Unlike the scanning device at your neighbourhood immigration counter, your phone is not locked down. You bring it everywhere: to airports, cafes, public places, friends’ homes and pubs. It is exposed to many angles of attack: physical hacking, software backdoors, security holes, hidden code in apps, and compromised websites that you might visit in the phone’s web browser. Another way in is through the computer that syncs to your phone via iTunes, because your phone treats it as a trusted connection. Apple claims that the operating system has no access to the fingerprint data on the chip itself, but you’ll have to take that on trust as it is not verifiable (Apple also said it did not store your GPS data!). The question remains of how separate the fingerprint system really is: since iTunes and the App Store will be able to authenticate using the fingerprint sensor, there may be indirect paths available to hijack the authentication process, even without touching the data itself.

Conclusion
While these risks are real, they do not necessarily imply that you will be hacked. That depends on whether you are a high enough value target. It also depends upon your personal habits and whether these practices expose you to a larger or smaller attack surface. And it depends upon your luck. Even with a regular old password, you could still end up being hacked, but at least you won’t risk losing your fingerprint data along with your other stuff. It is just a question of being aware of the risks. By no means am I dissuading you from buying that shiny new iPhone.


Image source: https://commons.wikimedia.org/wiki/File:Fingerprint_picture.svg

Unlocking DRM Lets You Open Multiple eBooks Simultaneously

The Amazon Kindle, Apple iPad and other e-readers are fast becoming mainstream, and their usability has improved tremendously over the past few years. However, there is one area in which printed books are still much better: the ability to open multiple books at once. This might not matter if you are reading the latest “50 Shades” novel and want to be uninterrupted. However, if you are working on a research project and constantly need to switch between multiple books, you will find that current eBook readers are a nightmare. Switching eBooks involves creating bookmarks, returning to a main menu (the library page), opening another book and navigating to the right place. This quickly becomes tedious. I cannot understand why tabbed browsing is absent from eBook software, given that it is a rudimentary feature that exists in practically every web browser.

One solution is to buy multiple eBook readers and open one book per device. This turns out to work quite well. One might even argue that the savings from not having to ship printed books will more than cover the cost of the additional readers. However, it occurred to me recently that another solution exists: simply remove the DRM from your existing books, which is really easy to do. You can then manage your books using software like calibre, which allows multiple eBooks to be opened at the same time. On a fast computer with a large screen, this is a liberating experience! A 27″ or 30″ screen is enough to give me as good an experience as 3-4 printed books. You can even do things that you cannot with regular books (without mutilating them), such as opening multiple instances of the same book for quick cross-referencing across different sections. If you take the extra step of exporting your library into PDF format, you can then manage, annotate and search your eBooks using software like Papers 2, treating them just like any other PDF file and merging them with your collection of journal articles.

There are other benefits of removing DRM, including avoiding vendor lock-in (e.g., reading your Amazon eBooks in Apple iBooks), avoiding the arbitrary and unfair removal of your books, and overcoming silly device download limits. For some of us, opening multiple books at the same time is another big plus. I suspect that over time, eBook DRM will go away. The eBook industry today is where the music industry was 10 years ago, when we had to rip music from our personal CD collections (or convert the proprietary formats on iTunes) into unlocked files that were more flexible. Today music is sold unlocked, and I don’t see why eBooks should end up any differently.

(ps: yes, I know eBooks are licensed, not sold, but let’s save that for another discussion).

Your 30″ monitor can show all these books at the same time

LTE is a Game Changer Because of Upload (not Download) Speeds.

What makes LTE a game changer is not its download speed but its upload speed instead. LTE is faster than the internet connection to many homes.

I recently upgraded to a smartphone that supports LTE, a new “pseudo-4G” standard that claims much faster speeds than 3G networks. Around the world, telecommunications operators are just beginning to roll out LTE. My first impression when using LTE was one of incredulity. This thing is smoking fast! The screenshot below shows two tests performed on my cellphone within minutes of each other in Melbourne’s CBD. The panel on the left is with my phone connected to 3G, while the one on the right is for the same phone connected via LTE. Download speeds for LTE are in the 21+ Mbps range, compared to 3+ Mbps on 3G. The phone feels noticeably faster when browsing the web or running web-connected apps such as Facebook. Three-dimensional maps appear really quickly on LTE. It is really a pleasure to use, though truth be told the old 3G speeds were already respectable for a mobile device.

What makes LTE a game changer is its upload rather than its download speed, shown in the screenshot at around 20Mbps. On 3G, uploads are 16 times slower (at 1.2Mbps), and that is being generous, as I am often only able to connect at half or a quarter of that speed around Melbourne. The amazing thing is that 20Mbps is much faster than most residential broadband connections. Many people connect to the internet at home via ADSL2+, which typically delivers download speeds of 5-8Mbps (despite what your telco’s marketing brochure says) and upload speeds limited to a measly 1Mbps. In contrast, at various places around Melbourne’s CBD I have measured LTE upload speeds ranging from 6 to 20Mbps, though of course this is not a scientific test.
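To put those numbers in perspective, here is a back-of-the-envelope calculation in Python. The 500 MB file size is arbitrary, and the speeds are the ones from my informal tests above:

```python
# Rough upload times for a 500 MB video at the speeds discussed above.
def upload_seconds(size_mb: float, speed_mbps: float) -> float:
    # Megabytes -> megabits (multiply by 8), then divide by link speed.
    return size_mb * 8 / speed_mbps

for label, mbps in [("ADSL2+ upload", 1.0), ("3G upload", 1.2), ("LTE upload", 20.0)]:
    minutes = upload_seconds(500, mbps) / 60
    print(f"{label}: {minutes:.1f} minutes")
```

With these figures, the same 500 MB clip takes roughly 67 minutes over a 1Mbps ADSL2+ uplink but only a bit over 3 minutes over LTE.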

In practical terms, what this means is that on an LTE phone, I can upload photos and videos much faster than many people can from their home networks. Uploading to YouTube, Instagram and Flickr from my cellphone while on the move has become amazingly practical, and no longer feels like a hopeless endeavour. Video conferencing over LTE is quite smooth, e.g., using FaceTime or similar programs. Applications that capture data locally and process it remotely (including Siri and other voice recognition apps) work quite well. This makes the end-user experience so much better. While better download speeds are certainly welcome, the new upload speeds have removed a critical bottleneck that existed before. I believe this will open up all sorts of new opportunities for innovation and new applications.

It remains to be seen whether LTE speeds will remain impressive after everyone piles onto the network. I hope it won’t slow down to a crawl. The design of LTE incorporates better traffic handling than earlier networks, plus LTE has theoretical download and upload limits of 300Mbps and 75Mbps respectively, but how well will it cope in practice? Before it gets too congested, I am enjoying the boost in speed, glad to be working and living downtown and bathed in LTE goodness.

Speedtest.net: LTE vs 3G on Optus Australia’s Network

Improving Wireless Ordering at Restaurants

Last night, we got to order dinner on a wireless touchscreen, actually an iPad in aluminium body armour. Pretty cool. This was not our first time, but it was a surprise because we were not at some fancy restaurant but at a modest place in Chinatown. It just goes to show how widely this technology has diffused. The touchscreen menu was particularly useful here for overcoming language barriers: the waiters weren’t the most fluent English speakers, and although some of us spoke Chinese, it was not the same dialect.

Unfortunately, as at many other places, we found that the restaurant was using a smart tablet in the same old “non-smart” way, i.e., just as an electronic version of the printed menu with ordering capability built in. I suspect we’ll be seeing smarter devices soon. For instance, the computer should make customised recommendations based on your dining preferences, group composition and the chef’s knowledge of which dishes and beverages go well together. It should be more interactive, adapting its recommendations as you progress through the meal based on whether you liked a particular dish. This could change the dining experience from a static one, where you order at the start and cannot make changes, to one that is more interesting and dynamic.

At a basic level, many restaurants are using the wrong device: instead of investing in their own tablets they should be offering a software application that downloads directly to your own smartphone/tablet as soon as you sit down at a table. This would allow you to make more personalised selections, for example using your own (private) dining history and food restrictions to help find suitable matches in the menu as well as making recommendations based on reviews posted by others online, or maybe even via a transitory peer-to-peer network of other diners.
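As a rough illustration of the personalised-matching idea, here is a minimal sketch in Python. The dishes, tags, diner profile and scoring rule are all invented for this example; a real system would draw on an actual dining history and reviews from other diners.

```python
# Hypothetical menu and diner profile, invented for illustration.
menu = [
    {"name": "Mapo tofu", "tags": {"spicy", "vegetarian"}},
    {"name": "Roast duck", "tags": {"meat"}},
    {"name": "Garlic greens", "tags": {"vegetarian"}},
]
diner = {
    "restrictions": {"meat"},                  # never recommend these
    "history": {"spicy": 3, "vegetarian": 5},  # past orders, by tag
}

def recommend(menu, diner):
    # Drop any dish whose tags clash with the diner's restrictions.
    allowed = [d for d in menu if not (d["tags"] & diner["restrictions"])]
    # Rank by how often the diner has ordered dishes with matching tags.
    return sorted(
        allowed,
        key=lambda d: -sum(diner["history"].get(t, 0) for t in d["tags"]),
    )

for dish in recommend(menu, diner):
    print(dish["name"])
```

Even this crude scoring keeps restricted dishes off the screen and floats familiar favourites to the top; updating the history as the meal progresses would give the adaptive behaviour described above.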

ps: now that I’ve put these ideas out, they become “prior art” so hopefully this prevents companies from patenting them and filing frivolous lawsuits, thus ruining my future dining experiences.

Ordering on a Tablet

 

Apple wins $1bn case against Samsung

The more important aspects of the verdict are that it found Apple’s patents to be valid and that Samsung wilfully and knowingly copied Apple.

Apple has won a massive victory in the latest round of its dispute against Samsung. Part of the case is on patents, and part of it is on “trade dress” (the look and feel of the iPhone).

The $1bn award sounds like a lot, but it isn’t really the most interesting part of the decision. The RIM/BlackBerry case was much narrower, yet it produced a $600m+ award some years back. The more important aspects of the verdict are that it found Apple’s patents to be valid and that Samsung knowingly copied Apple. The validity of Apple’s patents will probably allow it to earn a healthy stream of licensing revenue from other smartphone makers well into the future. It will also give a much-needed jolt to the rest of the industry to explore different technological trajectories and to develop smartphones that resemble the iPhone less. The wilful nature of Samsung’s copying is, I believe, why the jury reached a surprisingly quick decision when others had expected a protracted case: once the jurors concluded that Samsung had wilfully copied Apple, it was only a small step to find that Samsung infringed across a broad range of its products (see this chart at The Verge). Very bad news for Samsung.

Some people view this as part of Steve Jobs’s vendetta against Google, which created the Android operating system running on Samsung’s phones. Whether or not that is true, it is not the whole story. The Android operating system is quite versatile, and it is possible to build a diverse and novel ecosystem around it without copying the iPhone. One example is Sony, with its aesthetically elegant Xperia phone and Android-based Walkman. Another is Nikon, which has just released an Android camera that is an iteration away from becoming an actual phone.

No doubt the Samsung/Apple ruling will be appealed, but it will inevitably shape the future of smartphones.

“Blue Ocean” strategy? Actually it does not matter what colour your ocean is

Blue Ocean strategies promise to break the tradeoff between costs and willingness to pay, but they don’t really. The tools offered by the blue ocean approach, such as the strategy canvas and the ERRC framework, are useful irrespective of whether your ocean is blue, red or some other colour.

The blue yonder, Mt Eliza

Yesterday my MBA students and I discussed “Blue Ocean Strategy”, a popular book on strategic management by Kim and Mauborgne. A good thing about the book is that it encourages managers to be innovative and to pursue new markets rather than fighting in highly competitive existing arenas, i.e., playing in “blue oceans” instead of “red oceans”. According to the authors, this way of thinking has served companies like Cirque du Soleil, Nintendo and Casella (an Australian firm that has succeeded in selling easy-to-drink wine in the US) well. Managers are encouraged to use the Strategy Canvas as an organizing framework (see here for an example). It pushes managers to ask whether their products and services are really distinct after all, and along which dimensions they actually differ from the competition.

So far so good. But the problem is that in their enthusiasm, Kim and Mauborgne go on to make a tantalizing claim that the blue ocean approach allows you to break the tradeoff between pursuing differentiation and low costs. This puts them at odds with many leading strategy textbooks, which argue that it is often difficult for firms to increase consumer “willingness to pay” (WTP) while simultaneously reducing cost, all else being equal. You usually have to spend money on R&D, marketing and better execution in order to increase WTP. The “blue ocean” claim leads to all sorts of confusion among MBA students.

Does the blue ocean approach actually offer a silver bullet? Unfortunately not. The truth lies in the details. For a blue ocean strategy to work, you aren’t just supposed to add new activities that increase willingness to pay. You are also supposed to look for opportunities to eliminate or reduce other activities in order to cut costs. This is presented as the “ERRC” framework (p. 35 of the book), which asks managers to raise and create new dimensions for their product or service while eliminating or reducing others. For example, Cirque du Soleil increased willingness to pay by introducing Broadway-style themes, artistic music and dance, and better stage lighting. Meanwhile, it reduced costs by eliminating animal shows and star performers, both of which are expensive components for a circus.

From the above it should be apparent that you still face a tradeoff between costs and willingness to pay. But you are just avoiding it by removing some of the costly activities. In other words, it isn’t the case that all else is equal. If Cirque du Soleil were able to offer all the new features in addition to having animals and circus stars (but at no marginal cost), then it would be legitimate to make a claim that the cost-WTP tradeoff had been broken. But fundamentally this tradeoff remains, and while the exciting new features enabled Cirque du Soleil to differentiate themselves from ordinary circuses and to increase ticket prices, the removal of animal shows and star performers inevitably meant that some customers who valued those things were now less willing to pay for a show.
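A toy numerical example makes the point concrete. The WTP and cost figures below are invented purely for illustration (they are not estimates for Cirque du Soleil or any real circus):

```python
# Each activity contributes to willingness to pay (WTP) and to cost.
# Figures are hypothetical, chosen only to illustrate the ERRC logic.
traditional = {                      # activity: (WTP effect, cost)
    "animal shows":    (20, 30),
    "star performers": (15, 25),
}
blue_ocean = {
    "themed productions": (30, 15),
    "artistic music":     (10, 5),
}

def margin(activities, base_wtp=50, base_cost=40):
    # Margin = total willingness to pay minus total cost.
    wtp  = base_wtp  + sum(w for w, _ in activities.values())
    cost = base_cost + sum(c for _, c in activities.values())
    return wtp - cost

print(margin(traditional), margin(blue_ocean))
```

The blue-ocean bundle earns a higher margin, but only because the costly activities were eliminated and cheaper ones created; each remaining activity still trades willingness to pay against cost, exactly as argued above.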

Overall, the strategy canvas and the blue ocean approach are useful because they encourage managers to think outside the box when looking for new competitive opportunities. But personally I find the distinction between blue and red oceans somewhat forced, especially once you realize that a firm produces multiple products, and these are likely to fall along a spectrum ranging from red to blue and beyond. So while the Nintendo Wii was blue ocean in approach, other Nintendo products of that time, such as the DS, clearly were not. In a fundamental sense, increasing WTP and reducing costs are complementary (Athey & Schmutzler, 1995). Hence, finding new and innovative opportunities to increase WTP and reduce costs is something a manager ought to do anyway, regardless of whether their ocean is blue, red, purple or some other colour.

Quick review of IA Writer – a minimalist writing tool

I recently began using a new writing tool, iA Writer. It is one of a slew of new “minimalist” writing programs, which also include Omniwriter and Writeroom. They help you focus on actually writing rather than tinkering with fonts, layouts, hyperlinks, grammar checkers and other distractions. I was led to search for a new writing tool by Redmond’s Law of Large Numbers, which states that a sufficiently large and complex document will definitely crash Microsoft Word. I have been revising a paper for a journal, and when Word began crashing every ten minutes, I realized I was totally distracted by having to restart my word processor and guess which changes had actually been saved. I was no longer focused on writing.

Initially I was skeptical and thought a minimalist tool was nothing new, just a modern version of Vi/Emacs or any of the LaTeX editors I’ve used. But it turns out to be a different user experience after all. Even compared to those, iA Writer is distraction free. There is no way to underline or italicize text. There are no styles, hyperlinks, colors or fonts. There are no obscure Control/Alt/Esc commands to remember. There are, however, numbered headings, which are useful. The overall effect is that your mind stays focused on paragraph structure, flow and generating interesting content.

The experience isn’t like using Notepad (Windows) or TextEdit (Mac) either. iA Writer’s one interesting feature — probably its only feature — is “focus mode”, which highlights the sentence currently being edited and fades everything else into grey. This keeps your attention squarely on clarifying exactly what you are trying to express in the current sentence. I like that a lot. Oh, and it does look great on screen, a bit like the typewriters of days gone by.

iA Writer syncs to Apple’s iCloud, so you can edit on your Mac, iPhone or iPad and not worry about backups. You can roll back to different versions using iCloud’s built-in features. If you use Windows, the options include Darkroom, Focuswriter and Writemonkey but I haven’t tried any of those.

Because of its lack of features, a minimalist writing tool isn’t suited to every task, certainly not equation-laden articles. But it is great for a first draft, or if you are primarily writing text. I am currently keeping iA Writer in my workflow, using it to draft material and then pasting the results into a word processor or other application for layout and finishing. If you have used such a tool, do share your experiences (good and bad) below.

ps: this blog post was written in iA Writer.

Cloud computing and the “me versus you” problem

A Personal Workspace

I’m finally getting back to blogging after spending a couple of months traveling and then catching up with work. This week I was invited to speak at a “guru forum” of managers and academics who work in information technology. Among the many issues discussed, two conflicting trends were identified. On the one hand, many corporate organizations are moving towards cloud services and all-in-one outsourced solutions (Oracle, SAP, IBM, …). On the other hand, individuals are moving towards a “bring your own” model, bringing their own computers, e-books, cellphones, iPads and other devices to their workplaces. With the advent of smartphones and social media platforms such as Facebook, computing is becoming more consumer-centric and primarily a means for social interaction, rather than just a tool for specific tasks like word processing and accounting.

These opposing trends create a disconnect at the workplace between the ability of firms to manage and control information (especially proprietary information) versus the desire to give employees flexibility and freedom in choosing the tools they really want to use. My view is that the trend towards consumer-centric computing will dominate the other paradigm. There is no turning back the preferences of modern information workers who grew up with their iPads, Android phones and Kindles. Companies should embrace rather than fight the trend.

How do we solve this “me versus you” problem, i.e., organizing information on multiple devices in a way that separates private information from work and other shared information in an easy but manageable way? Existing solutions are unsatisfactory because they do not adapt to the different and changing contexts that individuals find themselves in. Companies like Apple, SAP and Oracle take a fully integrated approach, allowing you to run everything on their software and leveraging their own cloud solutions, treating each device as a client. Taking this to the extreme, you can run entire virtual machines from your own device with everything hosted by the service provider, such as via Amazon EC2 or OnLive. Unfortunately this is often an all-or-nothing proposition, so while it creates separate contexts, the operation across contexts is not seamless. You’re basically running separate computers (or syncing to separate clouds) from within your own device, and it is slow and clunky to inter-operate between them.

In contrast, other firms like Dropbox provide services that integrate into your existing applications and folders, but end up being highly fragmented especially when it comes to setting permissions and giving access. Each application and each collaborator needs to be authenticated, so coordination can be a hassle. This week my colleague tried to set up a shared Dropbox folder for the faculty members at our school, and it seemed a lot more of a hassle than it needed to be, especially the bit about inviting each user and trying to get them to actually sign up to the Dropbox cloud.

The good news is that the solution to the “me versus you” problem is closer at hand than many might think. The architecture for such a solution already exists in products like Google+ Circles and VMware but is not yet pervasive. Here’s an example of what one such solution might look like. At present most operating systems support multiple workspaces, but for now they are all tied to the same set of permissions and applications. Well, imagine a future in which each workspace on your device is authenticated to different sets of applications and clouds. For example, your device could include a personal workspace that authenticates to Apple and Dropbox and which contains your personal files, apps and Facebook page. A second workspace could authenticate to your office, with the IT system at your office determining what apps and cloud services are made available and which of these you can transfer across workspaces. A third workspace could be one created by your friend so that when you visit her house, her workspace would appear along with some of the data and services from her home network that your friend is willing to share with you.
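To make the idea concrete, here is a minimal sketch in Python of what such a per-workspace permission model might look like. All names here (the `Workspace` class, the service and app labels) are hypothetical illustrations, not an existing product’s API: each workspace carries its own set of authenticated services and apps, and its owner decides which items may cross into other workspaces.

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """A hypothetical workspace: its own services, apps and sharing policy."""
    name: str
    services: set = field(default_factory=set)      # clouds this workspace authenticates to
    apps: set = field(default_factory=set)          # apps available inside the workspace
    transferable: set = field(default_factory=set)  # items the owner allows out

    def can_share(self, item: str) -> bool:
        # Only items the workspace owner has marked as transferable
        # may cross into another workspace on the same device.
        return item in self.transferable

# Two workspaces coexisting on one device, each with its own permissions
personal = Workspace("personal", services={"iCloud", "Dropbox"},
                     apps={"Photos", "Facebook"})
office = Workspace("office", services={"CorpCloud"},
                   apps={"Email", "ERP"}, transferable={"Email"})

assert office.can_share("Email")      # the office allows email out
assert not office.can_share("ERP")    # proprietary ERP data stays put
```

The point of the sketch is that the permission boundary sits at the workspace, not at the device or the individual app, which is exactly the separation that current all-or-nothing solutions lack.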

A small number of us already have something close to this setup running on our computers by using multiple virtual machines that are active simultaneously. But it isn’t the same thing. I’m thinking of something with much more integration than is available in existing virtual machines and with much less of the “heavy machinery” that is needed to support multiple operating systems on the same machine (the action is in the data and apps, not in the operating system itself anymore). I also have in mind something more dynamic, for example with the ability to seamlessly add or remove workspaces when the context around a person changes. In the example above, if your friend defines a workspace that is shared with you when you visit her, that workspace should actually exist in a virtual sense, and it should slide on and off your various devices in a consistent manner including your smartphone, iPad and notebook computer.
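The dynamic part could be modelled as a device that attaches and detaches whole workspaces as the surrounding context changes. The following Python sketch is purely illustrative (the `Device` class and the labels are hypothetical): a friend’s shared workspace appears on arrival and vanishes on departure, sliding on and off the device without any of the heavy machinery of a full virtual machine.

```python
class Device:
    """A hypothetical device that gains and loses workspaces with context."""

    def __init__(self):
        self.workspaces = {}  # workspace name -> set of shared services

    def attach(self, name, services):
        # e.g. arriving at a friend's house attaches her shared workspace
        self.workspaces[name] = set(services)

    def detach(self, name):
        # leaving that context removes the workspace and its services
        self.workspaces.pop(name, None)

phone = Device()
phone.attach("friends-home", {"media server", "printer"})
assert "friends-home" in phone.workspaces   # workspace present while visiting

phone.detach("friends-home")
assert "friends-home" not in phone.workspaces  # gone once you leave
```

In a real system the same attach/detach event would be mirrored consistently across all of a person’s devices — smartphone, iPad and notebook — which is the synchronisation problem the paragraph above alludes to.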

Granted, the ideal solution in my head might be a bit far-fetched. However, I suspect it will become prevalent in the next several years. I don’t know what it would cost in terms of implementation and adoption. However, the fundamental issues are of great concern among industry practitioners such as those attending the IT guru forum, so I suspect that over the next few years entrepreneurial firms will end up exploring solutions and frameworks along these lines.