A second article in the Times (following up on yesterday’s reporting on Apple’s smartwatch) says Apple has been oh-so-smart to produce a developer’s kit so that the market can decide the killer app for the device. Killer app here is code for justifying the product’s purchase in the first place.
All hardware goes through a period when we wonder what its utility really is. It took word processing and spreadsheets to justify PCs, graphics packages did the same for laptops, and email knitted everything together. When the smartphone came along (i.e. the iPhone), it was already a Swiss Army Knife of sorts, able to make calls, check email, play music, and take pictures. The application ecosystems that followed were largely frosting on the cake.
And when Steve Jobs introduced the iPad, he vaguely referred to it as a content consumption device, which it is, despite the fact that users tried heroically to link wireless keyboards to it. As it turns out, tablets spawned a market for light and detachable laptops, though many vendors like Microsoft and HP are still trying to convince us that their keyboard-equipped devices are really tablets.
Wearables are different because they present us with a 2D matrix to figure out. The platform is amorphous, first off, meaning there are wearables for your face and eyes (Glass and its kin), your wrist (various watches), and your pocket/neck (pendant-like things). Each will have, I think, a different killer app though I can easily see spillover. For example a watch could also identify you entering a building and emit your location to your nanny but so could a pocket device. And every device maker worth its patents wants to be the next Fitbit. However, I think the face appliances that capture and show video will be in a class by themselves.
So the point isn’t that there needs to be a killer app; it is that each device type could do with one or risk being eliminated by other, more versatile solutions. We’ve seen this before, for instance when the smartphone replaced the candy bar model and the flip phone — it wasn’t that call quality could only improve with a handheld, it was that for the same money you got so much more.
Apple is not the only vendor in the wearables space with a developer’s kit and you could argue that its kit might not even be the best if all it does is generate apps for a single device or device type. For more robust developer functionality you need to look at something like the Salesforce Wear developer’s kit, or whatever it’s being called. Just last week the company announced significant momentum in its wearables initiative nearly doubling its core developer partner group to eleven. And yes, this includes multiple device types.
Salesforce Wear can generate apps for all of the major device types including watches, glasses, and pendant or pocket thingies. In this early market that appears to be a superior approach because it gives the market a voice in determining not only the killer app but also the killer device or more precisely the killer device for a particular circumstance. You can’t separate those two converging needs.
So good luck to Apple on launching the Watch, the company picked a safe platform compared to Google and its Glass unit, though in retrospect Glass seems an inspired idea. Apple should have the wind at its back due to its reputation, its huge developer ecosystem, and its choice. As the market heats and consolidates, it would not be surprising to see Apple and Salesforce battling for app supremacy but right now I don’t think Apple is in the catbird seat. Apple is still a hardware maker while Salesforce has been down this road of providing developer resources for many platforms several times already.
Wearable computing hove into view in a big way yesterday when Salesforce.com announced Salesforce Wear, a capability that enables developers to build new apps for teeny tiny screens and devices that you, well, wear. Wearables is a market poised for takeoff. Last year, for instance, Apple went around the world’s markets capturing trademarks on “iWatch,” which I think was not a coincidence. And let’s not forget the things that are not worn but which simply exist through sensors on the Internet of Things (IoT).
But what does wearables as a class of devices mean? It’s time we began asking hard questions, because if Say’s Law (supply creates demand) ever had any applicability it will show itself in this still emerging market, and we really want to get demand right. There either are, or there will shortly be, wearables for your wrist, your neck, and your pocket. Each does something different and each will need software, so the question of the killer app, not seen for real since early laptop days, seems to be relevant again.
More importantly, though, you can’t answer that question until you also ask and answer questions about what we’ll be doing with wearables. The long evolution of technology, beginning with the mainframe, is a story of ever more personal and relevant information. Mainframes automated back office functions; PCs, laptops, and networks automated the work of rank and file employees and ignited a productivity explosion. Handhelds made us social and made computing personal in ways that had never been done before.
Wearables is a different kettle of fish. You’ll notice right off that at least this generation of wearables is not intended to do every compute function. Wearables seem to be context specific so a device might monitor your vital signs, but not your golf swing and vice versa. Or something like Google Glass will deliver needed content to your cornea but it won’t help you get into a secure zone.
So very quickly you can see that wearable computing as a class has a great deal more complexity to it than any of the preceding generations of computing. That makes developing for individual platforms challenging and building development tools that can address all of the form factors and their uses, even more so.
Heck, just imagining the potential uses for wearables is a challenge, so I was glad to see that Salesforce didn’t just say “come and get it” with some half-baked developer tool designed to let you recreate your GL (general ledger) on your wrist, because that would be a complete non-starter.
Instead, Salesforce did some smart things. First, they made available some reference applications based on their developer pack. The apps showcase a number of innovative business use cases that won’t exactly be second nature to you unless you are an oil rig worker needing to fix some complex bit of technology (yes, there’s a reference app for that). Second, they made these reference apps available as code for anyone to see, evaluate and modify in an open source way. This will help speed the adoption of Salesforce Wear and identify missing components and new opportunities.
Finally, Salesforce is not limiting their deployment to a small number of devices, they’re casting a broad net in an attempt to support the fledgling market. Imagine if Microsoft had done the same thing with Office when the iPad was first announced.
There are other things in the announcement that I think are not only cool but needed to make the product take off, like security in the form of a two-way identity flow to keep you from having to constantly re-log-in, which would be a real hassle on a small device. Also, it goes without saying that these things all need to connect with one or more mother ships across the wireless web before the internet providers try to chop up this last bit of the public commons.
So that’s that. Now, what does this mean? Are wearables just another kind of hardware that we can use for checking email? I definitely hope not. Wearables represent a new approach to being in the world and as such their applications and the business processes that they support have not been fully figured out yet — and we’ll be saying that five years hence too.
Wearable computing is a new, new thing, a paradigm being born and because it is, its success will be as dependent on a killer app or three, as the laptop depended on Harvard Graphics and later on PowerPoint. Understanding this drives the next question. What kind of world will we inhabit that drives the development of these apps?
Without getting all Kurzweilian on you, the permutations can be very interesting. Wearables can deliver content, take your vital signs, prove your identity, and follow your motion, just for starters. The implication for me is that wearables will support more independent yet thoroughly connected lifestyles. If handhelds enable us to be connected from anywhere to anywhere at any time, then I think wearables will enable us to optimize that existence with presence.
So, application development for wearables is a big deal if you ever expect to do more with that fancy watch than tell time and check basic Office functions. But it also marks another turning point in which technology will become a part of your extended life.
Sometime in the not too distant future we will all wonder aloud not only at how we ever got along without wearables, we will also wonder why it took so long for us to fully realize the vision of the 1960’s era Star Trek show.
I was discussing Watson, the IBM super silicon brain that won Jeopardy! the other day with a reporter writing an article. Around the same time, I was also looking into Google Glass, the wearable computer that enables people to record what they see and to see what they’re recording through a teeny tiny screen mounted on a frame over their eyebrows. It’s all very Buck Rogers or Dick Tracy or Special Forces or just so last week depending on your worldview.
Then it occurred to me, as I am sure it has occurred to many other people, that the two things could/would/should merge, especially as wearable computing is already nearly passé and on the way to being replaced by implantable computing, which could rapidly give way to ingestible computing once they figure out how to, you know, make the ingestion part more or less permanent. But now I am racing ahead of myself by at least seven years. Besides, what kind of sauce goes with ingestible computing? I’m betting on black mole.
Watson has a tremendous ability to learn and to digest huge volumes of input to come up with an answer. I was told that to appear on the Jeopardy! program, Watson’s handlers force-fed it about 200 million pages of structured and unstructured content. I also recall from the show that most of Watson was not in the studio with the other contestants because it’s less of a computer and more of a network. No matter, though. Google Glass would be the thing that could put all that hardware out of mind. Glass: the killer app for Watson? Hmmm.
Glass Watson might be great in any high-pressure situation where snappy answers are important. That would not include things like stock trading, assuming Watson is already doing that sans any help from carbon-based life. Glass Watson might be a lot of fun in a traffic stop: “Really, officer, the light was yellow” or “I was going 37.593 MPH in the 35 MPH zone, officer.” And in social situations where it’s hard to read the social cues, Glass Watson might be a boon to introverts like me who want to be more lifelike.
Naturally, those social situations extend to vendor customer interactions especially where selling might be involved. A retail clerk with a Glass Watson might be able to assess a customer better and faster than a human though I have to say that twenty years later I am still in awe of the woman in a Nordstrom store who nailed my shirt size in the blink of an eye.
But what if the customer has a Glass Watson too? Would we then be in a situation where the people are skipped over entirely? I worry that my Glass Watson would like everything it sees, and the retail Glass Watson would scour my bank accounts, even finding the one I misplaced in Geneva (maybe it could retrieve my passwords), and oversell me on everything, thus turning into a Glass Watson Hoover, as in a vacuum cleaner for my wallet. All of this could happen in the blink of an eye.
Still, I always come back to Groucho Marx, an interest I share with Paul Greenberg. Groucho once quipped, “Time flies like an arrow. Fruit flies like a banana.” What would Watson do with that? I wonder if you can get Google Glass with one of those fake Groucho moustaches? Now that would be cool.
Remember when the meteor hit Siberia a few months ago? There was awesome video of the event on all of the news outlets and we all wondered how that happened given the early time of day and the randomness of the event.
The answer was stranger than the act of nature it reported on. Apparently, Siberians are in the habit of equipping their cars with dash-mounted video cameras that record everything that the driver sees happening in front of the car. The cameras, it was revealed, are needed to provide evidence when a pedestrian accidentally falls on a hood and tries to collect damages. The cameras are also useful for refuting spurious claims from corrupt police who might pull motorists over. Some news organizations played a variety of those more pedestrian videos as proof.
So, the car cameras provided serendipitous coverage of the meteor shower that in all likelihood would have been otherwise missed. That’s how I imagine Google Glass — the lensless eyewear that Google is now field testing — becoming part of our lives.
Google Glass is an apparatus that we’ll someday wear to capture reality and enhance our recollections of meetings and other everyday events. It will significantly enhance recall and make memories more precise and I am not looking forward to it. However, I see no reason to be agitated or afraid of Google Glass either.
Nonetheless, I do think that we should have a conversation about memory and storage. How long should these every day videos of normalcy be kept around? Do others have rights concerning our video? What are our responsibilities to others — in other words, how does this affect the social contract?
Memory degrades over time, and experts who study this kind of thing have shown that even eyewitness accounts can be faulty. Moreover, even with perfect fidelity and reproduction, digital data can also produce faulty information over time.
Really? That’s a tantalizing statement, if I say so myself.
Last week in the New York Times Bill Keller told an interesting story of how such a thing can happen. According to Keller, most states have laws that enable criminal records for some offenses to be wiped clean. In Connecticut the law states that a person with an erased record can even legally testify in court that he or she has never been arrested and booked once the official erasure has been approved by a court.
That sounds good and fair. After all, we all make mistakes, especially when we are “young and irresponsible,” as George W. Bush once quipped. The trouble is that any ancillary records, such as news articles and video of the moment, are not automatically erased simply because a court erases the record of an incident. The result is an Internet full of historically accurate but legally untrue information, some of it damaging to the individual.
What’s to be done? I really don’t know. The problem of the historically accurate and legally false piece of information is akin to the problem of unsmoking a cigarette. All of this comes crashing down around the Google Glass beta project, which makes it possible for a huge number of these situations to exist someday.
It occurs to me that we have reached a point when we need to acknowledge that 1) No one ever anointed the Internet as the official historian of everyone’s lives and 2) We may need a multi-tiered Internet in which some data is freely available and some is archival and either redacted or otherwise updated to subsequent events.
Perhaps a more natural approach would be useful. What if we treated everyday data as a perishable inventory like milk and produce on a store shelf? After a reasonable period of time, the data could either be automatically deleted or downgraded or placed in private storage and not generally available to the public.
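The perishable-inventory idea could be sketched as a simple retention policy. To be clear, this is just an illustration of the concept; the tier names, time windows, and record fields below are all invented for the example, not taken from any real system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical retention tiers, like sell-by dates on a store shelf
PUBLIC_RETENTION = timedelta(days=30)    # freely available to the public
ARCHIVE_RETENTION = timedelta(days=365)  # after this, gone entirely

@dataclass
class CaptureRecord:
    created: datetime
    payload: str
    tier: str = "public"

def apply_retention(record: CaptureRecord, now: datetime) -> Optional[CaptureRecord]:
    """Downgrade or expire an everyday-capture record based on its age."""
    age = now - record.created
    if age > ARCHIVE_RETENTION:
        return None                  # expired: delete outright
    if age > PUBLIC_RETENTION:
        record.tier = "private"      # downgraded: private storage only
    return record
```

Run periodically over a store of captured video or sensor data, a policy like this would let fresh material circulate freely while automatically pulling stale material out of public view, which is one way to give the "right to be forgotten" a default rather than requiring case-by-case erasure.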
This won’t solve all data security and veracity problems but it will get a conversation going in which some very good ideas are sure to emerge. The European Parliament is now contemplating legislation regarding data security and the individual’s right to be forgotten. The whole issue has many facets that ought to be explored. If you have some ideas about long-term storage of personal experience data that incidentally captures information about other people, I would love to hear from you.