So much is happening as we approach the end of Q2, our industry’s busiest quarter, at least by some measures. I’m flying around seeing things but not always able to comment from a middle seat on a red-eye. So this piece is an attempt to catch up and set some markers for the traditionally slower summer.
Last time, I was searching for a word to describe a new category I see. You could call it on-demand services, or even services as a service, which somewhat distinguishes it from things as a service, such as SaaS, but that's confusing too.
SaaS has led the way in things as a service, and while it's a perfectly good descriptor, the rapid evolution of IoT, the Internet of Things, has introduced some confusion. Things as a service describes any traditional good delivered as a service, such as software or a car or a cell phone. Services delivered as services often don't have a physical component, or that component is of a different type, perhaps not even human.
For instance, you can get software as a service, but the training or consulting that needs to go with it is very different. It's delivered by people who show up, do a job, and disappear; you don't employ them and you certainly don't own them, and their work product is pure service, often leaving behind only thoughts in others' minds or software code.
Another example, and my favorite right now, is earth moving. Various makers of things like excavators and bulldozers now offer earth moved as a service, obviating the need to purchase the big device. The difference is that the service is intentionally and decidedly temporary. These companies calculate the amount of earth moved (in a simple example) and charge by a meaningful metric such as tons or cubic yards. Moreover, these are short-term services; the equipment and the people to run it show up one day, do a specific task, and then are gone. Or perhaps they are idle for one week per month—how do you charge for that?
In a SaaS model you might buy a specific number of seats per month and that's it; if your people don't use them, too bad. But in the earth-moving example, an idle machine still carries overhead for the vendor. How does the vendor capture revenue when the device is idle? It's not hard to do, but it gets into branching logic that typical billing systems might not cover. So very quickly we see that a service as a service is different from a thing as a service. What do you call that? What's the name of the business model, and how do you account for these services?
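The branching logic can be sketched in a few lines. This is a hypothetical model, not any vendor's actual pricing: the rates, the standby charge, and the minimum are all illustrative assumptions, chosen only to show how metered usage and idle time might combine on one invoice.

```python
# Hypothetical sketch of a metered service with idle-time billing.
# All rates, names, and the minimum charge are illustrative assumptions,
# not any actual vendor's pricing model.

def invoice(tons_moved: float, idle_days: int,
            rate_per_ton: float = 12.0,
            standby_per_day: float = 150.0,
            minimum_charge: float = 2_000.0) -> float:
    """Return the monthly charge for an earth-moving engagement."""
    usage = tons_moved * rate_per_ton          # charge by the meaningful metric
    standby = idle_days * standby_per_day      # idle equipment still costs the vendor
    return max(usage + standby, minimum_charge)

print(invoice(tons_moved=500, idle_days=7))    # 500*12 + 7*150 = 7050.0
```

A seat-based SaaS invoice is a flat multiplication; the `max` and the standby term are exactly the branches a seats-per-month billing system has no slot for.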
My thoughts include words like precision services or discrete services. Each conveys a sense of the ad hoc: a temporary, specialist thing that won't become part of the status quo because it will be gone at some point. Just think of the earth-moving equipment required to build a tall building and understand that it's no longer there once the building is completed.
So that’s one thing I’ve been noodling on. Send me a note with your thoughts.
* * *
Also on the docket have been Oracle's results for the last quarter. You only need to look at the direction of the graphs, up and to the right, to know that the company has hit its stride in cloud computing. I am happy for them and have previously written that their model is uniquely suited to their customer base. It includes all phases of cloud computing, infrastructure, applications, and platform, to support customers in various stages of the move.
Oracle's big footprint attracts lots of competition, from Amazon's AWS at the infrastructure end to Microsoft, Salesforce, and SAP on applications and platform. I am not even sure they all agree on what platform is, and that's important. It tells us that platform is the tip of the spear and that's the competitive landscape. It's also the metric we need to use to analyze and understand the quality of any software vendor's earnings.
Infrastructure is heading toward pure commodity status, if in fact it has not already arrived. But ironically, you can't be wildly successful in the other phases of the game if you don't have a credible infrastructure offering. So you have to look with great interest at Oracle's infrastructure number, just north of $400 million on what I believe is a $1 billion cloud base. Is it a good thing? I think it's a necessary thing, and it might set the company up to do well in other phases, but that jury is still out.
* * *
Finally, there was a piece on AI in the New York Times Sunday Review, “The Real Threat of Artificial Intelligence” by Kai-Fu Lee that I am in complete disagreement with. I’ve seen the argument before: AI will swallow up jobs leaving a large and unemployable group of people who will require some form of guaranteed income support. But rather than offer an opinion, let me supply an analysis and some data.
Massive income assistance has never worked well in human history. You might go all the way back to the Roman Empire and recall the idea of bread and circuses as an example of such welfare. But if you do, you also need to factor in that it didn’t work out well for guys named Caesar. In modern times the top earners have always objected to the confiscatory taxes needed to make such a scheme work.
This kind of analysis is too dependent on straight-line thinking. What's missing is any sense of the dynamism of free markets in a democracy. Free markets enable innovation and entrepreneurship, and with them come new industries and new jobs. I know things look kind of bleak for people with high school educations, or even people with BAs in literature or philosophy. But the fact of the matter is that since the Industrial Revolution there have been five ages when an industry, or a clutch of them, took off and did really well for a few decades, only to fall back to earth later, killing some of the jobs it created in the name of efficiency and commoditization.
What we’re going through with AI is cyclical and not one of a kind. I just wrote a book about it and it’ll be out in September, a time when we come back from the beach and put our game faces back on and rediscover that a machine really can’t do what we do.
* * *

I don't know why more subscription vendors don't do this. Subscription companies collect mountains of data from their customers, and analyzing the aggregations can deliver profound insights virtually for free. Yet too often subscribers are reluctant to let their data be stripped of identifying characteristics and used for research. Too bad, because there's gold in that big data.
One subscription provider that isn’t afraid to do the analysis or to ask its customers to contribute to generating new knowledge is Xactly, the sales incentive compensation ninjas. For many years the company has captured data about sales performance and provided concrete information to its subscribers about things like attainment vs. quota and how they compare with peers. One of the early findings they released was that women sales reps were more loyal and were better at delivering on-quota performance. Yet they were paid a little less. Sad.
Xactly has been doing this kind of analysis because they have the data and because their customers understand the value of seeing how they compare to the averages. Good on them. The most recent data to come out of Xactly involves understanding the nuances of different cities as places to plant your next sales office. That might sound a little in the weeds but I guarantee you every sales VP thinks about it at some point.
The data concretely shows that cities on the west coast and in the south have a lot to offer, while some of the big cities of the North and East are surprisingly…different. Conventional wisdom says that you simply must have an office in New York and/or Boston and Chicago, yet these cities have some of the lowest average staff attainment rates: New York 60%, Boston 54%, and Chicago 46%. Interestingly, turnover in these markets is low relative to other places, with New York at 16%, Boston at 19%, and Chicago at 24%. Compare a place like Austin, with turnover at only 20% but attainment at 75%, and Austin isn't alone. The west coast has some of the most productive cities, but Denver, at 83% attainment, takes the cake.
Xactly also tracks things like the cost per square foot for real estate, which you’ll need to plug into your cost calculation when aiming to open a new office. But what does this all mean?
How do you analyze this?
It would be easy to say that sales reps in the Northeast are terrible and the people on the west coast have it all together, but maybe not. There's more here than meets the eye.
First, everyone wants an office in the big cities especially in the Northeast where there are loads of big corporate headquarters—a condition one of my sales managers once described as target rich—so the region attracts sales managers and their people. But if everyone has the same idea this also drives up competition to the point that no one really wins, which is one interpretation of the data.
Second, variable pay is a much larger share of compensation in the East than in the West: just over $47K in New York but only $8.8K in Seattle. This leads me to wonder about turnover; it's 41% in Seattle and only 16% in New York. Do people leave more readily in Seattle because there's never much on the table? Or flip it around: are companies reluctant to fire people in the Northeast because they're all taking a serious draw against commissions? Fire people and say good-bye to the recoverable draw.
You can understand a lot from a few charts, but as a manager you need to make the right calls when hiring and expanding. A compensation plan has to be competitive for the market the office serves, or you'll risk losing good candidates to offers more in line with local norms.
Real costs of real estate
The least expensive cities for real estate are Atlanta at $23.51 per square foot, Denver at $25.80, and Chicago at $30.13. So at the end of the day you find yourself calculating quota attainment vs. real estate costs vs. fixed and variable costs per rep. At some point you need to factor all of it into the actual cost of a rep and play that against expected revenues. But still you're not done, because as good as these numbers are, they are external to the specifics of most businesses.
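That calculation can be sketched as a back-of-the-envelope model. The attainment and real-estate figures below echo the numbers cited above; the base pay, variable pay, quota, and square footage per rep are hypothetical inputs I've assumed purely for illustration.

```python
# Back-of-the-envelope cost-per-attained-dollar model for siting a sales office.
# Attainment and $/sq ft echo the figures in the text; base pay, variable pay,
# quota, and sq ft per rep are assumed illustrative inputs.

def cost_per_attained_dollar(base_pay, variable_pay, attainment,
                             quota, sqft_per_rep, cost_per_sqft):
    """Fully loaded rep cost divided by the revenue that rep actually attains."""
    rep_cost = base_pay + variable_pay * attainment + sqft_per_rep * cost_per_sqft
    expected_revenue = quota * attainment
    return rep_cost / expected_revenue

# Denver: 83% attainment, $25.80/sq ft; Chicago: 46% attainment, $30.13/sq ft.
denver = cost_per_attained_dollar(80_000, 40_000, 0.83, 1_000_000, 150, 25.80)
chicago = cost_per_attained_dollar(80_000, 40_000, 0.46, 1_000_000, 150, 30.13)
print(f"Denver: ${denver:.3f} per attained dollar, Chicago: ${chicago:.3f}")
```

Under these assumptions, Chicago's cheaper rent does not come close to offsetting its lower attainment, which is the kind of non-obvious result the raw city rankings hide.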
You still need to understand where your customers are and how they like or expect to be treated, and for that there's a big overlay of experience and industry clustering. So as good as this information is, we need to take it as a first cut when determining where to situate the next sales office. That's not particularly surprising. What's great, though, is that with this information and more to come, we can begin to rely less on gut instinct and manage better by the numbers.
* * *

It gives some measure of how seriously we take AI today that I went to two conferences last week and sat through two panel sessions on the subject. At CRM Evolution, I was part of the discussion in a breakfast session Paul Greenberg organizes each year; then I flew to Las Vegas for the Oracle CX show, where executives involved in the adaptive intelligent applications product line presented a session for analysts and press that tried to define the basics.
I have to say that neither session was especially illuminating, which is not to cast aspersions on any of the participants but rather to provide a gauge of how early we still are in the market cycle. If it seems hard to define AI today, it's equally difficult to wrap our heads around its potential.
In Washington, at Evolution, people talked about the trust factor and how easy or difficult it will be to accept that an algorithm might know more about a situation than the user. For instance, a GPS system will “know” about road conditions that humans can’t see.
In Las Vegas the discussion started with the now-typical dystopian fear that algorithms or bots might be about to steal our jobs; for some reason this seems to engender visceral fear in the population in a way that packing up factories and shipping them to low-wage countries might not.
It struck me, thanks to an accumulation of research, that while we might talk about lost jobs or trust issues, the reasons for unease about AI—or whatever we decide to call it—might be more fundamental. It might be that AI signals a significant diminution of a style of thinking that is uniquely human, something that has evolved with us, to be replaced by a style of thought that has been with us only since the Renaissance and the development of the scientific method.
First, let's agree on terms. The broadly knowledgeable, silicon-and-metal-based intelligent life form that has lurked in science fiction for the better part of a century is still fiction and will be for some time. Those who are concerned about such an entity replacing us will have to wait many more years before something like HAL is available, and then, like the first steam engines, we'll discover it's too big to move around, so it will be limited.
The AI that we increasingly see in CRM and other business apps is rather one-dimensional, able to tell you the traffic but nothing else. It's analogous to the robots on car assembly lines, each programmed to make a weld or grind a surface, but that's it. Making an assembly line is a matter of setting up many robots in a row, each doing something different, not of empowering some super machine to do it all.
So what's everyone so concerned about? Simply put, it's the difference between deductive and inductive reasoning, and now we enter the weeds, just a little. Deductive reasoning is something we humans do well; it involves beginning with a premise and deriving conclusions. Surprisingly, math consists of a lot of deductive reasoning: certain assumptions or postulates start off our reasoning, from which we make deductions. More generally, you can deduce from basic ideas too, like the famous syllogism:
All men are mortal
Socrates is a man
Therefore Socrates is mortal
Note, however, that getting a true and useful conclusion requires a true and useful assumption, postulate, or statement. If we'd started with "All men have feathers," we would have gotten nowhere fast even though our logic would have been impeccable. Politics is like that today: without trying to hurt anyone's feelings, there are a lot of examples of situations where we move backward from conclusions to discover the premises it would take to get there. But that's not the purpose of this piece.
On the other hand inductive reasoning is the logic of science and the kind of thinking we all do sometimes, especially when there’s time and probably paper and pencil. Inductive reasoning involves gathering data and applying statistics to discern patterns. It’s the heart of the scientific method and the reason we live in the world we do instead of one where we’re all subsistence hunters and farmers.
Inductive reasoning involves the language of hypothesis and proof and theory, but not belief. We believe what the data tell us, not what we assume, and when the data reveal something wrong about our beliefs, we change our beliefs. We don't work backward to discover our premises. Inductive reasoning is what drives AI, and I think it is the heart of our heartburn.
In both sessions I attended last week, someone in the audience inevitably brought up the trust issue, as in: I can't see how I can trust an algorithm, and I feel I simply must have the option to override it with my gut instinct. If I unpack this, I get the notion that we're comfortable with our deductions and the premises they spring from, and it's rather frightening to have to rely on not much more than statistics. Yet the times in human history when we've made progress are precisely those times when we pushed back the boundaries of premise and belief and substituted cold, hard facts derived from data.
What's different today is that we don't have a single man like Galileo proposing that the earth revolves around the sun because that's what his data tells him. We have millions of them, and their proposals are both profound and banal. In the process, we are rapidly pushing deduction back to a smaller footprint than has ever been the case for humanity, and that can feel frightening.
* * *

Oracle jumped into the AI and machine learning space for its CX products (a.k.a. CRM) and other applications like HCM at OpenWorld with an interesting difference—a huge data store to help educate the algorithms that work for you. Now we're waiting for the products, to be delivered this year.
Machine learning depends on data about prior situations that the learning algorithms can use to get smart about a situation. Ten examples are good, a hundred are better, and generally the more samples there are, the more refined a recommendation can be. That's why machine learning never really ends. Like a great player or team, the learning and practice never stop, and neither does the improvement. But it's worth understanding that improvement beyond a point of basic competency will slow down regardless of what you're modeling.
When you’re a kid you can make great strides in almost any sport but as you progress those strides become smaller and they’re harder won. Consider Olympic swimming or track and field where athletes try to shave fractions of seconds from world records. Often the difference between gold and silver can be an arcane difference in technique.
In business and machine learning, algorithms don’t stop learning for a very good reason—every new bit of data suggesting some fractional difference could be the harbinger of an evolving trend and the only way to stay abreast of that evolution is to stay current with the data. So you can quickly see that data is critical to the success of machine learning and that’s a big deal because few organizations possess all of the data they would ideally need to feed the algorithms that drive decisions.
Moreover, data quality is a major issue. While a business might hold a great deal of customer data, its quality, or lack thereof—the duplications, misspellings, ambiguous designations, and incompletions—has for years been the bane of data scientists and analytics users trying to get information from their data.
Data quality is one thing that will distinguish Oracle's Adaptive Intelligent Applications. Scheduled for delivery soon, Adaptive Intelligent Applications will work with customer data as well as Oracle's Data Cloud, a collection of more than 5 billion consumer and business profiles with over 45,000 attributes. Combining a business's specific customer data with this third-party data will yield important insights unique to that business and its customers.
Businesses have always sought out fine differentiators like those these solutions can provide to separate them from their rivals. Depending on the stage of market evolution, that could mean product differentiation, value-added services, product line extension—almost anything. The problem with all of these approaches is that they're superficial. They're all vendor, brand, or product centric, because that's all a business could control prior to the development of very powerful computing, modern analytics, and machine learning. If you wanted to peer into the mind of your customers, you had to rely on gut feel—usually that of an executive who'd been involved in the industry for a long time.
The trouble with gut instinct is that it's often wrong. The research that led to a Nobel Prize in Economics for Daniel Kahneman—see Michael Lewis's new book, "The Undoing Project"—shows that the rules of thumb, or heuristics, that we use in everyday fast decision-making are often wrong or reveal a bias. Interestingly, since machine learning is definitely not human, it can avoid heuristics and biases and work the way we work when we concentrate, work slowly, and perhaps use pencil and paper. The point of machine learning is to have the benefits of thinking slow, with a pencil, without having to do the work. In the process, machine learning can reach more users and prevent more incorrect assumptions from coloring business decisions.
To be clear, this does not amount to a one-size-fits-all approach to analytics. The Adaptive Intelligent Applications that Oracle has built also come with supervisory controls that enable users to fine-tune their analyses to the specifics of a business's needs. So the power of Oracle's Adaptive Intelligent Applications will come from its well-crafted algorithms, but also from its Data Cloud. And the fact that it might prevent users from leaning on an estimate or rule of thumb might turn out to be just as valuable.
* * *

Artificial intelligence and machine learning (AI and ML) have taken the industry by storm, with some saying they will usher in a new age of better business processes and customer orientation while others fret that automation will kill jobs. Both might be true.
There will definitely be jobs that no longer make sense for humans to do thanks to automation. Generally they're entry level and not much fun, but this raises the question of what, then, becomes an entry-level job. I took a briefing last week with Conversica, a company that uses AI to do general-purpose triage for early-stage sales leads. I am not affiliated with Conversica and am simply reporting, but the technology seems pretty cool.
Conversica is essentially a bot that responds to things like email with appropriate information and then follows up. When a customer indicates a need to interact with a human, the bot makes the transfer. The bot is tireless and can work 24/7 so there’s a lot to like. This is a good example of automation replacing a human but I am told the bot simply makes it possible to redeploy the human to another job that requires real human thinking. I am sure it can.
Often when I see something like this it’s a new technology applied to an old problem and truth be told, that’s generally how new technology gets disseminated. For example, it has taken a long time for social media to establish a niche different from being a cheaper email platform. Some people haven’t made the leap in understanding but generally we’re there. So I fully understand employing modern AI technology to support the sales effort; sales is one of the first stops for new technologies.
But I'd like to divert your attention for a moment to some nearly neglected business processes where AI and ML could make real contributions. They would likely not replace anyone, because in many cases the job isn't being done at all. Customer loyalty is one area that could use some help, even the help of bots. Loyalty has always been a passive thing, in which customers are expected to do something that demonstrates loyalty in the moment, rather than an active thing that businesses pursue.
We expect customers to do something loyal, usually making another purchase, for which we then give out rewards. That's okay, but it misses the point. Rewards are by their nature backward looking, and loyalty ought to be something of the present and future. Say I advocate on behalf of a favorite product in a way that may drive future business for the company. Often that's not part of a business's idea of loyalty, and so it goes unrewarded even though it is a good loyalty indicator. Too bad.
Now imagine if you turned some of your AI muscle toward loyalty. What if you had a bot that caught customers doing good things and rewarded that behavior? The reward, not associated with a purchase, might inspire more loyal conduct, and all of this could be done without human intervention. So this little exercise just 1) invented a job for a bot, 2) replaced no humans, and 3) improved business performance.
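The bot's core logic is almost trivially small, which is part of the point. Here is a minimal sketch under assumed names: the event types and point values are hypothetical, and a real system would sit on an event stream rather than a list.

```python
# Minimal sketch of the loyalty bot imagined above: watch customer events,
# catch non-purchase acts of advocacy, and issue a small reward automatically.
# Event names and point values are hypothetical, for illustration only.

ADVOCACY_EVENTS = {"referral": 50, "product_review": 20, "social_mention": 10}

def reward_loyalty(events):
    """Return (customer, reward) pairs for advocacy events, ignoring purchases."""
    rewards = []
    for customer, event in events:
        points = ADVOCACY_EVENTS.get(event)
        if points:  # purchases and other unlisted events fall through unrewarded
            rewards.append((customer, points))
    return rewards

stream = [("ann", "purchase"), ("bob", "referral"), ("ann", "social_mention")]
print(reward_loyalty(stream))  # [('bob', 50), ('ann', 10)]
```

Note that the purchase is deliberately not in the reward table; that's the inversion described above, rewarding the present and future behaviors that a purchase-triggered program never sees.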
My point is that there are lots of incremental improvements like this example waiting to be discovered as we contemplate using AI. Many of AI’s early deployments will be like this, not very sexy but useful. That’s what happens on the far side of the hype cycle, after everyone’s tried applying the hot new technology to the oldest of problems. It’s maybe even after a lot of people have become mildly disenchanted with the failed promises that the new category couldn’t deliver.
I’ve seen this hype cycle movie before and I am wondering if we might be able to speed up the process. It’s really nothing more than imagining new processes rather than being happy pursuing a well-worn path.