There’s lots of talk pro and con these days about bots, AI, and intelligent assistants. Much of it isn’t new; it’s been percolating around the industry for decades. Vinnie Mirchandani, a friend and truly gifted analyst, recently wrote a book, Silicon Collar, that accepts that automation may be eliminating jobs but optimistically holds out for a silver lining. Mirchandani firmly believes, and documents, that businesses and individuals are seizing the opportunity to build new human-mediated processes (and jobs) that leverage intelligent systems.
Another friend and equally gifted analyst, Esteban Kolsky, says not so fast. Like many of us, Kolsky has seen this movie before. He points out that adoption has been painfully slow—so slow, in fact, that AI fails what I call the Gates Test. You might recall that Bill Gates once said that we overestimate what we can do in two years and underestimate what we can do in ten. Indeed, the gestation period for an overnight success seems to be ten years these days.
But as Kolsky points out in a recent post, “The latest survey, to be shared at Dreamforce 2016 and published soon thereafter, says that from the single digits in adoption they enjoyed for the entire 2002-2012 decade we are seeing adoption nearing 15% now for automated bots and intelligent assistants.” Slow indeed. What’s been holding things back is a lack of two things: 1) sufficient computing power and 2) a clear need.
We can all think up scenarios where a little help from something with AI embedded might be good, but on closer inspection we realize there are other ways to get the job done. AI is a heavy lift, or at least it once was. Back when the working models of AI were set down, computing power was not up to the job, but really fast processors, multiple cores, flash memory, and the cloud have since made it possible to concentrate the power needed to drive AI. That still leaves us with finding a clear need.
I offer the following analogy: we live in a spreadsheet-dominated world with a linear mindset, but we are moving to a world where the lines are anything but straight. To make sense of curved lines you need calculus. It’s calculus, especially the integral variety, that tells us what’s going on in a process with plenty of funky ups and downs. In the spreadsheet era, which I firmly believe is ending, or at least transitioning, we searched for averages and made straight-line derivatives from them.
This led to some dumb ideas, like calculating what an average deal is and trying to fit all deals into it as if it were a straitjacket. It also harkens back to the statistical awakening of the 19th century, when the term “average man” first came into use. The average man is a fiction, but a highly useful one that gives us a basis for modeling.
But when you go for an average you have to ignore profitable outliers and other things that don’t fit your model. In the age of business by transaction, the straight-line model was good enough. Over most of this century so far, though, we’ve seen that model become less effective as the vendor-customer relationship moved toward the micro-transactions of subscriptions. A straight-line model doesn’t work very well in subscriptions because at a micro level all transactions look the same. It’s only when you expand your view that the micro-transactions reveal trends that might be good or bad. As a result we’ve been left without a model.
A model for the vendor-customer relationship that works involves calculus, at least at the metaphorical level. Calculus gives us the flexibility to model many variables involving customer demographics, purchase history, life-cycle stage, and of course the transaction before us.
I think many people in business have a working appreciation of all this, though they are certainly still in the minority. This is where AI comes in, because I see its algorithms as calculus in a box. AI gives the average businessperson, who has no interest in calculus or who might have studied it decades ago, the ability to apply more sophisticated modeling to increasingly complex business.
So this is a long-winded attempt to say that at last we have a clear need for AI as well as the horsepower to run it. The need is all around us and if you’ve ever caught yourself wondering at how sophisticated business and our supporting systems have become, you’ll likely be grateful that there’s a new weapon in the arms race.
If you read a lot, as I do, you might notice that almost daily there’s a new study contradicting some earlier research. Something causes cancer, then it’s good for you. You know the drill. What’s going on here? Do we simply not know what our research is saying? Can nobody correctly interpret the data? None of this would mean much to CRM except that, with the advance of big data and analytics, the front office, i.e. the relationship between vendors and customers, is coming to resemble many other endeavors that rely on data analysis. Here is my take on all of this.
Very often the research we get in the popular press and in business interactions represents the findings of correlation studies. Simply put, correlation tells how strongly two events are related to one another, and it takes some sophistication to understand.
We can think of correlation loosely in terms of probability, but we need to understand what that means. A coin toss has a 50/50 chance of coming up heads or tails (a .5 probability), so 50 percent is exactly neutral. If something had only a 30 or 40 percent chance of happening, the relationship would lean negative; in other words, the probability of the thing not happening would be greater. But a 30 or 40 percent chance of something happening is not zero, which is why we still get rain on days when there’s less than a 50 percent chance of it.
So a probability of greater than 50 percent is usually what we’re looking for, and the higher the number, the stronger the correlation. A 90 percent probability is interesting; 60 or 70 percent, not so much, for reasons that should be obvious by now. Still, a 90 percent correlation is not a sure thing; to use the weather analogy, we sometimes see sunny days when rain had a 90 percent chance of occurring.
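Since the argument treats correlation strength as the key number, here is a minimal Python sketch (mine, not the author’s) of computing a Pearson correlation coefficient; the “need score” and purchase data are invented purely for illustration:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient: ranges from -1 to +1,
    with 0 meaning no linear relationship between the two series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: a prospect "need score" vs. whether the prospect eventually bought.
need = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
bought = [1, 1, 1, 0, 1, 0]
print(round(pearson(need, bought), 2))  # strong, but nowhere near a sure thing
```

Even a high coefficient here says nothing about why the correlated prospects bought, which is exactly the causation gap.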
In business, we’re beginning to use correlation a lot, but that disappoints many people because correlation alone won’t tell us the other important part of the story: causation.
Causation is the reason behind the correlation. It’s the data that, added to the correlation data, provides the necessary information on which to base a decision. So, for example, a sales person evaluating prospects might look for high correlation between a prospect’s need profile and the vendor’s solution. That’s a good start, but it’s missing something very important: it says nothing about the prospect’s motivation, which might only be found through more traditional means like making a sales call.
What? Correlation isn’t enough? Consider this: at the correlation level, a prospect in need of a solution looks just the same as one that just bought something from your competitor. Causation in this case is another word for a buy signal, and if you look at buy signals and not just correlation, a customer that just bought will look very different in this one dimension from one still looking.
In sales and marketing analytics we’re mostly focused on correlation and that means we’re far from foolproof in our predictions. I am not trying to get on anyone’s case but the fact that we’re so vested in correlation simply tells us where we are in the lifecycle of analytics as applied to CRM—there’s more work to do.
Another way to look at the situation is through the lens of qualitative vs. quantitative data. So far I’ve been focused on quantitative analysis, like getting those 90 percent signals. Very often when we’re dealing with quantitative findings we’re looking at correlation data. Finding causation requires more sophistication, and it is often qualitative findings that tip the balance. Interestingly, you can develop quantitative findings from qualitative findings, but it takes a little more work. You need to ask questions differently, and you might need to score the answers to get a quantifiable result.
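As a sketch of scoring open-ended answers, here is one hypothetical approach in Python; the coding rubric and the sample answers are my invention, not examples from the book:

```python
# Hypothetical rubric: a researcher reads each open-ended answer and
# codes it onto a 1-5 scale so qualitative responses can be aggregated.
RUBRIC = {
    "strongly negative": 1,
    "negative": 2,
    "neutral": 3,
    "positive": 4,
    "strongly positive": 5,
}

def score_answers(coded_answers):
    """Average the coded answers into a single quantitative score."""
    scores = [RUBRIC[answer] for answer in coded_answers]
    return sum(scores) / len(scores)

# Four coded responses to an open-ended question about candy bars.
print(score_answers(["positive", "neutral", "strongly positive", "positive"]))  # 4.0
```

The extra work is in the coding step, where a human maps free-form answers onto the rubric; once that’s done, the qualitative data behaves like any other quantitative series.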
Finding causation starts with asking open-ended questions. In my book, Solve for the Customer, I use the example of creating a new candy bar. The quantitative approach might ask about preferences: Do you like coconut? Milk chocolate or dark? Peanuts, almonds, pistachios, nougat? The possibilities are almost limitless. At the end of your research you might have a very detailed understanding of how much your target audience likes the various components of a candy bar, but you wouldn’t be any closer to making something that would sell.
The qualitative approach is less sexy in many minds because it implies you won’t get enough information to work with, but consider this: in designing a candy bar, it would benefit you a lot to also ask open-ended questions about what people like most about candy bars, their favorite memories involving them, or how they fit into a person’s day. Those questions are almost limitless too, and the answers would surprise you and possibly tell you a lot about unmet needs in a crowded market.
If you don’t believe that’s useful, consider the story of Howard Moskowitz. Back in the day there were two competing makers of jarred spaghetti sauce, Ragu and Prego. Prego was the perennial number two in the market and wanted to take the lead, so it hired market researcher Moskowitz to figure out how. At the time there were also only two kinds of sauce on the market, plain and spicy. That’s it, just two. Moskowitz hired chefs to make what ultimately became 45 kinds of sauce, many with chunks of things in them like tomatoes, meat, and vegetables.
Moskowitz discovered that about one third of the American public wanted chunky sauce but, incredibly, there was none on the market. Previous research had concentrated on getting quantitative answers to questions about existing choices, which boiled down to: how do you like our sauce? There were no open-ended questions about what caused people to like spaghetti or Italian food. Moskowitz’s taste tests provided the open-ended questioning that led to the discovery of a new market that’s been worth billions ever since.
My point in all this is that you need both quantitative and qualitative information to arrive at correlation and causation if you hope to understand customers. If you’ve embarked on an analytics journey that’s great but keep looking and formulating your strategy. Buying a single product is definitely not the end of the journey but a beginning. If you’re a vendor, don’t make the mistake of thinking that your single product is the final answer to market need. It’s a stepping stone and you need to position yourself accordingly.
We are now almost 15 years into the century, and for all of that time I have been analyzing the CRM industry as it has evolved. This year, rather than simply reviewing the industry’s progress over the last 12 months, I think taking a broader view of the decade and a half might be more interesting. It certainly gives us a great perspective on how far we’ve come.
CRM was already a thing at the turn of the century, having gathered steam throughout the 1990s. But it suffered from the same troubles ERP was having at the time. The on-premise software was expensive, and it usually cost somewhere between 2 and 3 times the license fee to get the products working in your business. Part of the high cost of integration was that few vendors had all of the products under one roof — i.e. well integrated. You could buy SFA from one vendor but you’d need to buy call center and service products from others, and subscriptions were embryonic.
Marketing automation as we know it today didn’t exist except in the mind of a Canadian entrepreneur, Mark Organ. What passed for marketing software was largely an accounting system designed to help manage marketing spending. It’s worth noting today that marketing spending only came under control once we were able to apply analytics to the great heaps of customer data we captured. We got costs under control because we were able to be smarter about where we spent our resources, not because we watched the pennies.
In this scheme there was no thought of using software to support business processes beyond covering actual transactions, which is not the same thing. CRM grew up in a time when transaction management was all you needed, so no one knew the difference. But try applying 15-year-old sales or call center transaction management tools to today’s process-oriented business world and you’ll have your clock cleaned by your customers.
Process orientation is, to me, the greatest difference between then and now. Even though our businesses for the most part still do a middling job of process support today, the tools are much better and any company that wants to take on the challenge can bring together a suite of end-to-end processes supported by modern software.
So why don’t we do a better job of supporting front office business processes today? I suspect it’s because we don’t know what that support looks like. For instance, consider the customer onboarding process. I don’t think it existed 15 years ago, and if it did in your company, it was completely manual. Onboarding was well served in cases where there was a big need for installation and training — recall that 2x to 3x multiplier. But onboarding for smaller products was overlooked, either because products were assumed to be intuitive (though many weren’t) or because a manual came with them. At any rate, because there was no such thing as a subscription as we know it now, once the sales transaction was completed a vendor simply moved on to the next opportunity.
Onboarding became a real issue when vendors noticed an avalanche of calls to the service center. With some products you could sense customer resentment, especially as third-party sentiment sites and communities gained traction. Manuals went online and FAQs proliferated, and that satisfied enough people, though it barely solved the problem.
Today, vendors treat onboarding more seriously, in part because so many things are now bought as subscriptions. With subscriptions, if you don’t onboard customers successfully they can easily leave you, taking their revenue streams and any investment you’ve made in them out the door. So vendors chase customers today to get them involved with their products after the purchase. They have databases and procedures designed to get customers up to speed and happily engaged so that frustration doesn’t rise and lead to attrition.
That was just onboarding. How many other front office business processes need the same treatment? There are two answers to that question — a lot of them and all of them. There are a lot of business processes that the front office engages in and they all need to come under the jurisdiction of a unified system for many of the same reasons onboarding is so important.
You might be able to automate support for a single process like onboarding through a Herculean programming effort, but if you try to duplicate it for all of your front office processes you’ll go crazy and broke; it’s hard to say which happens first. Yet here we are on the eve of 2015 still talking about individual apps as if it were the eve of the new century. We have apps that will shave a few minutes off a sales rep’s day, or apps that will enable a call center rep to get off one call and on to another 10 seconds faster, but we still don’t get the results we want.
Those apps make about as much sense as installing an accounting function in marketing did a long time ago. The route to better front office business processes runs through data and analytics. But the findings must be incorporated into the next customer encounter through machine learning and other nifty tools. Doing this brings us up to the present and reveals the importance of platform, the thing that incorporates all of our data, analytics, social, and mobile technologies into a powerful tool that enables our businesses to change as quickly as our markets.
That’s the biggest difference between today and 15 years ago. Back then you could run a business based on the old mass production and mass marketing paradigms because few things changed much. Today change is constant, and our ability to keep up resides not in our systems but in our platforms. What makes up our platforms and how they’re brought to bear on business challenges is what the future of CRM is about.
Sales people have been demanding better leads for a long time and today marketing is in a position to provide them. At the same time, marketers have discovered that the kind of data they collect is as important as its volume.
Marketers need to provide rich prospect profiles that answer many of sales people’s most important questions, including: Is there a need? A budget? An executive sponsor? This is information that doesn’t come from simply buying a target list, and getting it requires more than collecting a small set of demographic data.
A few years ago sales people were happy with basic demographics — a name, a title, a phone number — and with that they’d schedule a meeting to capture what was really important, such as need, budget, the identities of the decision makers, and more. But with today’s high quotas sales people don’t have time to invest in this basic data gathering, and managers want more meetings that advance the sales process rather than ones that merely qualify. All of this has caused marketing to rethink its processes to meet sales’ demands.
Today marketers collect a variety of data through multiple techniques to enrich the leads that they ultimately hand over to sales. This approach also weeds out leads that might look good on paper but that will never close.
So marketers might start with a generic list and apply nurturing campaigns in hopes of cultivating information that sales people can use. For instance, they may use social media to engage with prospects and, in the process, build a knowledge base and share content. With nurturing and enhanced collection feeding more data to analytics, the refined sales leads that marketers deliver today are a thing of beauty. Unfortunately, that’s not enough anymore.
In addition to all the data we collect and analyze to produce sales information, we also need to be mindful of the current situation in all target accounts. By definition, situations change almost daily and the information about change, when added to data already collected through other marketing channels can produce a potent combination.
Information about changing circumstances comes in many forms — press releases, earnings reports, news items, analyst reports, and much more. When added to what we already know about our territories and target accounts, this new information can turn a pile of routine marketing findings into powerful sales knowledge that approaches intellectual property.
If you view IP as the sum of a company’s research, knowledge, patents, processes and the like, then you really should add sales knowledge to your list. The knowledge you can develop about your markets and target customers, in relation to your business’s other knowledge, designs, and plans is unique. You own it, no one else has it and it is a competitive weapon.
But just like filing a patent, there’s a long process involved in bringing knowledge together so that it can be used effectively. Until fairly recently, marketers didn’t have the tools needed to find the disparate data scattered across the Internet that could complete the picture of a prospect’s need. And that completed picture is most useful when the competition doesn’t possess it, so there is an advantage in being a first mover in the race to capture and collate market knowledge.
That’s why savvy vendors are increasingly relying on sales and marketing intelligence tools to scour the Internet for those bits of information that can complete a marketing profile and turn it into a hot lead. Every day businesses give off data about their aims, ambitions, results, and shortcomings, which can be viewed as moments of truth, and all of this data can be useful for vendors with specific solutions.
This takes some of the randomness out of selling. By identifying moments of truth and being able to suggest specific solutions, a vendor can move from a position of hawking a product to becoming a trusted partner and, of course, this provides the vendor with a competitive advantage.
For these reasons, developing customer knowledge really is like developing any other form of intellectual property in a company. It is also why so many forward thinking businesses see sales and marketing intelligence tools as vital to their continued success.
Like any sane person operating in the wilds of the Internet, I keep a weather eye out for what others might be saying about me. In other words, I have a vanity search crawling the web to find what there is to find. Just the other day, someone else’s vanity search (The Blue Ocean Strategy Institute) collided with mine to produce a virtuous result.
W. Chan Kim, a smart guy out of INSEAD who co-wrote “Blue Ocean Strategy” a few years ago, heads the Blue Ocean Strategy (BOS) Institute. If you aren’t familiar with its precepts, they, like many valuable things in life, boil down to common sense. The website offers this wisdom:
“Stop benchmarking the competition. The more you benchmark your competitors, the more you tend to look like them.”
Amen to that.
The point, which I borrowed from Kim and have endorsed for a long time, is that really successful companies sail out on the blue waters of the deepest ocean to find novel ideas that they turn into products and services to delight their customers and make a ton of money. After all, once you look like your competition, what do you compete on, price? Be still my heart.
I think of Apple and iPods, iMacs, iPhones, iTunes, and the Apple Store when I think about Blue Ocean Strategy. I also think about Salesforce.com, the leading enterprise software vendor, with a Blue Ocean Strategy to incorporate social ideas in everything it does and more. It’s also the company leading the charge on platform-dominated cloud computing, and you can see the results all around you, though not necessarily in the financial press.
The collision I mentioned is how their vanity search found an article I wrote on Salesforce and Blue Ocean Strategy, “Salesforce Opens New Channels with Chatter,” in SearchCRM (May, 2012). They’ve posted a link from their site to this one. Not bad, I say.
On to Zuora
To show you just how long it takes conventional thinkers to adopt a new idea, we can turn to Tien Tzuo, CEO of Zuora, the billing and payments company dedicated to subscription business. You might remember Tzuo from his Salesforce days as CMO and chief strategy officer. Tzuo’s recent article on AllThingsD was background for an interview he gave to CNBC discussing Netflix’s very good financial results.
Original AllThingsD article referenced http://allthingsd.com/20121128/wall-street-loves-workday-but-doesnt-understand-subscription-businesses/
It seems that despite all the success of cloud computing and subscription business, Wall Street still has a hard time when it comes to evaluating the success and financial soundness of subscription companies. The basic issue from what I see is the old saw, “A bird in the hand is worth two in the bush.” Wall Street analysts would rather have that bird tucked away in hand than the two or more fluttering in the bush even if they had a big net.
Perhaps that’s just human nature handed down to us from our ancestors on the savannah. Nobel laureate Daniel Kahneman tells us all about it in his recent book “Thinking, Fast and Slow”. It’s part of our nature. But we also have big brains, and now and then we are supposed to let those brains out for a walk, to make progress, to evaluate the safe bromides of inherited wisdom to see if they really tell us the truth about reality. On Wall Street that’s a pretty short walk.
Our big brains have come up with mathematical models and metrics that describe how subscription businesses are different and how they should therefore be evaluated. According to Tzuo’s article, there are four really important metrics: Annual Recurring Revenue (ARR), which is self-evident, and
“Growth Efficiency Index (GEI – the sales and marketing expense needed to acquire new dollars of ARR), retention rate, and recurring profit margin (how much non-sales and marketing dollars are spent on servicing existing ARR)…”
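To make those definitions concrete, here is a small Python sketch with invented figures; how the quoted definitions translate into formulas is my reading, not Tzuo’s math:

```python
# All figures are made up for illustration ($M); none come from a real company.
new_arr = 40.0        # new annual recurring revenue booked this year
sm_spend = 50.0       # sales & marketing expense to acquire that new ARR
start_arr = 100.0     # ARR at the start of the year
kept_arr = 90.0       # portion of start_arr still subscribed at year end
service_cost = 30.0   # non-S&M cost of servicing existing ARR

gei = sm_spend / new_arr                  # dollars spent per new dollar of ARR
retention = kept_arr / start_arr          # retention rate
recurring_margin = (start_arr - service_cost) / start_arr  # recurring profit margin

print(f"GEI: {gei:.2f}, retention: {retention:.0%}, margin: {recurring_margin:.0%}")
# GEI: 1.25, retention: 90%, margin: 70%
```

Numbers like these reward a different reading than a manufacturing-era income statement: heavy sales and marketing spending looks like a loss today but buys revenue that recurs for years.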
But caring more about that bird in hand means not using these metrics yet; instead, it awards a company like Salesforce a PE ratio normally reserved for startups with little revenue.
But getting back to the whole Blue Ocean Strategy, it’s the companies competing in the beauty pageant, punctuated by quarterly earnings calls and metrics from the manufacturing era (roughly steam, rails, oil, and cars), that get the ink and become obsessions of the cognoscenti of the concrete canyons of lower Manhattan.
It’s a shame, really. W. Chan Kim and Daniel Kahneman would not approve, I think.