March, 2017

  • March 31, 2017
  • There’s a big difference between B2C and B2B analytics that no vendor seems to be addressing, and it involves the consumption model. I recently spoke with K.V. Rao, founder and chief strategy officer of Aviso, an analytics company focused on sales, and his unabashed opinion is that, “If you’re trying to expose insights, and make things consumable, you have to address workflow.”

    He has a very good point, especially for buyers who may be having trouble figuring out what they need. The decision process runs through digital disruption (am I being left behind?) to big data (what do I do with it?) to analytics and machine learning (same stuff, right?). Usually at this point in a client discussion I ask people to tell me what kind of information they want to get, but now I think that might be jumping the gun. Before asking what kind of information you want, it would be better to understand what processes or workflows you’re trying to influence.

    For example, we all want to sell more and we’ll try almost anything to do it, but that opens a big can of anacondas. What part of the marketing and sales process do you want to influence or spiff up? Is the problem the quantity or the quality of leads? Do your reps get stuck in one part of the sales cycle? Are renewals off? Do your customers up and leave without warning? We could go on, and there are analytics tools that can help with all of that, but you need to get the diagnosis right.

    Workflow may not be the first thing that comes to mind when considering analytics but it might be the big silverback gorilla sitting in the corner. Workflows are vastly different in the B2C world than in a B2B situation. Simply put, when dealing with consumers, analytics is aimed squarely at aiding customer decision-making in the moment, so as Rao points out, “The workflow is almost non-existent.” Good point and not at all what decision-makers in the enterprise encounter.

    Consumers are trying to figure out whether or not to buy, and that’s rather binary. Enterprises, on the other hand, buy by committee and need to develop information from whatever data they collect, so their need is for long-term information to build a purchase case.

    Marketing, sales, service and support might all benefit individual users through analytics for in-the-moment tactical decision-making, but only to the point that those users are already working toward organizational goals rather than individual choices. Last week IBM and Salesforce introduced a new partnership based in part on their respective analytics tools, Watson and Einstein. There’s great interest in how these two will work productively together without stepping on each other’s toes, and the evolution of this relationship will say much about the future of analytics generally. As I see it, and as the press release strongly hints, Watson will be important for providing strategic situational information while, in CRM at least, Einstein will support the tactics of a vendor-customer interaction.

    One example in the press release was Watson providing retailers with weather forecasting information that could easily be applied to better understand the traffic pattern to expect for the day. Einstein, on the other hand, would still be responsible for understanding data about customers’ past purchases, new requirements, upsell and cross-sell potential, and more.

    Retailers have had all of this in mind for a very long time, even before Watson and Einstein, and they coped, though not always well. As Mark Twain is supposed to have quipped, “Everyone complains about the weather, but no one does anything about it.” And as the great retailer John Wanamaker, or possibly Marshall Field, once supposedly said, “Half of my marketing budget is wasted. I just don’t know which half.”

    So the different roles of analytics are intended to solve those and similar problems, but before they do we still need to get a handle on the workflow we’re trying to influence. That’s still a job for the human mind, as is the most important decision of all: whether to accept an analytics-driven recommendation or to rely on other information that the genius software is not privy to. For instance, even with a great weather forecast it’s probably still vitally important to know if there’s a parade coming through downtown at noon.



    One of the foundational ideas of business is the Pareto analysis that tries to identify the small portion of factors that are responsible for the majority of business results. You know the type—80 percent of profits come from 20 percent of the customer base and similar observations.

    Vilfredo Pareto, an Italian engineer and economist, formulated a business classic that bears his name and that we now simply refer to as the 80/20 rule. In fact the rule is malleable and subject to change based on all kinds of factors that are unique to a business. Some businesses see a 90/10 effect or even a 60/40 split, but the point is that a small portion of all contributions to business success accounts for the lion’s share of results. But which ones are right for your business?
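    To make that malleability concrete, here is a minimal sketch in Python of how a business might measure its own split rather than assume 80/20. The revenue figures and the function name are invented for illustration only.

        # Minimal sketch: find what share of customers produces 80 percent of revenue.
        # The revenue figures below are hypothetical, purely for illustration.

        def pareto_split(revenues, target_share=0.80):
            """Return the fraction of customers accounting for target_share of total revenue."""
            ordered = sorted(revenues, reverse=True)   # biggest customers first
            total = sum(ordered)
            running = 0.0
            for count, amount in enumerate(ordered, start=1):
                running += amount
                if running / total >= target_share:
                    return count / len(ordered)
            return 1.0

        if __name__ == "__main__":
            revenue_by_customer = [120_000, 95_000, 40_000, 12_000, 9_000,
                                   7_500, 5_000, 3_200, 2_100, 1_800]
            share = pareto_split(revenue_by_customer)
            print(f"{share:.0%} of customers generate 80% of revenue")

    Run against real billing data, the same few lines would tell you whether your business is an 80/20, 90/10, or 60/40 shop.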

    In a Harvard Business Review article, “AI Is Going to Change the 80/20 Rule,” analytics guru Michael Schrage offers three tips that can help any manager do a better job of leveraging big data and analytics. The one that most interests me is the idea that you can almost have too many Pareto analyses, making the whole exercise full of noise and less predictive.

    For instance, what’s most important to keeping customer churn and attrition low? You could develop models, metrics, or KPIs that assess all kinds of things about your business, from product quality to customer service to the availability of online help; the list is endless. But only some of the analyses you come up with will be really important, and you’re more likely to arrive at a clear understanding if you track the interaction of multiple KPIs and their relationships with each other.
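    As a back-of-the-envelope illustration of what tracking those relationships might look like, here is a small Python sketch; the KPI names and the monthly figures are entirely made up.

        # Minimal sketch: relate several hypothetical KPIs to churn and to each other,
        # so only the analyses with real signal get attention. All data is invented.
        from itertools import combinations
        from math import sqrt

        def pearson(xs, ys):
            """Plain Pearson correlation; enough for a back-of-the-envelope look."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sx = sqrt(sum((x - mx) ** 2 for x in xs))
            sy = sqrt(sum((y - my) ** 2 for y in ys))
            return cov / (sx * sy)

        # One value per month of hypothetical data.
        kpis = {
            "support_tickets":  [30, 42, 55, 61, 70, 83],
            "product_quality":  [4.5, 4.4, 4.1, 4.0, 3.8, 3.6],   # survey score
            "help_page_visits": [900, 950, 1000, 980, 1020, 990],
            "churn_rate":       [0.02, 0.025, 0.031, 0.034, 0.04, 0.047],
        }

        for a, b in combinations(kpis, 2):
            print(f"{a} vs {b}: r = {pearson(kpis[a], kpis[b]):+.2f}")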

    A couple of books ago, I wrote about the need for triangulation when using metrics and KPIs, and Schrage seems to be on that path with his idea, which is to perform Pareto analysis of your Pareto analyses. This, of course, sounds contradictory, and it is, but it also makes sense. Today a business can easily get to a point where it has not only too much data but too many analyses, and when that happens going up a level of abstraction makes all the sense in the world, especially when you have the compute resources to perform the function automatically.
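    In code, Pareto-ing the Paretos might look something like the sketch below, where each candidate analysis carries a score for how much of the churn picture it appears to explain. The analysis names and scores are hypothetical, not Schrage’s.

        # Minimal sketch of "Pareto-ing the Paretos": given many candidate analyses,
        # each scored by how much of the outcome it appears to explain, keep only
        # the vital few that carry most of the total signal. Scores are invented.

        def vital_few(analysis_scores, coverage=0.80):
            """Return the analyses that together account for `coverage` of total signal."""
            ranked = sorted(analysis_scores.items(), key=lambda kv: kv[1], reverse=True)
            total = sum(analysis_scores.values())
            keep, running = [], 0.0
            for name, score in ranked:
                keep.append(name)
                running += score
                if running / total >= coverage:
                    break
            return keep

        scores = {
            "churn_by_support_tickets": 0.42,
            "churn_by_product_quality": 0.27,
            "churn_by_onboarding_time": 0.14,
            "churn_by_help_page_visits": 0.06,
            "churn_by_billing_disputes": 0.05,
            "churn_by_geography": 0.04,
            "churn_by_browser_type": 0.02,
        }
        print(vital_few(scores))   # the handful of analyses worth watching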

    Note that this is different from using one model or a small set of models at the department level. Those models are more tactical and can significantly help individuals do their jobs better. A model that can tell a sales rep that this opportunity won’t happen but that one might is a real benefit, because it saves the sales team from investing resources where they are not likely to be productive while focusing them on the better opportunities. But that’s not enough to run a business with, and the purpose of the triangulated or Pareto-ed Paretos is to get a bigger picture that can include multiple departments and even input from customer communities, to better understand how the company is performing overall.

    This is a great example of two things: the power of modern intelligence systems to sift through a business’s voluminous data, and the way technology advances open up new ways of thinking that were not available before. Not that long ago, a business leader might have had access to churn and attrition numbers, and that leader might even have been able to see a month or two ahead to understand which customers were in danger.

    Too often those danger signals were a green light to offer discounts on a future purchase. That sounds shrewd, but what if the customer’s problem had nothing to do with pricing? What if the customer needed an upsell of services or additional products to make everything work? Schrage offers examples of this and I’ll leave it to you to look him up.

    But today with this advanced approach, we’re becoming better able to understand the reasons why our primary indicators are flashing. With that we can be more confident about entering a customer interaction with relevant knowledge and we can drive our interactions toward conclusions that are more mutually beneficial.




    I got a wake-up call from reading “You Need to Manage Digital Projects for Outcomes, Not Outputs,” an article in Harvard Business Review by Jeff Gothelf and Josh Seiden, and the headline says it all.

    How many times have we been lulled into complacency by getting a project or a product done but not necessarily well received, because reception was someone else’s gig? But this is a great reminder that the job isn’t done simply because we produce something and shove it out the door.

    It’s also a cue to pay close attention to what our CRM systems tell us; more on this in a moment.

    If you look at this idea a certain way, it’s telling us that our concept of a product has changed. Actually, it changed a long time ago, though old rationales persist. A product is no longer the thing itself but the thing plus all the services, processes and procedures we attach to it. Geoffrey Moore, and before him Regis McKenna, called it the whole product.

    For a long time, whole product was an idea that many businesses could safely ignore; today that group is smaller but it still exists. The big turning point was the invention of the subscription, or selling a product as a service. Suddenly it was a lot harder to push a product out the door and forget about it. Subscription vendors don’t make much money on an individual sale and must retain customers for repeat business or they’re lost.

    Recruiting new customers in this scenario is expensive and can drain the coffers, so getting to outcomes and not simply outputs is critical for them. What’s also critical is that customers get it; they’ve been trained to expect subscription-like services from any vendor, regardless of what’s on offer.

    Gothelf and Seiden’s article introduces the idea of mission command, something we might have just called taking initiative back in the day. The Prussian army, of all groups, was the home of this alternative viewpoint, according to the article. It essentially said that we should rely on individuals to make good decisions to achieve outcomes, and this is especially true in war, which the Prussians excelled at. In the fog of war, the best-laid plans can often be rendered useless by events, so it’s important to instill in individuals a sense of the mission and the objectives while giving them great latitude to act in the moment. If you think the answer is to do more detailed planning, go back to the fog of war; it dashes plans with aplomb. Giving the individual latitude in achieving outcomes was therefore critical.

    Mission command sounds a lot like what it was like to be in sales a few years ago. Communication was primitive; you met with a sales manager once a month and reviewed the pipeline, updating it as you went along. The rep had a territory and was responsible for whatever happened in it, so it was incumbent upon the rep to take actions that would benefit the company without checking in with headquarters all the time.

    But what’s new about it is the emphasis it places on the individual to get things done despite AI, machine learning and other nifty new decision-support tools. We’re more accustomed today to staying in our lanes and doing our jobs, expecting that once a thing is produced others will take responsibility for getting the desired outcome. That’s not a terrible division of labor, but the authors point out this isn’t the way things have always been, and the Prussian retrospective is useful.

    This got me thinking about AI and ML, which have lately invaded our CRM space. It also made me rethink my assertion that these technologies can help us avoid the mistakes humans make when we let our brains short-circuit by relying on heuristics rather than actually thinking something through.

    It tells me concretely that we need to find the right balance between being freewheeling independent actors in business and becoming slaves to the information our systems spit out. It all comes down to what Erik Brynjolfsson and Andrew McAfee wrote about in “The Second Machine Age”: we have to find optimal ways to leverage our machines in a 1+1=3 model.

    For me it all comes down to better listening skills, which start with asking better questions. Open-ended questions about customers’ likes, dislikes, and especially their feelings about our products are the things most likely to tell us how we’re doing relative to outcomes, not simply output.

    The further we progress in this, the more I see a bifurcation happening. We use a lot of quantitative data to determine success against our output goals, but we need to do better with qualitative data to gauge success in outcomes. We still don’t do enough with qualitative data, and if I were an investor, I might look into novel solutions in that area.

