April, 2010

    April 28, 2010
    I am flying Virgin America from San Francisco to Boston right now.  I had originally booked this flight for tomorrow but had to change.  The original fare was about $375, but when I made the change the equivalent seat would have cost more than $600 due to the shorter time before takeoff (standard airline pricing), so I accepted a downgrade to the same seat with fewer privileges for the same money plus a change fee.  Got that?  I am paying more for the same seat on a different day.  Got it.

    The differences included having to pay $25 to check my bag (it was free before) and no inflight TV.  The TV is literally turned off for moi, though the people on either side of me (did I mention I traded an aisle for the B seat?) have it.  So I am not allowed to see the oh-so-special content (I so miss the little map with the airplane skimming over it) AND I can’t order food or drinks from my seat.  I am not sure this means they won’t sell stuff to me, but this seems to skate very close to a Title IX violation or some such thing, but I am not a lawyer.  I am sure their legal beagles played out the hypotheticals before implementing the, ah, policy.

    Personally, being denied airline food, even Virgin airline food, is like being denied rutabagas, but the baggage charge is something else.  There are so many people carrying bags on board that all of the flights, on all of the airlines I have flown recently, are tagging and gate-checking bags for free.  The lesson here is to bring your bag and let the gate agent worry about it.

    I wonder if my status allows me to use the rest room or if I can only piss on the airlines metaphorically.

    Published: 14 years ago


    The VMForce announcement leaves us all with one big question.  Ok, many questions, like when and how much.  But the question I am most interested in at the moment is whether or not this is a single-tenant or multi-tenant thing, what it means for the industry and Salesforce’s multi-tenant chops, and possibly how this all plays into the discussion of public and private clouds.

    Let me start by digressing.  My understanding of VMForce is that it offers a way to move Java code to the cloud while enabling it to access data from Salesforce Cloud applications.  This happens automagically when a developer selects VMForce as the server for an application.  VMForce.com provides a virtual Java server and, voila, the application is available to users of the cloud instance, along with other Force.com applications, with proper security.
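
    To make that concrete, here is a minimal sketch of what I imagine the developer experience looks like, assuming the VMForce runtime exposes Force.com objects to Java code through a standard JPA-style persistence layer.  The entity and field names are mine and purely illustrative.

        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.Id;

        // Hypothetical mapping: assumes the VMForce runtime supplies a JPA
        // provider that surfaces a Force.com object as an ordinary entity.
        @Entity
        public class Candidate {

            @Id
            private String id;     // Salesforce-style record identifier

            private String name;

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        // Ordinary Java code persists the record; the platform, not the
        // developer, worries about where the data actually lives.
        class CandidateService {
            private final EntityManager em;

            CandidateService(EntityManager em) { this.em = em; }

            void add(String name) {
                Candidate c = new Candidate();
                c.setName(name);
                em.persist(c);     // stored alongside the rest of the Force.com data
            }
        }

    If that is roughly right, the Java code itself barely changes; only the deployment target does.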

    That all looks good, but just because the Java code is running on the VMForce platform does not mean the Java code is suddenly multi-tenant.  Does it matter?  Hmmm.  The VMForce virtual server is a multi-tenant device conjured up from multi-tenant resources, but the Java code is separate, so it’s operating as a single-tenant instance.

    Someone send me mail if this is not the way it works.

    The net effect to the customer is a familiar application running native in Salesforce and able to access all of the user’s data and other applications so the difference is nugatory to the user.

    How is this different from running the Java application on a hosted server in the sky?  Well, first off, the server in the sky runs like the server in the data center, which is to say it is walled off from the rest of the world, and integration with a cloud (or any other application) requires a more tedious and conventional integration process (read time and money here).  With VMForce, by contrast, you get much simpler and less costly integration and the ability to run concurrently in one cloud environment.

    Is this a private cloud?  I guess so but only to the extent that by providing an instance of Force.com to any user, that user has a “private” cloud that just happens to be integrated to the rest of the world.

    Is this a departure for Salesforce?  That’s debatable, but I lean towards saying yes here, so I am calling it ’smulti-tenant.  I’m not very concerned about doctrinal purity here.  The facts as I see them are that this approach merges legacy applications into the future of cloud computing.  The alternative, moving your data center to the sky or using infrastructure as a service, does nothing to move legacy code into the future.  It just changes the location of the private data center.

    As an engineering proposition this is elegant, like making the strongest and lightest airframe or bridge.  If Cole Porter were a Java programmer I am sure he’d say ‘swonderful, ‘smarvelous.

    Published: 14 years ago


    I got this long comment on yesterday’s VMForce post and I disagree with some, but not all, of it.  Rather than just posting it and letting it run, I thought I would comment, using it as the Q part of a Q&A.  Here it is.

    “Great analysis on vmforce announcement.”

    Ok, you didn’t expect me to disagree with everything did you?

    “At the outset vmforce will benefit vmware by providing their Java developers instant access to cloud services.  Also vmforce will benefit salesforce.com by increasing adoption of Force.com platform among Java community.  From developers perspective I believe today’s announcement is a ground breaking one that is a win-win for both salesforce.com and vmware.”

    This is true, though as I told John Pallatto at eWeek, I think the early adoption will be among Java developers employed at companies already using Force.com, and possibly ISVs and SIs who have catalogues of Java code they’re just itching to deploy in the cloud.  There are six million Java developers and even one percent of them represents a lot of code.

    “However today’s announcement is short on details around business deployment scenarios and pricing models.  I hope today’s announcement is just an incremental step towards a much larger strategy on open cloud infrastructure that addresses some concerns around cloud computing like security and scalability for enterprises.”

    Salesforce has a well-documented and time-tested approach to its announcements.  Like a good baseball team that looks fresh and crisp on the field, it gets there by executing well on the fundamentals.  For Salesforce, this means the old rule of three: tell them what you are going to do, do it, and then tell them what you did.  The company has run this formula many times, for every major advance, and VMForce is no different.  This was round one.

    I’m not sure where the writer gets the issue of security and scalability for enterprises.  These were issues in the middle of the last decade that haven’t been raised recently because they went away.  I noticed at the NetSuite partner meeting a couple of weeks ago that CEO Zach Nelson was dealing with these issues in ERP, but they’re about as valid in ERP as they were in CRM.  Salesforce has many enterprise customers with thousands or tens of thousands of users; what more does it need to do to put the scalability issue to rest?  And when was the last time your credit card information was stolen from a Salesforce application?  I’ve got a lifetime subscription to credit monitoring from all of the enterprises with conventional IT that lost my credit card information.  And there are dozens of companies in Silicon Valley that have lost valuable IP to state-sponsored IT pirates from China.  They had conventional security too.  Sorry, but I just can’t buy the argument that SaaS is not scalable or secure.

    “Currently salesforce.com lacks support for private clouds for enterprises and only supports public clouds through its hosting services.  Majority of enterprise customers will be reluctant to migrate to public clouds until concerns around security are addressed.  Industry market trends point to migration towards “private/managed” clouds by enterprise customers in the next 3-5 years time-frame.  I hope salesforce.com will announce products or partner with companies like vmware to fill this gap.”

    Show me your 3-5 year data please.  I see 1.4 million users and 70,000+ companies using Salesforce.  Let’s call them the early adopters if we must.  I am sure some enterprises will not migrate to the cloud no matter what.  My prediction is that they will all eventually be headquartered in some inland Rocky Mountain state, dig bomb shelters and join the NRA.

    But I digress.  The definition of public and private clouds is something invented by the part of the industry that thinks a cloud is infrastructure as a service, or IaaS.  I don’t agree.  IaaS is a data center in the sky, but it’s not cloud computing.  To be a legitimate cloud you really need platform as a service (PaaS) and software as a service (SaaS) as well.

    Then there’s that security bogey again.  A data center in the sky will still lose my credit card just as well as one on land unless enterprises devote many more resources to the job, something they won’t spend sufficient money on.  SaaS and cloud providers spend on this as a matter of course.  The private/managed clouds idea took a big hit yesterday when VMware and Salesforce showed how to bring a Java application to the security of a scalable VMForce virtual server.

    Finally, Force.com has always offered private server capability; that’s what you get when you run your enterprise on an instance of Force.com.  What the writer is alluding to when he says “private” is more about single-tenant, dedicated hardware in the data center in the sky.  Of course this approach destroys much of the rationale and cost savings of cloud computing, so why go there?

    “In addition current public cloud deployments models have some limitations on scalability and performance front.  While multi-tenancy is good from h/w utilization perspective, due to the inherent sharing model it is not ideal for compute intensive and complex data processing applications.  Until this limitation is addressed few enterprises will be willing to migrate to public clouds.”

    Now it’s scalability and performance.  Ok, you’ve got me, the cloud is probably not the place to do molecular modeling or tornado tracking, though the human genome project made great and creative use of spare PC cycles unraveling DNA.  So, all of you molecular modelers and complex calculation buffs are excused from class.  The rest of us will be just fine running our business applications in the cloud, I suspect.

    “I hope trailblazers like salesforce.com and vmware will address these limitations around public clouds soon.  On a final note we seem to have a new “— as a service” acronym pop up every other day.  Unless each and every one of these services is tied to “customer value proposition” we will just end up with technical jargon that only confuses the end customer.”

    I think we did all that yesterday, at least at the announcement I attended.

    Published: 14 years ago


    CRM took a significant turn in the last few years, neither good nor bad, really, but significant.  The turn took us from what I will call core CRM, which primarily dealt with business-to-business (B2B) interactions, to social CRM, which deals with end customers or consumers, a.k.a. B2C.

    Now, it would be wrong to assert that core CRM never addressed consumers or that social CRM has nothing to say about the B2B world, but the emphasis has shifted.  Interestingly, both core and social CRM place an important emphasis on the call center and on resolving customer issues, for instance, and that idea really straddles the two worlds.  But the dominant part of the conversation is now social and consumer oriented.

    It wasn’t always like that.  CRM was born from a frustration that sales managers had with knowing what their people were really doing.  That drove the emergence, if not the popularity, of SFA.  Sales force automation was, after all, an attempt to corral the freewheeling activities of salespeople as much as it was an attempt to capture and make sense of the reams of sales data they produced.  All of this existed against a backdrop of expanding markets where new products and new categories bloomed like a desert after a freak rainstorm.

    But if you look at the marketplace in the last several years, it looks like a desert again.  Demand cratered with the economy and companies that were able to make money were those who could understand demand patterns of individuals and cater to them.

    James Surowiecki, author of “The Wisdom of Crowds,” made an astute observation in his New Yorker column a couple of weeks ago when he showed that there are now two centers of activity in the marketplace where there had been only one for a long time.  I think of this as the camel theory: one hump, dromedary; two humps, Bactrian.  Say what?  Sure, it goes like this.

    The conventional market we all know and love or at least tolerate can be represented as a bell curve, a single hump representing the range of quality, demand and ability to pay.  For years, vendors simply aimed their offerings at the middle of the bell curve and pretty much scored big.  If you couldn’t be successful that way your product or strategy was badly flawed.

    But then something happened, and rather than having one hump to contend with, we suddenly had two.  The Bactrian market’s humps cover two market types.  The first provides mass-produced, good-enough products that are affordable by most people.  The long period of new product development and category formation has, at least for the moment, paused, leaving us with many fine products in the commoditization process (Hump 1).  The result is a plethora of choices of things that, while they are not customizable, are good, reasonably priced products.  Surowiecki includes H&M clothing stores and Flip cameras in this category, and there are many others.

    The second hump also features mass-produced goods, but these products usually come with more features and functions and are generally coveted by all of us.  Think about Apple products when you think about the second peak.  You can make a good argument that you don’t need an iPad or an iPhone or an iMac.  There are cheaper products in the first hump that meet the need, but somehow we manage to shell out the extra money.  Apple is not alone either.  We go to Starbucks for coffee and buy luxury cars when there are less expensive alternatives, too.

    What’s interesting here is that the two-hump marketplace has made the single hump variety untenable.  If you try to stay in the single hump world you find that your customers have gone elsewhere and you are in danger of going out of business.  None of this should surprise us though.

    In 2005 Geoffrey Moore wrote about the two-humped beast in a slightly different context.  Moore said that the first hump represented companies that had to rely on operations and efficiency.  They are the vendors who Surowiecki sees as offering good commoditizing products for a market most concerned with price.

    The second hump in Moore’s construction comprises companies that rely on customer intimacy; they still sell products that need a bit of handholding, if only because the attention connotes value in the eyes of the customer (think Genius Bar).  Note here that Joe Pine’s vision of mass customization is bearing fruit, if only because the customization is happening via personal attention while the vendor is still selling a mass-produced good (maybe this is Pine’s other idea, the good as experience).

    Understanding who their customers are remains among vendors’ greatest challenges.  The young woman who shops at H&M might take a break from shopping at the Starbucks next door, for instance, where she might call a friend on her iPhone.  That’s where social media and enterprise 2.0 ideas come in handy because no vendor can afford to lavish attention on customers like they did in the old days.

    The difference between the two humps may be in how or where social strategies are applied.  Denizens of the first hump use social strategies to identify their customers and their needs.  The second hump cohort may use social strategies internally to marshal resources to meet the needs of the up-market buyer.  As the economy improves, I expect more attention will be paid to the B2B side of the house and the second hump.

    Published: 14 years ago


    First take

    Salesforce.com and VMware hailed the next generation of software development and deployment today at a joint announcement in San Francisco.  The two companies introduced VMForce, an integration of VMware’s powerful Java development platform with Force.com, the Salesforce application platform for cloud computing.

    The significance of the announcement is manifold.  First, it opens up access to cloud computing to more than six million Java developers worldwide.  When delivered later this year, VMForce will enable new or existing Java applications not only to access data stored in the Salesforce cloud but also to be deployed as standard Java applications using Force.com as a Java server.  The effect will be to bring legacy Java applications into cloud computing.

    Second, from a business perspective, the announcement stands to accelerate migration of legacy Java applications to cloud computing.  This should remove or lower barriers that many enterprises have for migrating their legacy applications.

    Third, if this approach is robust and successful (a caveat we have to make about a product that has not been released yet), it stands to enlarge the gap between true cloud computing and a resurgent ASP movement.  True cloud computing consists of Infrastructure as a Service (IaaS), Software as a Service (SaaS) and Platform as a Service (PaaS).  The resurgent ASP movement is largely defined as providing IaaS only.

    One thing that remains cloudy (sorry) is whether a transformed Java application running on VMForce inherits the multi-tenancy that every other Salesforce cloud application has.  If not, VMForce reduces Force.com to the status of a simple server.  This would be a big departure for Salesforce and something that was not alluded to in the presentation.  But it is a question that ought to be asked.

    Another question worth pondering: What’s next?  VMForce for ABAP?  Whoa!  Could happen, I guess, and that’s the significance of this announcement, I think.

    Published: 14 years ago