Facebook

  • May 2, 2018
  • Adam Smith famously referred to “the invisible hand” of the free market in his landmark book “The Wealth of Nations,” and with that made himself one of the very first political economists. Smith’s observation was so on point that today most of us assume markets run through the agency of individuals pursuing their enlightened self-interest. Much of that same thinking drove the evolution of CRM as a tool for tracking customers.

    If you pay attention today, you can notice a not-so-invisible hand functioning in multiple areas. For instance, if you’ve been following the aftermath of the school shooting in Florida, you know that an activated group of kids and adults nationwide has begun a movement to get something done about gun safety. The #MeToo movement, in which women are banding together to change the workplace by eliminating sexual harassment, is another, and so is the Black Lives Matter movement. But you can argue that all of them are free market responses in that they arose from the grassroots without much prompting from elite power centers.

    What each has in common is the initiative by engaged individuals to cause change in what are essentially marketplaces in the broadest conception of that term. Much closer to home, even in the technology world we’re seeing the stirrings of user dissatisfaction with social media and it’s not clear where this will go. Its impact on CRM could be big because social has become one of the key channels linking vendors and customers.

    A recent article in Wired, “Facebook Doesn’t Know How Many People Followed Russians on Instagram” by Issie Lapowsky, documents Facebook’s foot-dragging on producing information for the various inquiries surrounding the 2016 American election. Jonathan Albright of Columbia University’s Tow Center for Digital Journalism has been looking at the details and producing information that’s uncomfortable for Facebook. He’s been quoted in Wired, The New York Times and elsewhere.

    Albright’s work has uncovered many things concerning Facebook’s approach to the investigation, which you might call passive-aggressive. For instance, when asked why it had not produced information about how many people had seen Instagram content produced by Russian-operated troll accounts, a spokesperson for Facebook, which owns Instagram, said, “We have not been asked to provide that information.” So little curiosity…

    It’s not necessary to repeat the article here; it’s worth reading but that’s your call. It documents how Facebook has so far assisted investigators but only if they ask the right questions. The final paragraph summarizes this point,

    Facebook has shown consistent reticence in detailing how these trolls infiltrated its platform and who that propaganda reached. They’ve repeatedly had to correct prior statements about the reach of these ads and accounts. By working with outside researchers like Albright, the company might be able to paint a more complete picture, but Facebook has been unwilling to open its data up to researchers.

    It’s not necessary to re-examine every time Facebook denied its involvement or disputed findings that upwards of 150 million people saw content from the Russians, or that all the US intelligence services agree that the Russians did indeed hack the election. That’s all very interesting from another journalistic angle, but not this one.

    The totality of Facebook’s unwitting involvement in the hack, plus its efforts to downplay the hack’s importance, brings up a bigger issue for Facebook and, by extension, all social media: How useful are Facebook and social media generally, considering the Russian hacks?

    A glib answer might be that it doesn’t have to be terribly useful because it’s free and users get whatever utility they can from using it. But that misses the point. If Facebook’s utility is small or especially if it’s disputable, its business model would be in serious trouble.

    Social media’s primary product has always been the user. It is valuable to each of us when we use it to gather information about our personal graphs, and we knowingly pay an in-kind fee by letting social sites collect data about us, which they can then sell to advertisers. It’s a classic network effect—the greater the audience, the more valuable its data becomes.
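
    The network effect described here has a common, if contested, formalization in Metcalfe’s law: a network’s value grows roughly with the square of its audience, because each new member can connect with every existing member. A minimal sketch (the function name is invented for illustration):

```python
def potential_connections(users: int) -> int:
    """Distinct pairs among n members: n * (n - 1) / 2."""
    return users * (users - 1) // 2

# Doubling the audience roughly quadruples the possible connections,
# which is why each new user makes the aggregate data more valuable.
print(potential_connections(1_000))   # 499500
print(potential_connections(2_000))   # 1999000
```

    The quadratic growth in pairwise connections is one way to see why data about a larger audience is disproportionately more valuable to advertisers.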

    But what happens if the veracity of information on social media is in doubt? Social media’s value is directly proportional to its veracity and if one can doubt that veracity it might be prudent to seek alternatives. People and corporations that invest heavily in using social media’s information might begin doubting if their investments deliver value.

    My thoughts

    So far Facebook’s approach to the hacking scandal has been to deny and ignore it, only admitting something when there’s no other choice. This presents another problem associated with stonewalling—dissipating trust. However unpleasant the facts, the more a party tries to ignore or hide them, the lower the market’s trust in that entity and the greater the opening for a disruptor.

    The truth value of what people put up on the networks and what they believe about the truthfulness of others’ posts makes social media’s world go around. That truth is what makes some people spend hours a day surfing the sites and it’s what makes advertisers purchase ads. Once that trust begins to erode, even a little, the business model can begin to unravel.

    Whoever is advising Facebook on its strategy should reconsider. It’s human nature to not enjoy dealing with criticism and serious accusations. But impeding the free flow of information won’t solve the problem. Free markets depend on transparency, and Facebook is a free market of ideas. If it stops being that, or even if people just stop believing it is, there’s no reason for them to continue using it.

    Published: 3 months ago


    Ok, this is kind of long. Go get a cup of coffee.

    Amid the anxiety and revelations of the Russia scandal, including the Cambridge Analytica story that showed how easy it was to steal 50 million Facebook user profiles, it’s easy to mix up cause and effect. Importantly, Facebook wasn’t hacked or broken into; it was used as it was designed.

    This has led some to question whether Facebook as such can exist at all in our pluralistic society while others believe the problem of surreptitious psychographic profiling will blow over once everyone plays by the same rules. After all, others have argued, other entities do the same thing. They point to Google, Amazon and even the traditional print industry as culprits for gathering personal data for analysis and, it should be said, weaponization.

    Of course, the issue is manipulating and weaponizing the data. If we can’t trust the data, then we are disassembling one of the pillars of democracy, the acceptance of scientific rationalism. Boiled down, it means facts are facts even if you don’t like them.

    If you remember a time before social media, when identities were not so readily stolen, and you think that reality was good, you might recoil at the thought that those good old days are gone, that things are now permanently different. There is a third option, though, and probably many more, that seek to balance the benefits of new technology with the protections we’ve grown accustomed to.

    This article can’t be all things to all those people, but it attempts to find safe harbor in a storm and therefore makes accommodations. If we can’t live with the compromises, perhaps it can at least point out some of the major obstacles to be overcome.

    Business model

    It is an article of faith that Facebook’s business model, as well as those of other social networks and search engines, is selling advertising. But it is my contention that this model has run its course. It was effective when the companies were smaller, when their consumers were more innocent about the ways technology can be used for both good and ill. The advertising model was even necessary in a time when the Internet was new and finding people and things online was still strange.

    The advertising business model was a default that data aggregators took on the way to phenomenal profits, and who could blame them? The tech sector has a habit of minting money, and the founders of social media and search engines were merely the latest in a long line of prolific brainiacs who struck gold. It is hard to believe that any human in a similar situation would act much differently.

    The latest dustup that dragged social media into the political spotlight now presents two choices to these businesses. They can hobble their products, which could reduce the amount of data they collect making them less interesting to advertisers, or they can change their business models slightly to prevent unethical use of their networks.

    Disruptive innovation

    Anytime a new technology reaches the market, there is a possibility that it will disrupt the existing order of things. Disruptive innovations have coexisted with capitalism since its origins in the Industrial Revolution. Disruptive innovation means making thread and then cloth with high-speed mechanical means, making a steam engine powerful and small enough to be mobile, or making a computer that could fit on a sliver of silicon about the size of your thumbnail.

    The world changed with each of these disruptive innovations and others, because they immediately made an old order irrelevant and they organized whole economies and even civilizations around new driving forces. The Internet and its children are the latest innovations that have rocked the world. In each, humanity has had to grapple with both the benefits and the deficits of the innovations.

    So far, we’ve benefitted enormously from these innovations, but recently we discovered their less sanguine side. If history is a guide, then regulation in some form is a likely next step. Some leaders in Congress have already broached the idea on several occasions, but it’s important to get the idea right before pulling the trigger, which is why we need to discuss business models.

    Regulation?

    Regulation could happen in social media and search but there’s much that the technology companies can do to either avert it or ensure that its mandate is as light and congruent with company interests as possible starting with the prevailing business model.

    Although the advertising business model has served many companies well, those companies have morphed into data companies with big responsibilities for safeguarding the data they collect, and that’s not something they’re eager for.

    The big data-gathering companies like Facebook, Amazon, and Google, and their competitors, have become data companies first and advertising vendors second, and if this had been understood sooner, many data breaches would in all likelihood have been thwarted. Rule One of business is never give away your product; it’s what you charge for, because it pays the bills. Applying the rule here should be as obvious as encrypting user data. Additionally, no would-be user of the data should be able to access it in unencrypted form without, of course, paying, but more importantly without presenting valid credentials and stating a beneficial and productive purpose for the use.
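
    The rule sketched here, data encrypted at rest and released only to credentialed callers who state a purpose, can be illustrated in a few lines of Python. Everything in this sketch is invented for illustration: the DataBroker class, the credential strings, and especially the toy XOR keystream, which merely stands in for a vetted cipher such as AES-GCM.

```python
import hashlib


def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR 'cipher' keyed by a SHA-256 counter stream.

    Illustrative only: a real deployment would use a vetted
    authenticated cipher such as AES-GCM, not this construction.
    """
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))


class DataBroker:
    """Holds records only in encrypted form and decrypts solely for
    callers presenting a registered credential and a stated purpose."""

    def __init__(self, key: bytes):
        self._key = key
        self._vault = {}          # record id -> ciphertext
        self._credentials = set()

    def issue_credential(self, credential: str) -> None:
        self._credentials.add(credential)

    def store(self, record_id: str, plaintext: bytes) -> None:
        self._vault[record_id] = _keystream_xor(self._key, plaintext)

    def fetch(self, record_id: str, credential: str, purpose: str) -> bytes:
        # Rule One applied: no credential and no stated purpose, no data.
        if credential not in self._credentials or not purpose.strip():
            raise PermissionError("valid credential and stated purpose required")
        return _keystream_xor(self._key, self._vault[record_id])
```

    The point of the sketch is the shape of the gate, not the cryptography: data never sits in the vault as plaintext, and the decrypt path cannot be reached without both a credential and a declared purpose.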

    I’ve written before about credentialing and how it’s actually harder to pull permits to remodel your kitchen than it is to advertise any message you want on social media, so I won’t perseverate. Let’s turn to encryption.

    Security as a business model

    Social and search’s business model must turn from advertising to data management, curation, and selling access, and we live at precisely the moment when these activities are possible at very large scale. This includes encryption and the same form of certification that applies to other professionals, from doctors to beauticians and plumbers.

    Encryption and decryption take time and require compute and storage resources, and cost considerations have often cut short discussions about them. But new, shall we say disruptive, innovations in computer hardware and software are reigniting the discussion.

    In hardware, data storage was long accomplished with the hard drives of most computer systems. Data enters and leaves storage on millisecond time scales, which is very fast. However, computer CPUs and memory operate a million times faster, at nanosecond speeds. CPU chips spend a lot of time waiting for data to become available, even when, as in most modern computer systems, frequently used data is cached in memory.

    Innovative hardware designs now offer solid-state memory devices that replace disks. This memory operates at nanosecond intervals and eliminates the lag time of older mechanical systems. What should we do with all of this newfound speed? One possibility might be to dedicate a small portion of it to encryption. Typical encryption modes on the market right now could be broken, but doing so would take so many years that the resulting data, when finally available, would be useless, and encryption keeps getting better.
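
    The gap described above can be made concrete with rough arithmetic. The latency figures below are illustrative round numbers, not measurements of any specific device:

```python
# Rough, illustrative latency figures: a mechanical disk seek takes
# milliseconds, a solid-state access takes on the order of nanoseconds.
DISK_SEEK_S = 5e-3     # ~5 ms per hard-drive seek
NVME_READ_S = 100e-9   # ~100 ns per solid-state access
CPU_CYCLE_S = 1e-9     # ~1 ns per cycle on a GHz-class core

# Cycles a CPU idles away waiting on each kind of access.
print(round(DISK_SEEK_S / CPU_CYCLE_S))   # cycles per disk seek: 5000000
print(round(NVME_READ_S / CPU_CYCLE_S))   # cycles per solid-state read: 100
print(round(DISK_SEEK_S / NVME_READ_S))   # roughly a 50000-fold difference
```

    With millions of cycles reclaimed per access, spending a small fraction of them on encrypting and decrypting data at rest becomes an easy trade.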

    Encryption would be a good thing, but it wouldn’t solve every problem; securing our information infrastructure so that it operates at utility grade requires other changes. Bad software, malware, viruses, Trojan horses, and the like may still get into systems.

    Mark meet Larry

    As luck would have it, free markets generate inventions faster than they can be adopted. Often a disruptive innovation exists at the nexus of several disruptions that just need one more critical piece. That’s the case with many of the system-level inventions that Oracle has brought to market over the last several years. It has pioneered important developments in solid-state storage, encryption, chip sets that weed out intrusive malware, and a self-patching autonomous database that just hit the market.

    All of these things turn out to be essential to safeguarding data which will enable the information revolution to continue burrowing its way into our lives and enriching society. They are also the underpinnings of a new business model that turns big data companies into ethical data providers. They might also continue being social media companies but the data tail would now be wagging the dog.

    My two cents

    What do I know? I just read and write a lot. But what I see is an industry about to be regulated and, in my mind, the smart play is for the social media companies to lead the charge to ensure they arrive at something they can live with instead of remaining aloof and having some regulations imposed on them.

    There’s a wild west mentality in Silicon Valley in which what isn’t proscribed is encouraged. But we should keep in mind that the west only remained wild until the pioneers arrived and established towns with roads, schools, and churches. The wild bunch might have disliked the idea of settlement, they might have opposed it, but they were quickly in the minority and civilization won. That’s what’s happening in tech today and we all need to seize the moment.

    Published: 3 months ago


    The multiple issues/scandals/problems facing companies in Silicon Valley could drive you to ask if the wheels are falling off the collective tech wagon. A recent article in the DealBook section of the New York Times asks the question,

    Are we witnessing the end of a mania?

    Investors, always willing to believe in technology companies, spent the last three years piling into the shares of companies like Facebook, Amazon and Netflix with special abandon. Now the intellectual underpinnings of the tech rally are being seriously tested.

    But maybe this is just the way disruptive innovations evolve. To be sure, if it takes 50 to 60 years for a disruption to work its way through society and the economy, as I have documented in discussions of long economic waves called K-waves, then it’s quite likely that we’ve seen this kind of thing before, say in the 1960s. You remember the ’60s, right? That’s the point.

    Because there’s little living memory of the last time, today’s happenings look unfamiliar. A disruption grows and grows, consuming everything in its path until there’s nothing left to consume, and at its peak the disruptors become the disrupted and accommodations must be made. Often they are initiated by standards and even regulation—admittedly a dirty word in Silicon Valley and environs, but one that must be said.

    In the more genteel days of the 19th century, the disruption of the day, electricity, had its own peaking moment, pitting uber-genius Thomas Edison, a proponent of direct current (DC) standards, against the lesser-known geniuses Nikola Tesla and George Westinghouse, who favored alternating current (AC).

    Edison was not above a smear to defend his viewpoint. When a condemned man was electrocuted using alternating current, Edison got his publicists to influence a headline saying the man had been “Westinghoused.” Those were the days. Nonetheless, facts are stubborn things, and AC won the day because it was simply a better standard for long-distance transmission, something the emerging electricity industry had to have.

    Through standards setting and regulation, the electric industry became a ubiquitous and standardized service that is accessible to all and this set the stage for further growth of the industry and the economy. Ironically, the standards and regulations were nothing that individual business people would have readily agreed to.

    The Times article asks in a roundabout way if the tech sector is in a similar moment. While the sector comprises businesses as different as Facebook, Apple, and Google, to name a few, Facebook and social networks are reaping a healthy share of skepticism, and for good reason, so let’s concentrate there. Another Times article may have pinpointed the peak moment when social media went from the disruptor to the disrupted,

    Shortly before the election, a senior official with the Trump campaign bragged to the Bloomberg reporters Joshua Green and Sasha Issenberg, “We have three major voter suppression operations underway,” which the article described as targeting “idealistic white liberals, young women, and African-Americans.” Brad Parscale, who ran the Trump campaign’s digital advertising, is quoted in the same piece discussing his plan to use dark ad posts of an animation of Hillary Clinton referring in 1996 to some African-Americans as “super predators.” Parscale suggested that the campaign would use this image to discourage a demographic category described by the reporters as infrequent black voters in Florida. “Only the people we want to see it, see it,” he explained. “It will dramatically affect her ability to turn these people out.”

    Dark ads display once to a specific audience and then disappear though they’re still in the vault. But guess what, the article goes on to say that,

    the dark ads have disappeared and Facebook won’t release them, citing the privacy of its advertisers.

    Zuckerberg might say that trust is important and he might spend a few bucks on grandiose full-page newspaper mea culpas but he and his company are still remarkably tone deaf. Did the suppression effort work? You be the judge,

    The election of 2016, the first after Barack Obama’s presidency, was notable for a seven-percentage-point decrease in African-American turnout, from 66.6 percent in 2012 to 59.6 percent, according to the Pew Research Center.

    The first decline in 20 years and the largest decline on record.

    This isn’t solely an American phenomenon. The news shows that the effort in 2016 was international, with companies like Cambridge Analytica employing many non-Americans to sort data, create psychographic profiles, and generally influence the US election and possibly other contests like the Brexit vote.

    My take

    We’re at the moment when attention turns to regulation in social networking and elsewhere. Facebook’s focus on the individual’s rights (privacy of its advertisers!) rather than the potential harm it can cause to a whole society represents a blind spot that will interfere with any solo attempt to rectify the situation. That’s why self-regulation rarely works without a mandate from government.

    It’s possible to regulate social networks through a system of encryption, certification, and a modicum of tracking. Doctors, lawyers, plumbers, beauticians and many other professionals all submit to systems of practitioner licensing and professional standards, and utilities are regulated. That’s a workable model for social networks.

    It’s currently easier to attempt to influence millions of people about consequential issues than it is to pull the permits to make an addition to one’s home. It shouldn’t be that way.

    The momentum in the halls of legislatures around the world right now is toward regulation. The social networks should welcome it and work with governments to reach a workable compromise which includes a standard set of regulations that apply over broad swaths of the planet. Regulation will do for the industry what it can’t do for itself and that’s exactly what the society needs.

    Published: 3 months ago


    I was a guest on the Gillmor Gang last Friday, hosted by Steve Gillmor and available for streaming on TechCrunch here. If you’ve never had the pleasure, it’s an hour of discussion at the nexus of technology, business, and current events, and well worth seeing.

    For the last few weeks we’ve devoted time to how we all should react to the revelations about Russian intelligence services attacking Facebook during the presidential election. The conversation has evolved over that time, in part because there are new revelations every week and as the plot thickens our response has become more nuanced.

    For instance, when we started the discussion we knew that Facebook had been the advertising medium of choice for the Russians but that turned out to be only part of the story. In the intervening weeks we learned about the role of Cambridge Analytica in stealing user profiles. Technically we’d have to say there was no theft and that at the time Facebook was running exactly as it intended. I don’t know anyone who sleeps well knowing this. But it adds layers to our discussion and recommendations.

    Last week I advocated, as I had in some blogs on BeagleResearch.com, that we’ve entered a time when we must seriously consider regulating Facebook and all social media and treating it all like a utility. You can read more here.

    But back to the Gillmor Gang. There was a variety of opinions about what to do, and to my surprise, regulation was not top of mind for anyone other than me. That’s okay, though. The discussion was lively and raised points that I had not considered. What do you think? Take a look.

    Published: 4 months ago


    If you study economic cycles, you can watch the evolution of a disruptive technology throughout its lifecycle, from a specific product to a competitive industry. The last phase in the evolutionary chain is often the formation of a utility. For example, over a couple of centuries we’ve seen the evolution of electricity from a curiosity, to a business, to a group of public companies. Along the way there are the inevitable mergers and acquisitions that enable a winnowing field of competitors to achieve the scale needed to compete in very large markets.

    It wasn’t just the electric industry that went through this evolution. The telephone, gas, and cable industries did too, in their own ways. Local or regional utilities that provide sanitation and water still dot the landscape as well. Banking is in a similar position, though it manifests differently. In fact, any industry that attracts the term “too big to fail” is showing signs of utility status.

    When your business becomes so big that it affects large segments of society it can’t be allowed to fail lest it crater the economy or cause massive disruption injuring many people. At that point government has a compelling interest in preventing failure and along with that comes regulation of the riskiest corporate behaviors.

    The latest example to hit the radar might be social media which has completed many steps of the lifecycle with blistering speed in just over a decade. This speed notwithstanding, we are now at a point where what happens in social media affects all of us.

    Regulation is a thorny issue wrapped in individual freedom. But it is also a logical way out of an impasse. The recent interference in the US election, which all the American intelligence agencies have confirmed, is a proof point that social media is now a utility and needs some form of regulation.

    In a recent article in Wired, “Bad Actors are Using Social Media Exactly As Designed,” writer Joshua Geltzer makes the point that popular social sites, including Facebook, Twitter, and Airbnb, all provide tools that enable users to find and segment groups that are best subjected to targeted messaging. His point is simple: the bad actors didn’t pervert social media or hack its code. They simply used the tools provided to mount a campaign to upend a US election, and there’s evidence of similar activity elsewhere.

    By this measure, social media is now too big to fail; it is too essential to a large segment of society and its potential excesses must be managed so that it does not consume the society and the users who depend on it. Geltzer’s article clearly, but inadvertently, makes the case,

    When Russia manipulates elections via Facebook, or ISIS recruits followers on Twitter, or racist landlords deny rentals to blacks and then offer them to whites through Airbnb, commentators and companies describe these activities as “manipulation” or “abuse” of today’s ubiquitous websites and apps. The impulse is to portray this odious behavior as a strange, unpredictable, and peripheral contortion of the platforms.

    But it’s not. It’s simply using those platforms as designed.

    So, what would a social media utility look like and how would it be different from what we see today? First off, social networks should be regulated with as light a touch as possible. Grandmothers sharing baby pictures shouldn’t have to change their use habits, for example. Second, to achieve positive ends, regulation should be implemented at two distinct levels, the source and the periphery.

    At the source, regulation comes down to access for any person or entity with a beneficial and productive need for the utility’s services. In the electricity markets this means stable pricing for all and a commitment to serve as a common carrier. It wouldn’t be much different with social network regulation; the key is beneficial and productive use.

    Common carrier law began in railroads and shipping. In return for its use of public lands and roads, the carrier commits to serve all parties equally. In the broadcast industries (radio and TV), slices of electromagnetic spectrum play the role of roads that the broadcasters use as grants (licenses) from the people. In transport, the waterways are owned by the people, as are the roads, and railroads have historically received government help because they provide a useful service to society. The list goes on, but you can see that source regulation amounts to giving all participants a fair shot at using the public’s assets and insisting on beneficial and productive use.

    Regulation at the periphery takes on a different cast, notably in America where so much of modern utility regulation evolved. A great deal of peripheral regulation occurs through certification and licensure. People interested in careers involving one of the utilities often serve apprenticeships, learning from a master before earning journeyman’s status. They must also pass tests to prove their knowledge and skill.

    Barbers, beauticians, and other personal services professionals go to school and sit for certifying exams. Other professions are similar. Doctors, dentists, lawyers, and many others must take many years of education, pass tests, and serve different forms of internships before practicing on their own.

    The point here is that we already regulate the day-to-day best practices of many industries. We do it with a light touch and in the interest of the culture, and society functions quite well despite, or more likely because of, this light approach to regulation.

    My take

    Perhaps the time has come to consider lightly regulating parts of the tech industry, something we have never done. But the age of information and telecommunication, a 50-year economic cycle called a K-wave, is reaching its natural endpoint, and that’s often when utility status and regulation have come to the forefront in prior cycles.

    Elevating social media use to professional status seems a logical thing to do. Establishing a certification or licensing process, plus capturing a user’s license number when accessing some of social media’s higher functions, would give us an uncomplicated way of keeping bad actors out of the networks, or at least making them traceable. In case you are wondering, this is the basic process of getting a building permit.

    This approach need not apply to lower-level personal use. But trying to reach millions of people on a social network is functionally like climbing a utility pole and messing with the wires. For this, one should need certification.
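
    The threshold described here can be sketched in a few lines. The registry, license format, and audience threshold below are invented for illustration, not drawn from any real platform’s API:

```python
# Hypothetical certification gate: the registry, license format, and
# threshold are illustrative stand-ins for a real licensing system.

CERTIFIED_LICENSES = {"SM-2018-0042"}   # licenses issued after exams
MASS_REACH_THRESHOLD = 10_000           # audience size that needs one


def publish(message: str, audience_size: int, license_number: str = "") -> str:
    """Let ordinary personal posts through freely; require and record
    a valid license number for mass-audience messaging."""
    if audience_size < MASS_REACH_THRESHOLD:
        return "published"
    if license_number in CERTIFIED_LICENSES:
        return "published (certified: " + license_number + ")"
    raise PermissionError("mass-reach messaging requires certification")
```

    Grandmothers sharing baby pictures pass through untouched; only the mass-reach path records a license number, which is what makes bad actors traceable.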

    Last point, part of certification in any industry is training in the ethical use of the tools and techniques of that industry. As a society we have not engaged in such a dialog for social networks yet, but one is overdue.

    Published: 4 months ago