Facebook

  • April 3, 2018
  • Ok, this is kind of long. Go get a cup of coffee.

    Amid the anxiety and revelations of the Russia scandal, including the Cambridge Analytica story that showed how easy it was to steal 50 million Facebook user profiles, it’s easy to mix up cause and effect. Importantly, Facebook wasn’t hacked or broken into; it was used as it was designed.

    This has led some to question whether Facebook as such can exist at all in our pluralistic society, while others believe the problem of surreptitious psychographic profiling will blow over once everyone plays by the same rules. After all, they argue, other entities do the same thing. They point to Google, Amazon, and even the traditional print industry as culprits for gathering personal data for analysis and, it should be said, weaponization.

    Of course, the issue is manipulating and weaponizing the data. If we can’t trust the data, then we are disassembling one of the pillars of democracy, the acceptance of scientific rationalism. Boiled down, it means facts are facts even if you don’t like them.

    If you remember a time before social media, when identities were not so readily stolen, and you think that reality was good, you might recoil at the thought that those good old days are gone and that things are now permanently different. There is a third option, though, and probably many, that seek to balance the benefits of new technology with the protections we’ve grown accustomed to.

    This article can’t be all things to all those people, but it attempts to find safe harbor in a storm and therefore makes accommodations. If we can’t live with the compromises, perhaps it can at least point out some of the major obstacles to be overcome.

    Business model

    It is an article of faith that Facebook’s business model, as well as those of other social networks and search engines, is selling advertising. But it is my contention that this model has run its course. It was effective when the companies were smaller, when their consumers were more innocent of the ways technology can be used for both good and ill purposes. The advertising model was even necessary in a time when the Internet was new and finding people and things was still unfamiliar.

    The advertising business model was a default that data aggregators adopted on the way to phenomenal profits, and who could blame them? The tech sector has a habit of minting money, and the founders of social media and search engines were merely the latest in a long line of prolific brainiacs who struck gold. It is hard to believe that any human in a similar situation would act much differently.

    The latest dustup that dragged social media into the political spotlight now presents these businesses with two choices. They can hobble their products, which could reduce the amount of data they collect and make them less interesting to advertisers, or they can change their business models slightly to prevent unethical use of their networks.

    Disruptive innovation

    Any time a new technology reaches market, it has the potential to disrupt the existing order of things. Disruptive innovations have coexisted with capitalism since its origins in the Industrial Revolution. Disruptive innovation means making thread and then cloth by high-speed mechanical means, making a steam engine powerful and small enough to be mobile, or making a computer that can fit on a sliver of silicon about the size of your thumbnail.

    The world changed with each of these disruptive innovations and others, because they immediately made an old order irrelevant and they organized whole economies and even civilizations around new driving forces. The Internet and its children are the latest innovations that have rocked the world. In each, humanity has had to grapple with both the benefits and the deficits of the innovations.

    So far, we’ve benefitted enormously from these innovations, but recently we discovered their less sanguine side. If history is a guide, then regulation in some form is a likely next step. Some leaders in Congress have already broached the idea on several occasions, but it’s important to get the idea right before pulling the trigger, which is why we need to discuss business models.

    Regulation?

    Regulation could happen in social media and search, but there’s much that the technology companies can do to either avert it or ensure that its mandate is as light and as congruent with company interests as possible, starting with the prevailing business model.

    Although the advertising business model has served many companies well, those companies have morphed into data companies with big responsibilities for safeguarding the data they collect, and that’s not a responsibility they’re eager to take on.

    The big data gathering companies like Facebook, Amazon, and Google, and their competitors, have become data companies first and advertising vendors second, and if this had been understood sooner, many data breaches would in all likelihood have been thwarted. Rule One of business is never give away your product; it’s what you charge for because it pays the bills. Applying the rule here should be as obvious as encrypting user data. Additionally, no would-be user of the data should be able to access it in its unencrypted form without, of course, paying, but more importantly without presenting valid credentials and stating a beneficial and productive purpose for the use.

    I’ve written before about credentialing and how it’s actually harder to pull permits to remodel your kitchen than it is to advertise any message you want on social media, so I won’t perseverate. Instead, let’s turn to encryption.

    Security as a business model

    Social and search’s business model must turn from advertising to data management, curation, and selling access to the data, and we live at precisely the moment when these activities are possible at very large scale. This includes encryption and the same form of certification that applies to other professionals, from doctors to beauticians and plumbers.

    Encryption and decryption take time and require compute and storage resources, which has often cut short discussions involving them on cost grounds. But new, shall we say disruptive, innovations in computer hardware and software are reigniting the discussion.

    In hardware, data storage was long accomplished with the hard drives of most computer systems. Data enters and leaves storage on millisecond time scales, which is very fast. However, computer CPUs and memory operate a million times faster, at nanosecond speeds. CPU chips spend a lot of time waiting for data to become available even when, as in most modern computer systems, frequently used data is cached in memory.
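    The orders of magnitude in that comparison are easy to check. A minimal back-of-the-envelope sketch in Python (the figures are round numbers for illustration, not benchmarks of any particular hardware):

```python
# Rough comparison of mechanical-disk latency to a CPU cycle.
DISK_LATENCY_S = 1e-3  # hard drive access: on the order of a millisecond
CPU_CYCLE_S = 1e-9     # modern CPU cycle: on the order of a nanosecond

cycles_wasted = DISK_LATENCY_S / CPU_CYCLE_S
print(f"One disk access costs roughly {cycles_wasted:,.0f} CPU cycles")
# One disk access costs roughly 1,000,000 CPU cycles
```

    In other words, every trip to a mechanical disk costs the processor about a million cycles of potential work, which is why caching helps but can’t close the gap entirely.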

    Innovative hardware designs now offer solid-state memory devices that replace disks. This memory operates at nanosecond intervals and eliminates the lag time of older mechanical systems. What should we do with all of this newfound speed? One possibility might be to dedicate a small portion of it to encryption. Typical encryption schemes on the market right now could be broken, but doing so would take so many years that the resulting data, when finally available, would be useless; and encryption keeps getting better.
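    To get a feel for the cost, here is a deliberately toy sketch, a one-time-pad XOR rather than a production cipher like AES, timing an encrypt/decrypt round trip in pure Python. The point is only that the arithmetic of encryption is modest next to mechanical I/O, not that anyone should ship this:

```python
import secrets
import time

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy one-time pad: XOR each byte with the key. NOT a real cipher."""
    return bytes(b ^ k for b, k in zip(data, key))

payload = secrets.token_bytes(100_000)   # 100 KB standing in for user data
key = secrets.token_bytes(len(payload))  # one-time key of the same length

start = time.perf_counter()
ciphertext = xor_cipher(payload, key)
recovered = xor_cipher(ciphertext, key)  # XORing twice restores the original
elapsed = time.perf_counter() - start

assert recovered == payload
print(f"Encrypt/decrypt round trip: {elapsed * 1000:.1f} ms")
```

    Real systems use hardware-accelerated ciphers such as AES, which are far faster per byte than interpreted Python, so the overhead argument is even stronger there.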

    Encryption would be a good thing, but it wouldn’t solve all problems. Securing our information infrastructure so that it operates closer to utility grade requires other changes as well. Bad software, malware, viruses, Trojan horses, and the like may still get into systems.

    Mark meet Larry

    As luck would have it, free markets generate inventions faster than they can be adopted. Often a disruptive innovation exists at the nexus of several disruptions that just need one more critical piece. That’s the case with many of the system-level inventions that Oracle has brought to market over the last several years. They’ve pioneered important developments in solid-state storage, encryption, chip sets that weed out intrusive malware, and a self-patching autonomous database that just hit the market.

    All of these things turn out to be essential to safeguarding data which will enable the information revolution to continue burrowing its way into our lives and enriching society. They are also the underpinnings of a new business model that turns big data companies into ethical data providers. They might also continue being social media companies but the data tail would now be wagging the dog.

    My two cents

    What do I know? I just read and write a lot. But what I see is an industry about to be regulated and, in my mind, the smart play is for the social media companies to lead the charge to ensure they arrive at something they can live with instead of remaining aloof and having some regulations imposed on them.

    There’s a wild west mentality in Silicon Valley in which what isn’t proscribed is encouraged. But we should keep in mind that the west only remained wild until the pioneers arrived and established towns with roads, schools, and churches. The wild bunch might have disliked the idea of settlement, they might have opposed it, but they were quickly in the minority and civilization won. That’s what’s happening in tech today and we all need to seize the moment.

    Published: 3 weeks ago


    The multiple issues, scandals, and problems facing companies in Silicon Valley could drive you to ask if the wheels are falling off the collective tech wagon. A recent article in the DealBook section of the New York Times asks the question,

    Are we witnessing the end of a mania?

    Investors, always willing to believe in technology companies, spent the last three years piling into the shares of companies like Facebook, Amazon and Netflix with special abandon. Now the intellectual underpinnings of the tech rally are being seriously tested.

    But maybe this is just the way disruptive innovations evolve. To be sure, if it takes 50 to 60 years for a disruption to work its way through society and the economy, as I have documented in discussions of long economic waves called K-waves, then it’s quite likely that we’ve seen this kind of thing before, say in the 1960s. You remember the ’60s, right? That’s the point.

    Because there’s little living memory of the last time, today’s happenings look unfamiliar. A disruption grows and grows, consuming everything in its path until there’s nothing left to consume; at its peak the disruptors become the disrupted and accommodations must be made. Often they are initiated by standards and even regulation—admittedly a dirty word in Silicon Valley and environs but one that must be said.

    In the more genteel days of the 19th century, the disruption of the day, electricity, had its own peaking moment, pitting uber-genius Thomas Edison, a proponent of direct current (DC), against the lesser-known geniuses Nikola Tesla and George Westinghouse, who favored alternating current (AC).

    Edison was not above making a slur to defend his viewpoint. When a condemned man was electrocuted using alternating current, Edison got his publicists to influence a headline saying the man had been “Westinghoused.” Those were the days. Nonetheless, facts are stubborn things and AC won the day because it was just a plain better standard for long distance transmission, something the emerging electricity industry simply had to have.

    Through standards setting and regulation, the electric industry became a ubiquitous and standardized service that is accessible to all and this set the stage for further growth of the industry and the economy. Ironically, the standards and regulations were nothing that individual business people would have readily agreed to.

    The Times article asks in a roundabout way if the tech sector is in a similar moment. While the sector comprises businesses as different as Facebook, Apple, and Google, to name a few, Facebook and social networks are reaping a healthy share of skepticism, and for good reason, so let’s concentrate there. Another Times article may have pinpointed the peak moment when social media went from the disruptor to the disrupted,

    Shortly before the election, a senior official with the Trump campaign bragged to the Bloomberg reporters Joshua Green and Sasha Issenberg, “We have three major voter suppression operations underway,” which the article described as targeting “idealistic white liberals, young women, and African-Americans.” Brad Parscale, who ran the Trump campaign’s digital advertising, is quoted in the same piece discussing his plan to use dark ad posts of an animation of Hillary Clinton referring in 1996 to some African-Americans as “super predators.” Parscale suggested that the campaign would use this image to discourage a demographic category described by the reporters as infrequent black voters in Florida. “Only the people we want to see it, see it,” he explained. “It will dramatically affect her ability to turn these people out.”

    Dark ads display once to a specific audience and then disappear, though they’re still in the vault. But guess what: the article goes on to say that

    the dark ads have disappeared and Facebook won’t release them, citing the privacy of its advertisers.

    Zuckerberg might say that trust is important and he might spend a few bucks on grandiose full-page newspaper mea culpas but he and his company are still remarkably tone deaf. Did the suppression effort work? You be the judge,

    The election of 2016, the first after Barack Obama’s presidency, was notable for a seven-percentage-point decrease in African-American turnout, from 66.6 percent in 2012 to 59.6 percent, according to the Pew Research Center.

    The first decline in 20 years and the largest decline on record.

    This isn’t solely an American phenomenon. The news shows that the effort in 2016 was international, with companies like Cambridge Analytica employing many non-Americans to sort data, create psychographic profiles, and generally influence the US election and possibly other contests like the Brexit vote.

    My take

    We’re at the moment when attention turns to regulation in social networking and elsewhere. Facebook’s focus on the individual’s rights (privacy of its advertisers!) rather than the potential harm it can cause to a whole society represents a blind spot that will interfere with any solo attempt to rectify the situation. That’s why self-regulation rarely works without a mandate from government.

    It’s possible to regulate social networks through a system of encryption, certification, and a modicum of tracking. Doctors, lawyers, plumbers, beauticians, and many other professionals all submit to systems of practitioner licensing and professional standards, and utilities are regulated. That’s a workable model for social networks.

    It’s currently easier to attempt to influence millions of people about consequential issues than it is to pull the permits to make an addition to one’s home. It shouldn’t be that way.

    The momentum in the halls of legislatures around the world right now is toward regulation. The social networks should welcome it and work with governments to reach a workable compromise which includes a standard set of regulations that apply over broad swaths of the planet. Regulation will do for the industry what it can’t do for itself and that’s exactly what the society needs.

    Published: 3 weeks ago


    I was a guest on the Gillmor Gang last Friday, hosted by Steve Gillmor and available for streaming on TechCrunch here. If you’ve never had the pleasure, it’s an hour of discussion at the nexus of technology, business, and current events, and well worth seeing.

    For the last few weeks we’ve devoted time to how we all should react to the revelations about Russian intelligence services attacking Facebook during the presidential election. The conversation has evolved over that time, in part because there are new revelations every week and as the plot thickens our response has become more nuanced.

    For instance, when we started the discussion we knew that Facebook had been the advertising medium of choice for the Russians but that turned out to be only part of the story. In the intervening weeks we learned about the role of Cambridge Analytica in stealing user profiles. Technically we’d have to say there was no theft and that at the time Facebook was running exactly as it intended. I don’t know anyone who sleeps well knowing this. But it adds layers to our discussion and recommendations.

    Last week I advocated, as I had in some blogs on BeagleResearch.com, that we’ve entered a time when we must seriously consider regulating Facebook and all social media and treating it like a utility. You can read more here.

    But back to the Gillmor Gang. There was a variety of opinion about what to do and to my surprise, regulating was not top of mind for anyone other than me. That’s okay though. The discussion was lively and exercised points that I had not considered. What do you think? Take a look.

    Published: 4 weeks ago


    If you study economic cycles you can watch the evolution of a disruptive technology throughout its lifecycle, from a specific product to a competitive industry. The last phase in the evolutionary chain is often the formation of a utility. For example, over a couple of centuries we’ve seen the evolution of electricity from a curiosity, to a business, to a group of public companies. Along the way there are the inevitable mergers and acquisitions that enable a winnowing field of competitors to achieve the scale needed to compete in very large markets.

    It wasn’t just the electric industry that went through this evolution. The telephone, gas, and cable industries did in their own ways. Local or regional utilities that provide sanitation and water still dot the landscape too. Banking is in a similar position, though it manifests differently. In fact, any industry that attracts the term “too big to fail” is showing signs of utility status.

    When your business becomes so big that it affects large segments of society it can’t be allowed to fail lest it crater the economy or cause massive disruption injuring many people. At that point government has a compelling interest in preventing failure and along with that comes regulation of the riskiest corporate behaviors.

    The latest example to hit the radar might be social media which has completed many steps of the lifecycle with blistering speed in just over a decade. This speed notwithstanding, we are now at a point where what happens in social media affects all of us.

    Regulation is a thorny issue wrapped in individual freedom. But it is also a logical way out of an impasse. The recent interference in the US election, which all the American intelligence agencies have confirmed, is a proof point that social media is now a utility and needs some form of regulation.

    In a recent article in Wired, “Bad Actors Are Using Social Media Exactly As Designed,” writer Joshua Geltzer makes the point that popular social sites including Facebook, Twitter, and Airbnb all provide tools that enable users to find and segment groups that are best subjected to targeted messaging. His point is simple: the bad actors didn’t pervert social media or hack its code. They simply used the tools provided to mount a campaign to upend a US election, and there’s evidence of similar activity elsewhere.

    By this measure, social media is now too big to fail; it is too essential to a large segment of society and its potential excesses must be managed so that it does not consume the society and the users who depend on it. Geltzer’s article clearly, but inadvertently, makes the case,

    When Russia manipulates elections via Facebook, or ISIS recruits followers on Twitter, or racist landlords deny rentals to blacks and then offer them to whites through Airbnb, commentators and companies describe these activities as “manipulation” or “abuse” of today’s ubiquitous websites and apps. The impulse is to portray this odious behavior as a strange, unpredictable, and peripheral contortion of the platforms.

    But it’s not. It’s simply using those platforms as designed.

    So, what would a social media utility look like and how would it be different from what we see today? First off, social networks should be regulated with as light a touch as possible. Grandmothers sharing baby pictures shouldn’t have to change their use habits, for example. Second, to achieve positive ends, regulation should be implemented at two distinct levels, the source and the periphery.

    At the source, regulation comes down to access for any person or entity with a beneficial and productive need for the utility’s services. In the electricity markets this means stable pricing for all and a commitment to serve as a common carrier. It wouldn’t be much different with social network regulation; the key is beneficial and productive use.

    Common carrier law began in railroads and shipping. In return for its use of public lands and roads, the carrier commits to serve all parties equally. In the broadcast industries (radio and TV), slices of electromagnetic spectrum play the role of roads that the broadcasters use as grants (licenses) from the people. In transport, the waterways are also owned by the people, as are the roads, and railroads have historically received government help because they provide a useful service to society. It goes on, but you can see that source regulation amounts to giving all participants a fair shot at using the public’s assets and insisting on beneficial and productive use.

    Regulation at the periphery takes on a different cast, notably in America where so much of modern utility regulation evolved. A great deal of peripheral regulation occurs through certification and licensure. People interested in careers involving one of the utilities often serve apprenticeships, learning from a master before earning journeyman’s status. They must also pass tests to prove their knowledge and skill.

    Barbers, beauticians, and other personal services professionals go to school and sit for certifying exams. Other professions are similar. Doctors, dentists, lawyers, and many others must take many years of education, pass tests, and serve different forms of internships before practicing on their own.

    The point here is that we already regulate the day-to-day best practices of many industries. We do it with a light touch and in the interest of the culture, and society functions quite well despite, or more likely because of, this light approach to regulation.

    My take

    Perhaps the time has come to consider lightly regulating parts of the tech industry, something we have never done. But the age of information and telecommunication, a 50-year economic cycle called a K-wave, is reaching its natural endpoint, and that’s often the point at which utility status and regulation have come to the forefront in prior cycles.

    Elevating social media use to professional status seems a logical thing to do. Establishing a certification or licensing process, plus capturing a user’s license number when accessing some of social media’s higher functions, would give an uncomplicated way of keeping bad actors out of the networks or at least making them traceable. In case you are wondering, this is the basic process of getting a building permit.

    This approach need not apply to lower level personal use. But trying to reach millions of people on a social network is functionally like climbing a utility pole and messing with the wires. For this one should need certification.

    One last point: part of certification in any industry is training in the ethical use of the tools and techniques of that industry. As a society we have not yet engaged in such a dialog for social networks, but one is overdue.


    Published: 1 month ago


    I’ve been a user of social media for more than 10 years, and I was among the first to write about its potential long before there were social media products. The original research on six degrees of separation and the Kevin Bacon game that illustrated the power of social networking fascinated me. I started writing about it and even wrote a paper in about 2002-03 that called for social networking and analytics to become part of the CRM suite. I was a fan of James Surowiecki’s classic, “The Wisdom of Crowds,” and thought that it was a prescription for settling many tricky research questions. But now, approaching 20 years later, I’m dismayed by what social media has become, and I find myself calling for its abolition.

    Okay, social media isn’t going anywhere. The freedom of speech embedded in western democracies will ensure that, even to the point that social media is eroding the very freedom of discourse that supports it. But that only places more responsibility on each of us to ensure that this class of products is used appropriately, as a force for good rather than ill.

    Revelations about Russian use of social media to worsen domestic political arguments among Americans and influence political discourse leave me shaken. But so does the advertising model and the profit motive that drives it. They’re really two sides of the same coin. Social media’s primary product is the user, and the products do a great job of gathering crowd data and statistically analyzing it to feed recommendations back to advertisers. It is not wrong to say that we are enabling it to help force-feed the consumer culture.

    Surely there must be a higher calling for the great technology that we’ve midwifed in the last few decades?

    A tsunami of negative press is building about social media and the ways Russian intelligence services subverted it to sway America during the last election cycle and even today. I am not using any weasel words to suggest that Russian intelligence purportedly or ostensibly hacked the election. The election scandal walks like a duck and it quacks, and with two sources of verification I’m calling it. For backup, the Mueller team issued a 37-page indictment against 13 people and 3 organizations alleging as much.

    Consider a recent New York Times article, “To Stir Discord in 2016, Russians Turned Most Often to Facebook” by Sheera Frenkel and Katie Benner. It says in part,

     In 2014, Russians working for a shadowy firm called the Internet Research Agency started gathering American followers in online groups focused on issues like religion and immigration. Around mid-2015, the Russians began buying digital ads to spread their messages. A year later, they tapped their followers to help organize political rallies across the United States.

    The social media instruments of choice? Facebook and its photo-sharing cousin, Instagram.

    Facebook and Instagram were mentioned 41 times in the 37-page indictment, which charged the Russians with “executing a scheme to subvert the 2016 election and support Donald J. Trump’s presidential campaign.”

    Now, Facebook and all the other social networks are not charged with any wrongdoing; they are, at least for now, the unwitting dupes of a sophisticated and well-planned effort. Fine, I get it. My unease with Facebook (and Twitter) was summed up well by Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism, who is quoted in the above article saying,

    “Facebook built incredibly effective tools which let Russia profile citizens here in the U.S. and figure out how to manipulate us,” Mr. Albright said. “Facebook, essentially, gave them everything they needed.”

    If that’s true, and it seems hard to dispute, can social networking tools ever again be used unquestioningly for good? Are they, like firearms, inherently dangerous and only capable of one use?

    Call me bad names if you wish, but as bad as that is, what troubles me more is the thought that the big social networks like Facebook, Instagram, Twitter, and others, as well as Google and Amazon, all capture a raft of information about us for the purpose of force-feeding us things that advertisers desperately want to sell.

    How effective are their tools and techniques? A separate article, also from the Times, shows a small sample of the online ads that the Russians used during the election that pick at the scabs of our society.

    One was designed to fan Southern animosity using a Civil War theme. Others told African Americans not to bother voting or suggested that the white government is against them. Another showed a picture of Hillary Clinton with an X across it and the caption “Hillary Clinton is the co-author of Obama’s anti-police and anti-Constitutional propaganda.” It goes on and on.

    You don’t have to like Clinton or Trump to understand that these things erode our democracy, because they make it harder to have dialog between opposing parties, and without dialog there is no compromise. By extension, if social media can be successfully used against us in an election, and Facebook admits that such ads reached 150 million Americans during the 2016 election, this stuff can be, and is being, turned against all of us in everyday commerce.

    They’re still at it,

    Another article in the Times (Feb 19, 2018) “After Florida School Shooting, Russian ‘Bot’ Army Pounced” by Sheera Frenkel and Daisuke Wakabayashi offered this chilling summary,

    One hour after news broke about the school shooting in Florida last week, Twitter accounts suspected of having links to Russia released hundreds of posts taking up the gun control debate.

    The accounts addressed the news with the speed of a cable news network. Some adopted the hashtag #guncontrolnow. Others used #gunreformnow and #Parklandshooting. Earlier on Wednesday, before the mass shooting at Marjory Stoneman Douglas High School in Parkland, Fla., many of those accounts had been focused on the investigation by the special counsel Robert S. Mueller III into Russian meddling in the 2016 presidential election.

    The bots’ owners don’t care which side of any debate they take and seem to prefer running both sides to ensure divisive reactions. Karen North, a social media professor at the University of Southern California’s Annenberg School for Communication and Journalism, summarized the situation,

    The bots are “going to find any contentious issue, and instead of making it an opportunity for compromise and negotiation, they turn it into an unsolvable issue bubbling with frustration,” she said. “It just heightens that frustration and anger.”

    My take

    Mostly I am disappointed that social networking isn’t really living up to what we envisioned. According to the theory, each of us is no more than five touches from any other person on the planet, and for most connections it’s fewer.

    The practical application of social networking has to do with Dunbar’s Number. Robin Dunbar is a British anthropologist who observed that humans can maintain stable social relationships with only about 150 other humans. The number puts a practical limit on all kinds of things that depend on close relationships. For instance, a military company comprises no more than about 150 individuals for reasons of cohesion. The company is the building block of all military units because every member has every other member’s back, and they all know it because they have personal relationships.
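    The arithmetic suggests why a ceiling appears so quickly: a group of n people contains n(n-1)/2 distinct pairs, so the number of relationships to maintain grows much faster than the head count. A quick check:

```python
from math import comb

# Number of distinct pairwise relationships in groups of various sizes.
for n in (15, 150, 1500):
    print(f"{n:>5} people -> {comb(n, 2):>9,} pairs")
# 15 people -> 105 pairs; 150 -> 11,175; 1500 -> 1,124,250
```

    At 150 members a group already holds more than 11,000 distinct relationships, which is roughly where Dunbar observed the limit of human social bookkeeping.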

    In the Middle Ages, monasticism spread for similar reasons. Civilization was preserved in Western Europe because whenever a monastery grew above Dunbar’s number, extra members were sent out to establish another one miles away. This happened naturally, mind you, not because someone had an algorithm, but because organizations just got too big for comfort.

    Social networking today has blown up Dunbar’s number. While I wouldn’t suggest that I can have anything like a relationship with the few thousand poor souls who follow me, I can at least keep them interested by occasionally flicking off a crumb of my existence for their consumption. But it’s pointless and all indications are that it’s harmful for multiple reasons to the body politic.

    So I’ve quit Facebook. Actually, they don’t let you quit; they deactivate your account so that you can come back. I really hope I don’t backslide though. I never got much from Facebook, and the harm it does to society weighs heavily on me. I’m just one person with an opinion, but it would be wonderful if other people did the same.

    Published: 2 months ago