In Sheera Frenkel and Cecilia Kang's "An Ugly Truth," the authors recount the company's long history of insider threats in which employees (mostly men) used the company's tools to stalk people (mostly women).
The stalking targets included both strangers and intimate partners - for example, an engineer used FB's tools to locate his partner after she fled their shared vacation hotel room in order to "confront her."
Another FB engineer stalked a woman who didn't return his messages after a date, accessing years of private messages and photos, including photos that his target believed she had permanently deleted, but which Facebook had secretly retained.
All told, Facebook fired 52 employees for data abuses between Jan 2014 and Aug 2015, after a policy change eliminated many access safeguards in the name of eliminating "the red tape that slowed down engineers."
In other words, Facebook was in a situation in which its users' interests were at odds with its shareholders'. By eliminating protections for its users, it allowed its engineers to work more efficiently, and increased its profits.
These kinds of conflicts - between shareholder and stakeholder interests - are the norm in business. Think of a busy retailer that cuts its cashiers: reducing payroll costs increases profits, at the expense of worker stress and longer waits for customers.
The question of how much value can be shifted from employees and customers to shareholders isn't really an economic one - it's really a *policy* question.
If we have strong labor laws - protecting cashiers from undue stress, extending unemployment benefits to workers who quit bad jobs, protecting workers from non-compete clauses, separating health-care from employment - then a business that screws its cashiers will lose them.
Or if the business has a regulated monopoly - a patent, a trademark or some other exclusive right that makes it the only game in town (say, the sole right to sell snacks in an airport), it can shift more value from customers to shareholders before the customers walk away.
Facebook - and other tech monopolists - have engineered a world where they get to side with shareholders over users, again and again, to the users' great detriment, without losing those users.
Economic analysis of tech monopolies focuses on "network effects" - the way more users make Facebook more valuable (you join FB because your friends are there, more friends sign up because *you're* there).
Taken on their own, network effects are cause for despair, predicting that tech will produce "natural monopolies" - an inevitable winner-take-all market. But that's obviously not true - I'm not typing this on a Cray or using AltaVista to look up facts while I do.
Far more important than network effects for antimonopoly analysis are *switching costs* - the things you give up when you quit a service. In FB's case, quitting means leaving behind your friends, communities and customers.
Now, this needn't be the case. You can switch phone companies or email providers without shattering your social connections. FB has engineered a high switching cost, blocking other services from connecting to it.
After all, the more you stand to lose by leaving FB, the worse FB can treat you before you're willing to leave. Zuck didn't abolish the safeguards that protected us from rogue FB employees because he's nosy - he did it because it's profitable.
He was betting (probably correctly) that no matter how unhappy the ensuing scandals made his users, they wouldn't make enough of them unhappy enough to quit for the losses to outweigh the gains from exposing us to predatory Facebookers.
Which is why proposals like the ACCESS Act, currently working its way through Congress, are such a big deal. It's a law that would force FB to let third parties plug into it, so you could leave FB but stay in touch with the people who stay behind.
In response to this (and the EU's Digital Markets Act and Digital Services Act), FB (and some lawmakers) warned that allowing third parties to connect to monopoly platforms would expose users to privacy risks by reducing tech companies' control over their services.
There's an element of truth to this, but left unsaid is that reducing the switching costs for leaving Facebook will protect users *from Facebook*.
When FB says that it needs total control over its servers or Cambridge Analytica will steal our data, we have to remember that FB *already* let Cambridge Analytica steal our data.
When FB gutted its internal controls to increase profitability by decreasing user protections, senior employees went to Mark Zuckerberg to warn him against it. Alex Stamos, then FB's CSO, reportedly objected strenuously, but was personally overruled by Zuck.
Contacted by Insider's Sarah Jackson, an anonymous FB spokesperson (literally the only kind of person FB allows anonymity to!) said "We've always had zero tolerance for abuse and have fired every single employee ever found to be improperly accessing data."
But that's a dodge. "Zero tolerance" isn't the same as "top priority."
If FB wants to prioritize preventing employees from stalking users, it would collect less data from users, delete data as quickly as possible, and put very strict barriers between employees and data.
If preventing stalking were FB's *top* priority, it would collect *no* data, and/or *never* let employees access it. Obviously, FB won't do that. It will always have to balance its users' privacy and safety against its shareholders' interests.
I am skeptical that FB will *ever* be a trustworthy guardian of its users' safety and privacy. I don't think the problem is that Mark Zuckerberg is the wrong self-appointed czar of 3 billion people's lives - I think the problem is that no one should have that job.
But if you disagree - if you want to fix, rather than abolish, Facebook - then you need to figure out which policies will tip the balance in favor of the public interest.
Policies like interop impose immediate, meaningful, concrete costs on FB every time it sides against the public and with its shareholders. If you're worried that interop will expose FB users to rogue companies or state actors, then regulate who can connect to FB.
But don't leave that up to FB. FB will side with shareholders over users whenever it's profitable to do so. Putting FB in charge of interoperability shifts that balance dramatically in favor of shareholders and against the public.
John Lodder (modified):
Denis Defreyne (modified):
@pluralistic A proper Facebook/Fediverse or Twitter/Fediverse bridge would be fantastic.
Honestly, Facebook and Twitter could just figure out how to implement ActivityPub. That would be great.
@danjones000 @pluralistic If Facebook and/or Twitter decide to take over the fediverse, would they be able to? Couldn't they tell people how convenient it is for them? Make it look like FB is "so good because it brings the best of both worlds," when in fact they'll find a way to monetize & control their users in both worlds! Just because you CAN do something doesn't mean you should. Best intentions etc. Bridge tech just helps jerky people act jerky in more places. Not a win. Not great.
@danjones000 @pluralistic Before implementing something, tech people need to do a better job of understanding & estimating how those with other motives (e.g. profit) will twist it into something the originators never intended or wanted. I speak from experience. Just because you have good intentions doesn't mean others do. Don't underestimate the avarice and insensitivity of people who are nothing like you. Cost/benefit analysis needs to include Big Money, Big Govt, bad actors.
I’m not sure that, technically, if Facebook became a Fediverse instance, it would be able to monetize or otherwise screw over users from other instances.
There have been companies that launched their own Fediverse instances. Gab is a particularly notable one. A lot of instances ended up blocking Gab entirely, because of the ideology of its members, before it eventually decided to stop federating at all. It wasn’t able to play nicely within the greater ecosystem (culturally), so it just gave up trying entirely.
I think the greatest concern might be Embrace, Extend, Extinguish.
A scenario such as this could happen: Facebook starts federating with ActivityPub. You can now follow your FB friends from Mastodon, Friendica, etc. Everybody’s happy.
FB then adds in functionality unique to FB, so that while they technically are federating and open to other instances, members from other instances are missing out on certain things that their FB friends are posting. Inevitably, people start moving back to FB because they’re missing out.
I think we're agreeing. Who knows how many instances fb would have? Who knows how well the instances would be moderated? Who knows if they'd sell advertising on them? What social nets would they bridge, in which directions? How much money would they throw at people to motivate them to do things to bring them even greater profits? FB has done a lot of research & grabbed a lot of people's data. They know how brains work, how cultures work, better than fedi volunteers, I think.
Of course there’s no market incentive. That’s why the bill @pluralistic mentioned would be important. It would compel FB and Twitter to interoperate with other services, in some way.
I’m simply suggesting that if this passes, the easiest route might simply be for them to implement ActivityPub. That would likely fulfill the requirements of the law.
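For the curious, the entry point for that kind of federation is actor discovery via WebFinger (RFC 7033), which maps a handle like `user@example.social` to the URL of the account's ActivityPub profile. A rough sketch of that first step (the handle and function name here are just for illustration):

```python
# Minimal sketch of ActivityPub actor discovery: a fediverse handle
# resolves to a WebFinger lookup URL (RFC 7033), whose JSON response
# in turn links to the actor's ActivityPub profile document.
from urllib.parse import urlencode

def webfinger_url(handle: str) -> str:
    """Build the WebFinger lookup URL for a handle like '@alice@mastodon.social'."""
    user, _, domain = handle.lstrip("@").partition("@")
    query = urlencode({"resource": f"acct:{user}@{domain}"})
    return f"https://{domain}/.well-known/webfinger?{query}"

print(webfinger_url("@alice@mastodon.social"))
# https://mastodon.social/.well-known/webfinger?resource=acct%3Aalice%40mastodon.social
```

A server that wanted to federate would serve this endpoint for its own users and query it on other domains; the actual ActivityPub exchange (inboxes, outboxes, signed activities) builds on top of that discovery step.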
@pluralistic ..and the more background checks we'll fail because we don't have a noticeable social media footprint.
Mamot.fr is a French-language Mastodon server, run by La Quadrature du Net.