Today, Wired ran my op-ed about Facebook's war on Ad Observatory, an NYU project that enlists users to gather the ads FB serves so researchers and accountability journalists can measure how FB is living up to its promises on paid disinfo.
Facebook is really bad at keeping its promises. At a moment in which paid disinformation on social media threatens the integrity of US elections and the credibility of US democracy, FB threatened to destroy a watchdog that documents FB's failures to live up to its promises.
FB is waging a two-front war on the scrappy academics who run this project: first, there are the legal threats (which depend on very shaky legal ground), and then there's a disinformation campaign that smears these academics and their work.
Here's what that disinfo looks like: FB says that Ad Observer (the browser plugin) threatens user privacy, and that it isn't needed because FB provides researchers with its own repository of ads.
Both are demonstrably false claims. When you scrape an ad FB showed you and send it to Ad Observatory, you don't violate your own privacy (because you have chosen to make this disclosure), nor anyone else's.
FB claimed Ad Observer users could expose information about which of their FB friends have seen the same ad. It would be terrible - even by FB standards - if this were possible, because it would imply that when FB shows you an ad, it also embeds info about who else has been shown that ad.
It's a mark of just how terrible FB's privacy practices are that multiple journalists found this claim credible enough to repeat (Ad Observer COULD collect your friends' interactions with ads, but it provably does not).
But what about FB's claim that Ad Observatory is redundant because FB already offers researchers access to a database of the ads running on its service? Also demonstrably false. FB's database misses multiple instances of paid disinformation.
How do we know that? Simply: AD OBSERVER CAUGHT THEM.
Get that: FB claims that it's being transparent. A watchdog proved they're not. To fix this, FB proposes that we annihilate the watchdog.
As I wrote in Wired: "This may be par for the course with Facebook, but it's not something we as a society can afford to tolerate any longer."
@pluralistic Why would Facebook want false claims about the outcome of the 2020 election to spread? Do they really need the money? They have so much $ already? Or are they trying to influence the outcome of American democratic elections? Are there state or federal laws against undue influence upon US elections?
a) They don't want to get into legal and PR trouble by allowing political disinfo
b) They don't want to spend the $$ to moderate their self-serve platform (or shut down the self-serve part), and anything less won't work
c) They don't want critics to point out the inadequacies of b) in order to avert a)
The social network of the future: No ads, no corporate surveillance, ethical design, and decentralization! Own your data with Mastodon!