by Ken Macon, Reclaim The Net, Dec. 16, 2021
In April 2020, Facebook CEO Mark Zuckerberg offered National Institute of Allergy and Infectious Diseases (NIAID) Director Anthony Fauci help in “facilitating” decisions regarding lockdown measures in the US, private emails exchanged between the two show.
Zuckerberg’s offer referred to aggregating “anonymized” user reports that would help NIAID decide whether to tighten or loosen lockdown mandates.
The emails, which The National Pulse says it obtained exclusively, show Zuckerberg willing to put Facebook’s gigantic user data trove at NIAID’s disposal in this “user reports” form. They came a month after the pair started communicating directly – a fact revealed by another batch of government-redacted emails Zuckerberg and Fauci exchanged in March 2020.
In the email dated April 8, Zuckerberg explains that he wants to help Fauci and his organization “facilitate” decisions and “prioritize” the right work. He mentions that Facebook already had something called a symptom survey aimed at providing indicators of cases by county, letting the tech giant “inform public health decisions.”
“If there are other aggregate data resources that you think would be helpful, let me know,” the Facebook CEO wrote to Fauci, whom he referred to as “Tony.”
Zuckerberg also expressed willingness to provide more “resources” in order to speed up vaccine development.
In his response, Fauci said, “I will think hard about ways that we may take you up on your offer.”
This is not the only time Zuckerberg and Fauci have communicated privately, although Fauci denied interacting with the Facebook chief even after other emails surfaced showing the pair discussing a coronavirus “information” hub being set up on the giant social media site as a means to steer the messaging in the desired direction.
The content of these emails that became public was also redacted, with some interpretations of the missing text suggesting it was Zuckerberg offering to censor particular topics around coronavirus.
Regardless, observers now say that the emails show how powerful private entities with the potential to sway public opinion can effectively collude with the US government behind the scenes.
It’s also noteworthy that Zuckerberg – whose foundation, the Chan Zuckerberg Initiative, reportedly spent millions to prop up Joe Biden’s campaign – is often the target of criticism by liberal media and Democrats as “not doing enough.”
by Didi Rankovic, Reclaim the Net, Dec. 16, 2021
The UK continues to draft its new law meant to tackle online “harms,” dubbed the Online Safety Bill, which many critics see as a massive overreach and a draconian effort that could end up causing harm itself – at the expense of free speech and privacy, and even innovation.
The latest proposal that could make it to the final version, to be presented to parliament in early 2022, seeks to make some portions of the existing draft even more restrictive, using vague language to cast the net wide and leave matters open to interpretation. We obtained a copy of the draft for you here.
Among the suggestions that significantly change the current provisions in the proposed bill – which already has the nebulous task of fighting “legal but harmful” content – is an assessment of the “potential” harmful impact of algorithms. New criminal offenses would include “knowingly distributing seriously harmful misinformation,” “stirring up” violence against women or others based on gender or disability, “promoting self-harm,” and “cyber-flashing.”
These recommendations come from a new, 191-page parliamentary report, which also calls for mandating that tech companies appoint a “safety controller” who would be liable if found responsible for “repeated and systemic failings.”
And while expanding some powers, the report seems to want to limit others that the original draft aims to give the regulator. It notes in this context that the definition of illegal content is “too dependent on the discretion of the secretary of state.”
The report also suggests that the “legal but harmful” definition should be removed if online content targets adults.
It is for these and similar reasons that free speech and privacy advocates like those from the Adam Smith Institute (ASI) blast the proposed bill as posing “gigantic threats” to freedom of speech, privacy and innovation, and dismiss the new report as doing little to improve it.
The bill’s stated core purpose is – “think of the children” – to protect them online, but in reality its scope is massively broader. It is nonetheless endorsed by some child safety groups, though they are unhappy that the draft doesn’t outlaw end-to-end encryption but “merely” treats encryption as a risk factor that tech companies would be forced to assess.
The Internet Society, however, sees the report as “a reflection of a public debate largely framed in misleading and emotive terms of child safety.”
The non-profit believes the parliamentary committee is ignoring the danger of undermining encryption. “As a consequence, we see a bill that will result in more complex, less secure systems for online safety, exposing our lives to greater risk from criminals and hostile governments,” they say.