Dystopian technocracy updates


[NB – Technocracy is a broad term that implies governance on the basis of technique or technical expertise: “a system of government in which the decision-maker or makers are selected on the basis of their [alleged] expertise in a given area of responsibility, particularly with regard to scientific or technical knowledge. This system explicitly contrasts with representative democracy” (source).

According to this definition, health czars (i.e., state-mandated health advisors such as Drs. Fauci or Tam) are now running formerly free societies as technocratic states, having effectively usurped representative democracy under “emergency orders” – probably in perpetuity. Democracy will never return until enough people demand it.

The term technocracy can also apply to governance via technology directly, as this example of automation of the justice system (below) illustrates. Automation once meant replacing factory workers with robots, bank tellers with ATMs, and grocery clerks with self-checkout machines. Now it has spread to criminal charging. It reminds me of the satire of computerized summary judgments in the movie Idiocracy, which shows how badly these systems can err.

The globalist plan is for all of us to be chipped and tracked at all times, with zero privacy, under the pretext of public health and anti-terrorism — but really in order to impose totalitarian control. It’s obvious that facial recognition technology is designed to suppress dissidents. It’s already being used in the genocide of the Uyghurs.]

China introduces “AI prosecutor” that can automatically charge citizens with a crime

While in the West, automated “AI” censorship and surveillance systems mostly police people’s speech and movement, in China work appears to be well underway on a machine that would act as an AI-powered prosecutor.

The product, which has already been tested by the busy Shanghai Pudong prosecutor’s office, can charge people suspected of eight common crimes with 97 percent accuracy, the researchers developing it claim.

According to the South China Morning Post, the cases the “AI prosecutor” is allegedly highly competent in handling involve crimes like credit card fraud, dangerous driving, gambling, intentional injury, obstructing officials, and theft, as well as something called “picking quarrels and provoking trouble.”

The last one is considered particularly “problematic” since its definition, or lack thereof, can cover different forms of political dissent.

And now the plan is to introduce a machine with decision-making powers, such as whether to file charges and what sentence to seek, on a case-by-case basis.

That, said Professor Shi Yong, who heads the Chinese Academy of Sciences’ big data and knowledge management lab behind the project, is what sets it apart from other “AI” tools that have already been in use in China for years. One of them is System 206, whose tasks are limited to assessing evidence, the danger a suspect poses to the public, and the conditions under which they may be apprehended.

But the tech behind the new artificial prosecutor looks to be both far more ambitious and more advanced. What has been disclosed is that it can run on a desktop PC, processing 1,000 traits extracted from case descriptions filed by humans and pressing a charge on that basis.

It’s unclear whether the database of 17,000 cases spanning five years used to train the algorithms is enough to consider the project true AI – or whether the same result could be achieved by rule-based algorithms.
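As a loose, purely illustrative sketch of the underlying idea (not the actual system, whose 1,000 traits and training data are undisclosed), charge prediction from case text can be thought of as scoring a description against learned features for each charge category. The categories and keyword features below are invented for illustration:

```python
# Hypothetical toy "charge classifier": scores a case description against
# keyword features per charge and returns the highest-scoring charge.
# A real system would use thousands of learned features, not a handful
# of hand-picked keywords.

CHARGE_KEYWORDS = {
    "credit card fraud": {"card", "transaction", "account"},
    "dangerous driving": {"vehicle", "speed", "collision"},
    "theft": {"stolen", "property", "shoplifting"},
}

def predict_charge(description: str) -> str:
    tokens = set(description.lower().split())
    # Score each charge by how many of its keywords appear in the text.
    scores = {charge: len(tokens & kws) for charge, kws in CHARGE_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(predict_charge("suspect used a stolen card for an online transaction"))
# → credit card fraud
```

Even this trivial sketch shows where the anonymous prosecutor’s worry comes from: the classifier always returns *some* charge, and nothing in the mechanism itself assigns responsibility when it picks the wrong one.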

Either way, not all human prosecutors are thrilled about having part of their workload taken over in this way – even though reducing that workload is precisely the stated motive for developing the tech.

“The accuracy of 97 per cent may be high from a technological point of view, but there will always be a chance of a mistake. Who will take responsibility when it happens? The prosecutor, the machine or the designer of the algorithm?” one Guangzhou-based prosecutor noted, speaking on condition of anonymity.

Police robot from the film RoboCop

Clearview AI’s controversial facial recognition tech is involved in 84 Toronto criminal cases

Clearview AI, a poster child for the controversies surrounding facial recognition software used for mass surveillance, and particularly popular among US law enforcement agencies, has also been used in Canada.

CBC News reports on this, citing an internal document it obtained through an access-to-information request, which shows that Toronto police used Clearview in 84 criminal investigations.

In the US, the startup’s product, which has caused a huge backlash among privacy advocates, is said to have been used by more than 300 local, state, and federal agencies. The Canadian figures seem low in comparison but could be just “the tip of the iceberg,” since they concern only one city and cover only the period from October 2019 to February 2020.

Clearview works by scraping, without people’s consent, billions of images posted around the world on Facebook, Instagram, and YouTube, as well as what’s described as “millions” of other websites.

These images are then put into a database. When a customer such as a police agency uploads its own photos to identify a person, the facial recognition tech compares them against Clearview’s database of images collected from the web without permission and returns any matches.
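The matching step can be sketched as a nearest-neighbor search over numeric face “embeddings.” This is a generic illustration of how such systems work, not Clearview’s actual algorithm; the vectors, identity names, and threshold below are all invented:

```python
import math

# Generic face-matching sketch: each face is reduced to an embedding
# vector; a probe photo matches a database entry when the cosine
# similarity of the embeddings clears a threshold.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(probe, database, threshold=0.9):
    # Return the identity whose stored embedding is closest to the probe,
    # or None if nothing clears the similarity threshold.
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

db = {"person_a": [0.9, 0.1, 0.3], "person_b": [0.1, 0.8, 0.5]}
print(best_match([0.88, 0.12, 0.31], db))
# → person_a
```

The threshold is the crux: set it low and innocent lookalikes are flagged; set it high and the tool misses matches – which is why accuracy claims for such systems depend heavily on how they are tuned.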

In Toronto, the document shows, officers uploaded over 2,800 photos to Clearview’s system to match suspects, victims, and witnesses in the 84 now-confirmed cases, during investigations carried out over three and a half months.

Aware of the dark cloud of controversy hanging over the US startup, the Toronto police first denied using its services, then admitted that the technology had been used – without, at the time, providing any further details.

The internal document that has now come to light reveals that Clearview AI’s free trial was apparently so appealing that police officers started using it without first informing one another, or their supervisors.

“When you’re enforcing the law, your first obligation is to comply with it,” commented Canadian Civil Liberties Association’s Brenda McPhail. Canada’s privacy commissioners have marked Clearview AI as a mass surveillance tool that breaks the country’s privacy laws.

Bending the Knee to China: Intel apologizes to China for shunning slave-labor region Xinjiang

Intel has joined a growing number of large western tech firms that have had to backtrack on moves pertaining to China’s internal policies, which are criticized internationally for violating human rights.

But when the US chipmaker appeared to take a stance on the Xinjiang region and labor conditions there, calling on its suppliers not to source components from the region or rely on the local workforce, the company quickly issued an apology.

In the apology, posted in Chinese on the giant WeChat platform, Intel said its original letter – which effectively called on its partners to boycott Xinjiang-based supply chains and labor – was motivated solely by its desire to comply with US law when doing business in China.


The letter, Intel said, was in no way meant to express a position on the matter – i.e., to support western claims that Chinese authorities are resorting to forced labor in the troubled region.

“For causing trouble to our esteemed Chinese customers, partners and the general public, we express our sincere apologies,” Intel’s apology reads.

In addition, Intel “thanked” everyone who raised the issue – a reference to the furor on Chinese social media after Intel penned the letter to contractors, after which a spokesperson for the Chinese Foreign Ministry commented on the controversy, urging US companies to “respect facts” regarding the conditions faced by workers in Xinjiang.

The ministry flat-out denied that forced labor is taking place in the region and put the allegations down to anti-Chinese forces looking to sully China’s reputation abroad.

But the apology itself did not remain outside the realm of high politics: the US State Department’s spokesperson reacted – without mentioning Intel by name – by saying that US companies should not feel the need to apologize for “standing up for fundamental human rights.”


In the apology, however, Intel made it clear it was not attempting to stand up for human rights but to comply with US legal regulations – while evidently working very hard to maintain its presence in the Chinese market, which accounted for more than a quarter of its revenue in 2020 in the highly competitive semiconductor business.

China’s significance for US tech companies goes beyond providing a large consumer market, since it is also a manufacturing power, prompting many to forget about democratic ideals and think about the bottom line.


And a related story from The Daily Wire:

China Creating ‘Brain-Control Weaponry’, Biden Administration Responds

The U.S. levied sanctions against numerous Chinese entities this month after it said that the communist nation was using emerging biotechnology to create “brain-control weaponry” and other technology that creates a serious risk to U.S. national security.

Former Director of National Intelligence John Ratcliffe warned that the U.S. is not going to use the same questionable practices that China is. “We’re not going to place our own soldiers, sailors, and airmen at risk, which is what the intelligence tells us the Chinese are willing to do,” he said. “They want to advance at any costs, including those that are harmful to their own population.”

“The Pentagon says Beijing already uses these technologies, including biometric surveillance tools and facial recognition to track dissidents and journalists and to suppress the Uyghurs,” Griffin added.

The U.S. Commerce Department said in a statement on December 16th: “Today, the U.S. Commerce Department’s Bureau of Industry and Security (BIS) took action to address the ongoing threats to U.S. national security and foreign policy presented by the People’s Republic of China (PRC)’s efforts to develop and deploy biotechnology and other technologies for military applications and human rights abuses. BIS is also taking action against entities operating in the PRC, Georgia, Malaysia, and Turkey for diverting or attempting to divert U.S. items to Iran’s military programs.”

The final rule issued by the Commerce Department said that the actions against Chinese entities were due to the communist nation using “biotechnology processes to support Chinese military end-uses and end-users, to include purported brain-control weaponry.”

“The scientific pursuit of biotechnology and medical innovation can save lives. Unfortunately, the PRC is choosing to use these technologies to pursue control over its people and its repression of members of ethnic and religious minority groups. We cannot allow U.S. commodities, technologies, and software that support medical science and biotechnical innovation to be diverted toward uses contrary to U.S. national security,” said U.S. Secretary of Commerce Gina M. Raimondo. “The U.S. will continue to stand strong against efforts by the PRC and Iran to turn tools that can help humanity prosper into implements that threaten global security and stability.”

A senior U.S. official told The Financial Times that China was seeking to use emerging biotechnologies to try to develop future military applications that included “gene editing, human performance enhancement [and] brain-machine interfaces.”
