V. N. Alexander:

Why do MAHA people have reason to be concerned about a spook-adjacent bureaucrat being made head of the CDC, a bureaucrat whose main interest is using AI to make decisions for us?

Malone mentions “Machine Learning” AI above. That would be an appropriate tool to apply to an enormous dataset to find patterns that people have otherwise missed. ML works well when you already pretty much know what you’re looking for and the data is quantifiable.
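To make that concrete, here is a minimal sketch of the kind of supervised pattern-finding I mean. Everything in it is invented for illustration: the features, the outcome, and the numbers are synthetic, not drawn from any real dataset.

```python
# Minimal sketch of supervised "pattern-finding" on quantifiable data.
# All features, outcomes, and numbers are synthetic, invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(20, 80, n)
bmi = rng.normal(27, 5, n)
exposure = rng.uniform(0, 1, n)
# The outcome depends mostly on exposure -- the "pattern" ML should recover.
disease = (exposure + 0.01 * (age - 50) + rng.normal(0, 0.3, n)) > 0.7

X = np.column_stack([age, bmi, exposure])
X_train, X_test, y_train, y_test = train_test_split(X, disease, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Feature importances point at which measured factor tracks the outcome --
# useful precisely because we chose in advance what to measure and predict.
for name, imp in zip(["age", "bmi", "exposure"], model.feature_importances_):
    print(f"{name}: {imp:.2f}")
print("held-out accuracy:", round(model.score(X_test, y_test), 2))
```

Note what the sketch requires: a quantified outcome chosen in advance and measured features we suspect matter. That is the sense in which ML only finds what you already mostly know to look for.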

However, no one with a pulse glancing at the VAERS data can miss the safety signal with vaccines. The problem isn’t that we can’t find the suggestive patterns; the problem is that the CDC hasn’t allowed anyone to properly interpret them.
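For reference, the standard screen for such signals in spontaneous-report databases like VAERS is disproportionality analysis, and it requires no AI at all. A minimal sketch of the proportional reporting ratio (PRR) on a 2x2 table of report counts follows; the counts are invented placeholders, not actual VAERS figures.

```python
# Proportional reporting ratio (PRR), a standard disproportionality screen
# for spontaneous-report databases like VAERS.
# The counts below are invented for illustration, not real VAERS data.

a = 120   # reports: vaccine of interest, event of interest
b = 880   # reports: vaccine of interest, all other events
c = 300   # reports: all other vaccines, event of interest
d = 9700  # reports: all other vaccines, all other events

prr = (a / (a + b)) / (c / (c + d))
print(f"PRR = {prr:.2f}")  # here 4.00; values above ~2 are conventionally flagged
```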

Applying Machine Learning AI to the Morbidity and Mortality Weekly Report (MMWR) might find some previously unsuspected or unproven correlations between diseases and treatments, toxins, and/or lifestyle factors. But again, most of these patterns have already been discovered by clinical research. Using Machine Learning AI might help those on the MAHA team make their arguments more confidently.

I imagine Bobby wants to apply AI — not generative AI (ChatBots) but machine learning AI — to the big datasets to try to strengthen his arguments about the causes of chronic disease. That’s all good.

But, as I have argued on my stack, the concern is that we are being shepherded toward technocratic rule, wherein, instead of bureaucrats and politicians making decisions about how to manage the flock, generative AI will make the decisions. AI technocracy proponents claim that generative AI can be more “objective” and can find the “best” way to win the game, just as AlphaGo (a reinforcement-learning system, not a generative model) found the best ways to win the game of Go.

In my stack, I have thoroughly ridiculed people making such assertions, e.g., Elon Musk, Aza Raskin, Tristan Harris, and Joe Rogan. Recently, Naomi Wolf added her voice to the chorus of those fear-mongering that generative AI will soon become more “intelligent” than people.

I am on Naomi’s side when she warns about the dangers of all our data being collected and correlated by private entities to train their own generative AI. I disagree with her that a “sovereign AI,” controlled by the government, would be better. It doesn’t matter who controls the AI; it is an essentially useless tool for that job. Our government is our servant, not our master. We need to stop any bureaucrat who suggests that all we need is a government that makes better decisions on our behalf.

On March 18, I was disappointed in Naomi when Shannon Joy mentioned my arguments that AI won’t ever be “intelligent,” and Naomi unhelpfully regurgitated the marketing she’s been fed: “It’s my understanding that there is going to be a point at which AI is essentially thinking for itself… I’ve been around enough people who are on the inside of creating and distributing the AI of the near future. What Elon Musk is saying is not unusual or just his point of view; it’s kind of the consensus. I heard from the AI specialist at Microsoft [at a recent conference in India]…I heard from a guy who’s at the center for AI at Harvard, a guy from Deloitte who specializes in AI futures…” Consider the source of this consensus opinion, Naomi.

Naomi probably fears that this new superhumanly intelligent generative AI (which hasn’t appeared yet, and there is no sign that it will) will be used as a tool by the elite to carry out their dastardly biomedical security plans on us. She is right insofar as any powerful tool applied by the wrong people will lead to harm. But she is wrong to believe that generative AI will become more intelligent than humans, or indeed intelligent at all.

Generative AI is only a glorified predictive text engine that can mimic human communication. Its responses to prompts are probabilistic. It does not perform logical operations, it cannot perform creative or critical thinking, and it never will. My Substack is dedicated to explaining why this is so, but I think anyone who has tested these ChatBots understands it.
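For readers who want to see what “predictive text” means mechanically, here is a toy sketch: at each step the model samples the next token from a probability distribution conditioned on what came before. The bigram table below is invented; real models condition on far longer contexts, but the sampling step is the same in kind, which is why the output is probabilistic rather than logical.

```python
# Toy illustration of probabilistic next-token generation.
# The bigram probabilities are invented; real LLMs condition on much
# longer contexts, but each step is still a sample from a distribution.
import random

next_token_probs = {
    "the": {"vaccine": 0.4, "data": 0.35, "signal": 0.25},
    "vaccine": {"data": 0.6, "signal": 0.4},
    "data": {"shows": 0.7, "the": 0.3},
}

def generate(token, steps=4):
    out = [token]
    for _ in range(steps):
        dist = next_token_probs.get(token)
        if dist is None:  # no continuation known for this token
            break
        # sample the next token in proportion to its probability
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        out.append(token)
    return " ".join(out)

print(generate("the"))  # e.g. "the data shows the vaccine"
```

Nothing in that loop checks whether the output is true or follows logically; it only asks what is statistically likely to come next.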

If Susan Monarez has the same dream that all those at the spook agencies have had since the Total Information Awareness program was launched, she wants to promote the idea that we can “fix” government by allowing generative AI to make decisions about how to govern the population. The Total Information Awareness program was scrapped when the spooks realized that collecting data on us violates our constitutional rights. And so Palantir was created, with CIA seed money through its venture arm In-Q-Tel, to collect the data. They think (wrongly, given the limits on predicting the behavior of non-linear complex systems) that more and more Big Data will finally improve generative AI’s capabilities such that it will be able to make accurate predictions, and therefore to make the “best” decisions and control the public through legislative acts in the best way possible.
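The prediction limit I am pointing to is easy to demonstrate. A minimal sketch using the logistic map, a textbook non-linear system: two trajectories that start a hair apart become uncorrelated within a few dozen steps, so no amount of additional data rescues long-range forecasts.

```python
# Sensitive dependence on initial conditions in the logistic map,
# a standard toy model of non-linear dynamics (r = 4 is the chaotic regime).
def logistic_map(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.400000)
b = logistic_map(0.400001)  # differs only in the sixth decimal place

for t in (0, 10, 25, 50):
    print(f"t={t:2d}  a={a[t]:.6f}  b={b[t]:.6f}  gap={abs(a[t] - b[t]):.6f}")
# By roughly t=25 the two trajectories bear no resemblance to each other:
# a tiny measurement error swamps the forecast, no matter how much data
# went into building the model.
```

If a one-line deterministic equation defeats long-range prediction this badly, Big Data on a society of interacting people will not do better.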

In truth, however, I suspect those high-level spooks aren’t stupid. They know that generative AI is just a predictive text engine, not a prophet. But they want the public to believe that AI can be more objective and less corrupt than a human bureaucracy, so that they can institute a new form of government: technocracy.

The remedy is to convince the public that everything being said about generative AI is hype. I’ve been doing my best.

The solution to this problem is also to stop allowing those in government to think of us as sheep who need bureaucrats, whether human or AI, to make decisions for us. A society wherein the people make fully informed decisions for themselves is the best society. The discoveries of the complexity sciences back in the 1980s and 1990s told us this. Top-down control is dumb. We have never reformed our political theories of appropriate governance based on what we now know about non-linear complex systems. We need to do this.

And finally, here’s why Susan Monarez probably wants to look at the VAERS data using Machine Learning. This is my guess, anyway. The Covid-19 shots were a massive experiment on the population. Too many people in biomedical research are convinced that gene therapy will be the way to cure disease and even stop the aging process. (There are various reasons why the people pursuing this goal cannot see that it is futile, which I’ve written about on my Substack.) The VAERS database is a goldmine of information about how the different formulations of the different vaccine batches performed in the mass experiment. Now they want to use that data, and correlate it with other health records that DOGE has collected, to improve the gene therapy platform.

I think Monarez is trying to leverage Bobby’s interest in using Machine Learning AI (to make his case about chronic disease) to get at the VAERS/MMWR data and use it to train generative AI to improve the gene therapy platform. In the larger picture, I think Monarez and her spook tech friends hope to train generative AI on all government data (all that DOGE may have collected) to transform generative AI into Artificial General Intelligence (AGI). They will find that it won’t work; AGI cannot be made that way. But that might not stop them from imposing technocracy on us, if enough of us have been fooled into thinking generative AI can make logical decisions.

Long rant, sorry. My critique of Musk and DOGE is not just a knee-jerk reaction, nor is it caused by TDS. I am thrilled about the possibility of cutting the federal budget down to size. I’m just worried that the Trump Administration will try to replace the human bureaucracy with an AI bureaucracy. Bureaucracy itself needs to go; we need to think for ourselves.
