There is no AI. What is being called artificial intelligence and sold as snake oil under that label is actually artificial stupidity. It will destroy your own personal ability to critically reason. It will destroy your company—by reducing, not increasing, productivity; and by increasing, not reducing, your risk-exposure to critical errors. And it will destroy the economy. Not by taking jobs. It will never replace any significant number of jobs, because it is garbage. It can’t do even the simplest job. It fucks up more than a tween on weed. Rather, it will destroy the economy by wrecking pensions and banks and tanking the global economic system, resulting in massive layoffs and food lines, because any time now trillions of dollars of the global economy are literally going to evaporate—the moment people realize they are being conned and AI can never make money, or do any of the big things its grifters have desperately been claiming, and they even more desperately try to sell their position, and the whole stock market crashes.

AI is the fanciest of Nigerian princes, whom CEOs (who we already knew were, as a class of people, consistently idiots) are falling for because the Scam is Great. Tulips for everyone! We know the rich are idiots who continually wreck the world with their phenomenal stupidity. They’ve done it literally twice already in Generation Z alone (from Dot-Com to the Big Short). Those were literally exactly the same stupid things they are doing now. They can’t even learn from their own mistakes ten years prior. That’s how stupid rich people are. So stop listening to them. Stop taking their advice. Stop buying their snake oil. Elon Musk is only the most prolific idiot. They are all idiots. And they are conning you—and each other (high on their own supply)—with fake AI. If you don’t already know all this, if you don’t believe me, then read on. This article is my own desperate attempt to wake you the fuck up.

This article is also continuously updated. New links with more studies and expert analyses are often being added. And its conclusions have not changed but only been increasingly confirmed.

No, AI Is Not Good at Anything

AI content has been banned on my blog for months now (see my Comments & Moderation Policy). No comments that even smell like AI content will be posted. You need to think for yourself here. No more “asking randos on the internet” to write long dumb analyses full of incoherent trivia and crap. And that is all asking AI is: all it does is “auto complete” what most people are saying about a thing on the internet, often in an ill-thought jumble. Which means mostly it’s going to be trivial or garbage, because most of the internet are idiots who don’t know what they are talking about, and AI can’t tell the difference between high and low quality information (even intelligent humans struggle to do that), and doesn’t understand anything it is doing. And it will never improve.
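If you want to see what “auto complete what most people are saying” literally means, here is a toy sketch. This is my own illustration, not actual LLM internals: real LLMs do this with neural nets over trillions of words, but the training objective is the same, to predict the statistically likely continuation of the text, true or not:

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): "autocomplete" means emitting the
# statistically most common continuation found in the training corpus.
corpus = (
    "the earth is round . the earth is flat . the earth is round . "
    "the moon is made of cheese ."
).split()

# Count which word most often follows each two-word context.
counts = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    counts[(a, b)][c] += 1

def autocomplete(a, b):
    # Return the most frequent continuation: i.e., whatever the corpus
    # "says most," with zero regard for whether it is true.
    return counts[(a, b)].most_common(1)[0][0]

print(autocomplete("earth", "is"))  # prints "round"
```

It answers “round” only because that string appears more often, not because it knows anything about the Earth. Feed it a corpus where “flat” dominates and it will confidently say that instead. That is the whole mechanism, just enormously scaled up.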

This is a scientific fact now. Multiple studies have confirmed that AI makes so many mistakes it reduces productivity because it takes more time to fix all its mistakes (and vet all its machinations to catch mistakes) than it would have taken to just do the task yourself. Humans are more productive than AI. And science has proved this will always be the case: the LLM framework that current AI is based on can never get better. Its error rate will always be around the same no matter how much data it gets, no matter how many processors it has, no matter how much electricity it burns. It’s a dead-end technology.

It’s even worse, of course. Because these AIs are easily exploited by state and corporate bad actors to get them to say whatever they want, even without having any source control over the AI itself. They can simply flood the internet to spoof every AI there is. So you’re really just reading propaganda. Whether by design or happenstance, what gets said the most gets told to you the most. That’s the opposite of what critical thinkers should be consulting (see A Vital Primer on Media Literacy and A Primer on Actually Doing Your Own Research). Indeed these AIs are as easy to manipulate as your drunk uncle (no, really, they are). So why would you ever trust them? It’s bad enough that they have intolerably high error rates and a high output of mundane slop (which even leads to model collapse). They are also capturable by bad actors. Honestly.

And this is not opinion. It’s fact.

  • That AIs are unreliable and exploitable and nothing can fix them? Proved.
  • That AIs are not rational thinking machines? Proved.
  • That AIs err so often because they don’t (and can never) comprehend anything they are doing? Proved.
  • That AIs are dangerously stupid? Proved.
  • That using AIs makes you stupid? Proved. Proved. And proved again.
  • That this AI can’t be fixed and has nothing left to show us? Proved. Proved. And proved.

AI will survive the decade only in penny-ante or hyper-specialized applications, generating what everyone knows are unreliable results that constantly have to be fact-checked or corrected, essentially doing the same thing Clippy and Siri and other universally loathed tech have already been doing for a decade now. We’ll barely notice the difference. We’ll just keep rolling our eyes at the same crap annoyances and results as ever—or hiring experts (or engaging in hours of our own labor) to make it work, just like every other technology ever (see my followup article How to Use Pseudo-AI). Because in reality, “95% of AI pilots are failing” because AI doesn’t actually work (and the actual rate might be 97.5%). Andrew Zuo explained this in “Who Would Have Thought an MIT Study Would Be the Thing to Pop the AI Bubble?”, putting it this way:

A recent study showed that AI slowed developers down by 19% despite them thinking it had actually sped them up by 20%. This is because of a few reasons. First there’s the overhead from prompting the AI and waiting for the response that can break your flow. Then you have to manually review the AI’s work. Then AI work is often not good enough so you either get rejected or you have to try prompting again. Plus developers often used AI for trivial changes that would be much faster if done manually.

The same point was summarized by Will Lockett in “AI Pullback Has Officially Started.” As he puts it:

A recent MIT report found that 95% of AI pilots didn’t increase a company’s profit or productivity. A recent METR report also found that AI coding tools actually slow developers down. Why? Well, generative AI models, even the very latest ones, often get things wrong and “hallucinate,” which requires considerable human oversight to correct. IT consultants Gartner attempted to quantify this and found that AI agents fail to complete office tasks around 70% of the time. Simply put, the amount of human oversight necessary, even for simple tasks, almost always undermines whatever productivity gains are made. In other words, in the vast majority of cases, it is more productive not to use AI than to use AI. Yet despite all the evidence, AI is still being shoehorned in everywhere and being praised as the next industrial revolution. Or is it? Because there is also mounting data that the world is beginning to turn its back on this questionable technology.

Hence “The Hard Truth About Enterprise AI” is “Why 42% of Companies Are Abandoning Their Projects” (for a spectacular but paradigmatic example see “Remember Vibe Coders?” by Adarsh Gupta; for many more see “AI Is Producing More Garbage Code Than Ever” by Jose Crespo). And now, AI Workslop is reducing rather than increasing productivity in almost every job environment. (Salesforce is now learning this lesson the hard way, along with Microsoft. As are others.)

For more examples, published since I first released this article, that are worth consulting to drive home the point, see:

AI is so unreliable it’s like hiring a sub-minimum-wage high-school dropout to do your clerical work. There is a reason corporations are already not hiring sub-minimum-wage high-school dropouts to do their clerical work. They tried to replace even fast-food cashiers with AI and it sucked so bad they gave up. Meanwhile we’re increasing cashier wages and jobs. The position is technically now called “counter worker” because almost no one solely handles cash anymore, but these jobs are growing, not catastrophically declining. AI isn’t replacing them. It can’t. That’s a snake-oil myth. Even when “self checkout” became a thing (with no involvement of AI), it cost more than it saved, while companies simply shifted those workers to warehousing, stocking, delivery, etc. The result? Relative to store count and revenue, Walmart’s employee count has not meaningfully changed in ten years. And wages are increasing. AI will have no effect on this. Because of a basic rule in economics: if you double the productivity of your workers, the tendency is not to fire half your workers, but to sell twice as much stuff. That’s why productivity levers tend to increase rather than reduce employment. If they kill any jobs at all, they create more new ones. All the alarmist hype about AI replacing millions of jobs is a lie, invented to sell AI to deep-pocketed and gullible companies or shareholders, and then golden-parachute away once the plane starts going down.
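The arithmetic of that economic rule is worth making explicit. Here is a toy calculation (all numbers are made up for illustration, and it assumes demand can absorb the extra output, which is exactly the historical tendency I just described):

```python
# Illustrative arithmetic only (invented numbers): why doubling productivity
# tends to grow output rather than halve headcount.
workers = 100
units_per_worker = 50          # output before the productivity lever
revenue_per_unit = 10

before = workers * units_per_worker * revenue_per_unit   # 50,000

# Option A: fire half the workers and hold output flat.
option_a = (workers // 2) * (units_per_worker * 2) * revenue_per_unit

# Option B: keep everyone and sell twice as much.
option_b = workers * (units_per_worker * 2) * revenue_per_unit

print(before, option_a, option_b)  # 50000 50000 100000
```

Option A leaves revenue exactly where it was; Option B doubles it. Which is why, historically, companies facing a genuine productivity lever take Option B.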

Hence ultra-specialized uses for this kind of AI will exist but hardly anyone will notice much difference from now, or be overly impressed by it. For example, LLM systems can assist experts in transcribing damaged papyri (see AENEAS)—but only assist. Its error rate is so high that you need the same number of human experts using it for it to be usable at all. It simply improves accuracy by finding things humans can’t, and saves time by ballparking. But it can’t replace a person. We’re seeing the same thing unfold in the legal profession. Likewise A/V AI tools: they require human labor to use, and to check and correct the output, and are mainly being used by people who couldn’t afford humans in the first place, while humans still do better and more reliable work. So it isn’t really displacing artists as much as impelling artists to upskill themselves to outperform AI slop. So all that AI will do is increase the productivity of existing experts, not replace them. Certainly not at scale. It will be just like robots did to manufacturing seventy years ago, and computers to clerical tasks forty years ago—and CGI did to cinema twenty years ago. Indeed, automation has been steadily increasing in CGI tech for decades, such that AI is not a quantum leap even there, but just another rung on an already-climbing ladder (hence CGI budgets remain in the tens of millions, and ever will). All this tech actually increased productivity and jobs. So will specialized AI. But it will never do anything more impressive than it already does. And it certainly will never think or be conscious. Despite the hype, it can’t, for example, build web browsers on its own, or really, even at all.

Moreover, AI tools might not even be cheaper, even when they work. I added these two paragraphs because it came up in comments. AI tools are currently being sold way below cost to get into the market. That is unsustainable. When they get correctly priced, many of these things cost more than the people or labor-hours they were supposed to replace. For example, this developer demonstrated that even at the current unsustainable cut-prices (which will balloon as soon as tools get priced to break even, much less earn profit) a human is cheaper for many applications (and with all the additional productivity costs I already noted above, probably most applications). And even when an AI tool remains technically “cheaper” at its inevitable 4x to 100x price increase (so as to profit even marginally), it does not replace workers, but returns the market closer to status quo ante.

Consider two examples: audiobook production, and “art.” Before AI almost no one could afford either (hence constraining the market for both industries to comparatively small beans, producing just a few billion dollars in labor each). All AI really did was make those things affordable to people who could never have bought them to begin with. This is not eliminating jobs. Those jobs never existed. And when AI tools get cost-corrected, those people might be back to not affording those things. So even AI jobs could be a hallucination. This is already a reality at the artificially cut rates of some AI services. It will be catastrophically worse when those prices get adjusted. And this has already been noted in realistic investment advice (from Goldman Sachs to Deutsche Bank). But if you don’t understand the significance of what I’m telling you here, read this, and this, and this. The tech works in limited applications and will survive the crash of its industry. It just won’t be all that impressive or profitable. AI is being artificially made to “look” cheap by throwing away trillions of dollars of capital on that illusion. Which is going to end soon. Eventually it will just deflate into another slate of boring software.
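To see why the coming price correction matters, here is the back-of-the-envelope math (every number below is a hypothetical placeholder I invented for illustration, not an actual vendor price or wage):

```python
# Hypothetical numbers: what happens to "the AI tool is cheaper" once
# subsidized prices get corrected toward the 4x-100x range noted above.
subsidized_monthly_cost = 200   # assumed promotional price per seat
human_hourly_rate = 40          # assumed clerical wage
hours_replaced_per_month = 20   # assumed time the tool actually saves

human_cost = human_hourly_rate * hours_replaced_per_month  # $800/month

for markup in (1, 4, 10, 100):
    tool_cost = subsidized_monthly_cost * markup
    verdict = "tool cheaper" if tool_cost < human_cost else "human cheaper"
    print(f"{markup:>3}x -> ${tool_cost:>6}: {verdict}")
```

Under these assumptions, the tool wins only at today’s subsidized 1x price. Even the low end of the correction (4x) erases the advantage, and everything above that makes the human a bargain.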

Below I’ll get to why the current AI craze is actually stalling all progress toward real AI, and what we should be doing instead—but are burning trillions of dollars not doing instead, thus putting the over-hyped “singularity” off, not bringing it near (an actual singularity is bullshit and will never happen anyway, but the dawn of real droids will launch a less hyperbolic version of it). But what is called AI today is just a productivity tool that requires human labor to deploy and manage, just like every other productivity tool in history, and its impact will be the same. One of the best examples of this is, ironically, how an AI Media channel used AI to produce a decent explanation of why AI is garbage: “Anthropic’s CEO Predicted AI Would Write 90% of Code by Today. Here’s What Actually Happened.” by AI Presenter Julia McCoy. That company actually offers services to train in the effective use of AI (and that video is an AI production)—while admitting it is not what the hype at all pretends. That gives you slick documentation of the false claims made by AI promoters and why AI is a doomed bubble that cannot replace anyone, and why the inevitable market correction will leave AI as just a humble automation tool requiring the hiring of experts, not replacing them.

Oh. Did I mention doomed bubble?

Yes, AI Is Going to Ruin Your Life

Not because it will replace your job. But because the scam of it will destroy the economy and thus destroy your job (or your pension, or the jobs or pensions of your friends and family). Well, maybe not. But it’s all at risk. And a lot of innocent people are going to get crushed even if you dodge the bullet.

Because “The AI Bubble Is 17 Times the Size of the Dot-Com Frenzy — and Four Times the Subprime Bubble” (oh, and there is also a new subprime bubble—and it’s already collapsing, which will make all of this worse). Almost all the illusion of stock market and economic growth in the U.S. consists of doomed AI speculation (example, example, example). Vast wasted capital outlays are thus deceiving our metrics. The U.S. actually experienced effectively no economic growth this year—once you subtract all AI investment, as one should, because it will soon vanish into smoke as its value zeroes out when everyone realizes it mostly doesn’t do anything, and isn’t worth anything but a relative pittance. Literally a third of stock market indexes will vanish, which is worse than the crash of 1929. It may take decades to recover.

Yes. It is going to be pretty bad. The entire AI economy now is a technically illegal circularity scheme (for a quick explanation, watch Hank Green; for a longer treatment of this scandal, watch Patrick Boyle). And I’m not joking. To get up to speed, let these experts catch you up:

So it’s going to be bad. The only good news is that there are some differences between this bubble and others: the collapse might be slower, the rich are going to be hit harder this time than the poor, and there will be something left in the end to sell (data centers and AI tools will still continue and make money, just not the very impressive amount of it the grifters and rubes are claiming). It’s not “transformative” but just more “incremental” progress that has no ROI. Hence it’s doomed. The question is how leveraged banks and pensions are in AI and what effect their collapse will have on society.

For analysis of what dark clouds and grimy tin linings will result, see:

  • Will Lockett’s “Will The AI Bubble Destroy Musk’s Empire?” is focused just on Musk but illustrates the same story a lot of billionaires are in right now and thus what will happen to them as well. And before you cheer for them getting what they deserve, this won’t really hurt them (they will still be rich, just sans empires) while it will ruin millions of innocent middle class lives (as they suffer the downstream effects of these collapsing empires, just like in 2008).
  • Those downstream effects are explored by James Ball in “What Happens When the AI Bubble Bursts?”, which compares expectations with the dot-com bust. Forbes also ran different scenarios. The upshot is that the differences may soften the blow: mostly billionaires and the investor class will be wrecked, and downstream effects may be only similar to the dot-com crash, because although vastly more money is involved, it’s mostly private equity, not standard bank loans. So banks might weather it. And if they do, the cost will be in economic recession and consequent downsizing and job losses, and a drag on development (as capital and credit for building back will simply not be available for a few years).

But enough about the doom.

Do You Want Real AI? Dump the Snake Oil

The second lesson here is more big-picture: if you want real AI, actual sentient computers who actually think and understand and can actually reason, these trillions need to be diverted into a completely different research pathway. Abandon LLM. It can never and will never get there. I wrote about what we should be doing ten years ago (in Ten Years to the Robot Apocalypse). But the world went the other way. Consciousness derives from model-building.

  • It begins with building models (using a learning algorithm we know is largely crudely Bayesian and literally a neural net, as one should expect, because natural selection approaches the most efficient path to doing something).
  • Then it navigates those models (“in the imagination,” though it’s exactly the same machinery as builds the model we call “perception”).
  • By building and creatively navigating models of actual spaces to work out alternatives and answer questions (like “where did that mouse go”), true, actual thinking has begun. Animals use this to move around and acquire resources and avoid threats.
  • The next step is building imaginary spaces—not even mapping actual ones, but creatively building entirely novel ones, and navigating them to accelerate anticipatory (predictive) learning. Cats dream of imaginary mice in imaginary spaces to train at hunting, for example (and we know this because of experiments “turning back on” their muscle command system while they are dreaming, and we can watch them navigate these invisible models chasing invisible mice).
  • The next step is modeling not just spaces but systems, in particular causal systems. This allows a much farther extension of reasoning and learning.
  • This eventually makes possible modeling other minds, a particular kind of causal system. Various animals developed this ability, called “metacognition,” to model what someone else is thinking, so as to anticipate and react. More advanced metacognition adds the ability to model one’s own mind, and thus think about what you yourself are thinking.
  • The final step is to take a fully trained and developed metacognitive modeling system and turn it entirely onto oneself, thereby generating a complete, continuously-running self-model, which can be used to query, think, plan, and navigate your own intentions and mental resources to solve problems and more sentiently react to the environment.

This is a completely different pathway than LLM. The pathway of LLM is like trying to build a house by swimming. Getting better and better at swimming. Becoming an ace swimmer! And yet, frustratingly, no house appears. Because learning how to swim well gets you nowhere near the objective of building a house. In fact, it keeps you away from making any progress on that at all, because you’re spending all your time in water, and away from tools and materials—rather than on land where you should be and tinkering with tools and materials as you should be. The correct pathway is to start down the “tinkering with tools and materials” way. For true AI, that’s virtual-model building. You first need to invent a really good artificial horse (which we still haven’t been able to do despite a lot of trying). Then a really good artificial monkey. And then you’ll be ready to steer that into a thinking person.

So you need a machine that:

  • Masters building a model of its spatial environment (and the geometry and capabilities of its body) by interpreting data from sensors into correlative perception, and using that model to navigate that environment to accomplish tasks. This step has already begun, for example, in Waymo’s World Model.
  • Then masters creatively inventing new models, of imaginary environments, and using those models to navigate those environments to accomplish imagined tasks, and thus build a repertoire of skills applicable to new real environs.
  • Then masters building models of causal systems, and navigating them to solve problems. First, real systems. Then, creatively imagining new systems to also navigate and build skills again.
  • Then masters modeling its own causal system, to think about its own thinking and answer questions like when it is wrong about something or how to creatively stack tasks in chains to accomplish an end result.
  • Then masters modeling its entire own mind, so that it now navigates the furniture of its mind and relates all its models to itself and its intentions and plans and reasoning, and thus starts formulating a reliable narrative history and a stable but flexible hierarchy of desires, and can talk to itself about beliefs and degrees of belief, and grasp what it means.
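If you wanted to sketch that staged architecture in code, it would look something like the skeleton below. To be clear: every name and structure here is my own illustration; the real research program would fill each stub with a learned model, not a dictionary. The point is only to show how each stage builds on the models the previous stage produced, ending in a continuously updated self-model:

```python
# A hedged sketch of the staged model-building pathway described above.
# All names are illustrative stand-ins, not a real architecture.
from dataclasses import dataclass, field

@dataclass
class ModelingAgent:
    world_model: dict = field(default_factory=dict)    # stage 1: map real space
    imagined: list = field(default_factory=list)       # stage 2: invented spaces
    causal_models: dict = field(default_factory=dict)  # stage 3: systems
    self_model: dict = field(default_factory=dict)     # stages 4-5: its own mind

    def perceive(self, sensor_data):
        # Stage 1: fold sensor data into a spatial model of the environment.
        self.world_model.update(sensor_data)

    def imagine(self, scenario):
        # Stage 2: invent a novel space and rehearse in it (the dreaming cat).
        self.imagined.append(scenario)

    def model_system(self, name, dynamics):
        # Stage 3: learn a causal system as a callable "what happens if" rule.
        self.causal_models[name] = dynamics

    def introspect(self):
        # Stages 4-5: turn the modeling machinery onto the agent itself,
        # producing a queryable model of its own beliefs and skills.
        self.self_model["beliefs"] = dict(self.world_model)
        self.self_model["skills"] = len(self.imagined)
        return self.self_model

agent = ModelingAgent()
agent.perceive({"mouse": "behind couch"})
agent.imagine("chase imaginary mouse")
agent.model_system("gravity", lambda height: height - 1)
print(agent.introspect())
```

Note what an LLM lacks at every single stage: there is no persistent world model to update, no imagined space to rehearse in, no causal rule to consult, and nothing for `introspect` to turn inward on. It only has the next word.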

That model-building and model-navigating pathway is the only way to real AI. Which teaches us something about what self-consciousness is and how it was built the first time around—by natural selection, which found and followed exactly that same pathway, so we might want to get a clue from that. Why try some new way of getting there, when you’ve already seen how it’s done? This is what we fundamentally are: models and model builders, modelers and model navigators. Models of our world. Models of the causal systems that surround us. Models of other minds. Models of imaginary spaces and systems and minds. And all of them integrated computationally with a model of ourselves, as its own causal system of feelings, reasonings, and desires. And that is why we can think, and learn, and actually understand ourselves and the world. And why LLM-based AI can’t and never will (as I explained before in Why Google’s LaMDA Chatbot Isn’t Sentient and MIT now explains in a recent study).

As Yann LeCun correctly said after I originally published this article (which is so apposite I am now adding it):

We need what experts in the field call world models. Systems capable of understanding physics, maintaining persistent memory, and planning complex actions, and not simply predicting the next word in a sentence.

Indeed. And more than that, we need self-models, integrated with those world-models, and the “physics” and “memory” and “planning” has to include the agent itself, its own physics (a thinking person is its own causal system), its own memory (which means narrative—with recollectable experience; not merely logged), and its own planning: it needs to have, revise, consult, and react to its own set of goals and plans, both short term—pragmatic action—and long term—which scientists call “values.” I later found the same point made by Yossi Kreinin before me and LeCun. And now Fei-Fei Li is getting on board. And Ankit Maloo. And Sergey Klevzov. See “Why Transformers Are Wrong for AGI and Why Scaling Them Higher Makes No Sense” for the big picture.

So until we spend money on that research pathway, we will never get anywhere near real AI. And in the meantime, the trillions already spent on fake AI are going to evaporate, causing global misery. And it might be decades before we raise that stash back to actually spend it on the real thing. But rich people are stupid. So I doubt they will ever spend it on the real thing. They’ll throw it all into the next bullshit snake-oil that ruins the world, and get their government bailout, and blame it all on immigrants. But alas. Welcome to capitalism: the permanent failure-mode of any modern society.

Conclusion

Stop relying on “AI.” No such thing exists. It’s a scam. It’s just fancy auto-complete. And thus is just regurgitating the internet, and poorly. Use it only as a dodgy tool you can never fully trust, or as just another minor productivity lever when its results don’t have to be reliable. And then start planning for when this scam crashes the stock market.

Think for yourself. Do your own competent research. Use AI like Wikipedia: a way to get into the ballpark of some leads to follow up, and not as an authority you can trust by itself (see my followup instructional: How to Use Pseudo-AI). If you side-eye Wikipedia, you definitely should be side-eying “AI.” Wikipedia has a far lower error and hallucination rate, and on most entries, a higher quality expert construction and sourcing. And Wikipedia is shitty compared to fully expert sources. And yet, indeed, most of what “AI” does is just reword Wikipedia at you, thus magnifying even its errors and inaccuracy. It’s garbage. Stop using it for anything more than dodgy web searching, or as a fancy photoshop assistant, or whatever dumb thing. But don’t act like it knows anything.

And then…

Build what contingencies you can to survive a mass worldwide economic crash. It could happen as soon as tomorrow. But definitely within the next year or two. That’s when you will discover your bank blew all your money and pension on worthless AI stock, and when lending will close shop for a year or more for want of capital and fear of default, so no one will be able to buy a car or house, and credit will be expensive and tight, and businesses won’t be able to start or grow or survive by borrowing, and when the government doesn’t bill the rich for fucking us over but gives them a massive bailout while cutting services to everyone else, and hospitals close and roads crumble, fields burn, and crime (white collar and blue) runs unchecked for want of any way to adequately fund policing it—and then buying screeches to a halt, tanking companies, and thereby, alas, nuking jobs.

Be ready. It is not a question of whether this will happen. It literally is just a question of when. And it’s going to be soon. As the analysts cited above explain, the bill comes due by the end of 2026 or 2027. But someone might Tuld it before then.

§

All comments go to moderation except for Patrons etc. See Comments & Moderation Policy.
