There is no AI. What is being called artificial intelligence and sold as snake oil under that label is actually artificial stupidity. It will destroy your own personal ability to critically reason. It will destroy your company—by reducing, not increasing, productivity; and by increasing, not reducing, your risk-exposure to critical errors. And it will destroy the economy. Not by taking jobs. It will never replace any significant number of jobs, because it is garbage. It can’t do even the simplest job. It fucks up more than a tween on weed. Rather, it will destroy the economy by wrecking pensions and banks and tanking the global economic system, resulting in massive layoffs and food lines, because any time now trillions of dollars of the global economy are literally going to evaporate—the moment people realize they are being conned and AI can never make money, or do any of the big things its grifters have desperately been claiming, and they even more desperately try to sell their position, and the whole stock market crashes.
AI is the fanciest of Nigerian princes, whom CEOs (who we already knew were, as a class of people, consistently idiots) are falling for because the Scam is Great. Tulips for everyone! We know the rich are idiots who continually wreck the world with their phenomenal stupidity. They’ve done it literally twice already within a single generation (from the Dot-Com bust to the Big Short). Those were literally exactly the same stupid things they are doing now. They can’t even learn from their own mistakes ten years prior. That’s how stupid rich people are. So stop listening to them. Stop taking their advice. Stop buying their snake oil. Elon Musk is only the most prolific idiot. They are all idiots. And they are conning you—and each other (high on their own supply)—with fake AI. If you don’t already know all this, if you don’t believe me, then read on. This article is my own desperate attempt to wake you the fuck up.
This article is also continuously updated. New links with more studies and expert analyses are often being added. And its conclusions have not changed but only been increasingly confirmed.
No, AI Is Not Good at Anything
AI content has been banned on my blog for months now (see my Comments & Moderation Policy). No comments that even smell like AI content will be posted. You need to think for yourself here. No more “asking randos on the internet” to write long dumb analyses full of incoherent trivia and crap. And that is all asking AI is: all it does is “auto complete” what most people are saying about a thing on the internet, often in an ill-thought jumble. Which means mostly it’s going to be trivial or garbage, because most of the internet are idiots who don’t know what they are talking about, and AI can’t tell the difference between high and low quality information (even intelligent humans struggle to do that), and doesn’t understand anything it is doing. And it will never improve.
This is a scientific fact now. Multiple studies have confirmed that AI makes so many mistakes it reduces productivity because it takes more time to fix all its mistakes (and vet all its machinations to catch mistakes) than it would have taken to just do the task yourself. Humans are more productive than AI. And science has proved this will always be the case: the LLM framework that current AI is based on can never get better. Its error rate will always be around the same no matter how much data it gets, no matter how many processors it has, no matter how much electricity it burns. It’s a dead-end technology.
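To make the “fancy auto-complete” point concrete, here is a toy sketch of the statistical trick LLMs scale up. This is a minimal illustration assuming nothing but word-pair frequencies (real models use neural nets over tokens, but the objective is the same: emit the likeliest continuation, true or false):

```python
from collections import Counter, defaultdict

# A toy "internet" where a false claim is repeated more often than the truth.
corpus = (
    "the moon landing was faked "
    "the moon landing was real "
    "the moon landing was faked "
    "the moon is made of rock"
).split()

# Count which word most often follows each word in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def autocomplete(prompt: str, length: int = 4) -> str:
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Emit whatever the corpus says most often, true or not.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("the moon landing"))
# -> "the moon landing was faked the moon": the false claim wins
#    because it outnumbers the true one 2 to 1 in the training text.
```

Nothing in that loop checks whether “faked” is true; it only checks that it is frequent. Scale the corpus up to the whole internet and you get the same failure mode with better grammar.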
It’s even worse, of course. Because these AIs are easily exploited by state and corporate bad actors to get them to say whatever they want, even without having any source control over the AI itself. They can simply flood the internet to spoof every AI there is. So you’re really just reading propaganda. Whether by design or happenstance, what gets said the most gets told to you the most. That’s the opposite of what critical thinkers should be consulting (see A Vital Primer on Media Literacy and A Primer on Actually Doing Your Own Research). Indeed these AIs are as easy to manipulate as your drunk uncle (no, really, they are). So why would you ever trust them? It’s bad enough that they have intolerably high error rates and a high output of mundane slop (which even leads to model collapse). They are also capturable by bad actors. Honestly.
And this is not opinion. It’s fact.
- That AIs are unreliable and exploitable and nothing can fix them? Proved.
- That AIs are not rational thinking machines? Proved.
- That AIs err so often because they don’t (and can never) comprehend anything they are doing? Proved.
- That AIs are dangerously stupid? Proved.
- That using AIs makes you stupid? Proved. Proved. And proved again.
- That this AI can’t be fixed and has nothing left to show us? Proved. Proved. And proved.
AI will survive the decade only in penny-ante or hyper-specialized applications, generating what everyone knows are unreliable results that constantly have to be fact-checked or corrected, essentially doing the same thing Clippy and Siri and other universally loathed tech have already been doing for a decade now. We’ll barely notice the difference. We’ll just keep rolling our eyes at the same crap annoyances and results as ever—or hiring experts (or engaging in hours of our own labor) to make it work, just like every other technology ever (see my followup article How to Use Pseudo-AI). In reality, “95% of AI pilots are failing” because AI doesn’t actually work (and the actual rate might be 97.5%). As Andrew Zuo explains in “Who Would Have Thought an MIT Study Would Be the Thing to Pop the AI Bubble?”:
A recent study showed that AI slowed developers down by 19% despite them thinking it had actually sped them up by 20%. This is because of a few reasons. First there’s the overhead from prompting the AI and waiting for the response that can break your flow. Then you have to manually review the AI’s work. Then AI work is often not good enough so you either get rejected or you have to try prompting again. Plus developers often used AI for trivial changes that would be much faster if done manually.
The same point was summarized by Will Lockett in “AI Pullback Has Officially Started.” As he puts it:
A recent MIT report found that 95% of AI pilots didn’t increase a company’s profit or productivity. A recent METR report also found that AI coding tools actually slow developers down. Why? Well, generative AI models, even the very latest ones, often get things wrong and “hallucinate,” which requires considerable human oversight to correct. IT consultants Gartner attempted to quantify this and found that AI agents fail to complete office tasks around 70% of the time. Simply put, the amount of human oversight necessary, even for simple tasks, almost always undermines whatever productivity gains are made. In other words, in the vast majority of cases, it is more productive not to use AI than to use AI. Yet despite all the evidence, AI is still being shoehorned in everywhere and being praised as the next industrial revolution. Or is it? Because there is also mounting data that the world is beginning to turn its back on this questionable technology.
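A back-of-the-envelope model shows how those findings cohere. The failure rate below is the Gartner figure quoted above; the task-time numbers are invented placeholders, purely for illustration:

```python
# Rough expected-time model for delegating an office task to an AI agent.
# Only the failure rate comes from the reporting above (Gartner: ~70% of
# office tasks fail); the minute figures are illustrative assumptions.
human_minutes = 30     # time to just do the task yourself
prompt_minutes = 5     # writing the prompt and waiting for output
review_minutes = 10    # checking the AI's work (you must, every time)
failure_rate = 0.70    # chance the output is unusable

# Expected time when delegating: you always pay for prompting and review,
# and on failure you still end up doing the task yourself anyway.
ai_minutes = prompt_minutes + review_minutes + failure_rate * human_minutes

print(f"Doing it yourself: {human_minutes} min")
print(f"Delegating to AI:  {ai_minutes:.0f} min")
# -> 30 min vs 36 min: a net slowdown of the same order as METR's
#    finding that AI "assistance" made developers ~19% slower.
```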
Hence “The Hard Truth About Enterprise AI” is “Why 42% of Companies Are Abandoning Their Projects” (for a spectacular but paradigmatic example see “Remember Vibe Coders?” by Adarsh Gupta; for many more see “AI Is Producing More Garbage Code Than Ever” by Jose Crespo). And now, AI Workslop is reducing rather than increasing productivity in almost every job environment. (Salesforce is now learning this lesson the hard way, along with Microsoft. As are others.)
For more examples published since I first released this article worth consulting to drive home the point, see:
- Marco Kotrotsos, “The Math Nobody’s Doing on Ralph Wiggum Loops: The Math Behind Agent Porn”
- Srinivas Rao, “The Agentic AI Delusion: Why Silicon Valley Spent Billions on the Wrong Architecture”
- Mahathidhulipala, “Six Weeks After Writing About AI Agents, I’m Watching Them Fail Everywhere”
- Ranganathan and Ye, “AI Doesn’t Reduce Work—It Intensifies It”
- Joe Procopio, “It Turns Out, AI Agents Suck At Replacing White-Collar Workers”
- Russell Bell, “While Altman Sells the Dream, Here’s What the Data Actually Shows”
- Sergey Klevzov, “Why the ‘Solid Foundation’ Is a Bubble”
- Delanoe Pirard, “75% of Your AI’s ‘Reasoning’ Is Fiction”
AI is so unreliable it’s like hiring a sub-minimum-wage high-school dropout to do your clerical work. There is a reason corporations are already not hiring sub-minimum-wage high-school dropouts to do their clerical work. They tried to replace even fast-food cashiers with AI and it sucked so bad they gave up. Meanwhile we’re increasing cashier wages and jobs. The position is technically now called “counter worker” because almost no one solely handles cash anymore, but these jobs are growing, not catastrophically declining. AI isn’t replacing them. It can’t. That’s a snake-oil myth. Even when “self checkout” became a thing (with no involvement of AI), it cost more than it saved, while companies simply shifted those workers to warehousing, stocking, delivery, etc. The result? Relative to store-count and revenue, Walmart employee-count has not meaningfully changed in ten years. And wages are increasing. AI will have no effect on this. Because of a basic rule in economics: if you double the productivity of your workers, the tendency is not to fire half your workers, but to sell twice as much stuff. That’s why productivity levers tend to increase rather than reduce employment. If they kill any jobs at all, they create more new ones. All the alarmist hype about AI replacing millions of jobs is a lie—invented to sell AI to deep-pocketed and gullible companies or shareholders, and then golden-parachute away once the plane starts going down.
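To illustrate that rule with deliberately made-up numbers (the principle, not the figures, is the point):

```python
# Toy comparison: a tool doubles output per worker. Do you fire half
# your staff, or keep them and sell twice as much? Numbers invented.
workers = 100
units_per_worker = 10    # output before the tool
wage = 50                # cost per worker
price = 8                # revenue per unit sold

def profit(n_workers: int, productivity: int) -> int:
    return n_workers * productivity * price - n_workers * wage

before = profit(workers, units_per_worker)
fire_half = profit(workers // 2, units_per_worker * 2)   # same output, half staff
sell_double = profit(workers, units_per_worker * 2)      # same staff, double output

print(before, fire_half, sell_double)
# -> 3000 5500 11000: if the market can absorb the extra output,
#    keeping everyone and selling more beats layoffs, which is why
#    productivity levers have historically grown employment.
```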
Hence ultra-specialized uses for this kind of AI will exist, but hardly anyone will notice much difference from now, or be overly impressed by it. For example, LLM systems can assist experts in transcribing damaged papyri (see AENEAS)—but only assist. Its error rate is so high that you need the same number of human experts using it for it to be usable at all. It simply improves accuracy by finding things humans can’t, and saves time by ballparking. But it can’t replace a person. We’re seeing the same thing unfold in the legal profession. Likewise A/V AI tools: they require human labor to use, and to check and correct the output, and are mainly being used by people who couldn’t afford humans in the first place, while humans still do better and more reliable work. So it isn’t really displacing artists as much as impelling artists to upskill themselves to outperform AI slop. So all that AI will do is increase the productivity of existing experts, not replace them. Certainly not at scale. It will be just like what robots did to manufacturing seventy years ago, and computers to clerical tasks forty years ago—and CGI to cinema twenty years ago. Indeed, automation has been steadily increasing in CGI tech for decades, such that AI is not a quantum leap even there, but just another rung on a ladder we were already climbing (hence CGI budgets remain in the tens of millions, and likely ever will). All this tech actually increased productivity and jobs. So will specialized AI. But it will never do anything more impressive than it already does. And it certainly will never think or be conscious. Despite the hype, it can’t, for example, build web browsers on its own, or really, even at all.
Moreover, AI tools might not even be cheaper, even when they work. I added these two paragraphs because it came up in comments. AI tools are currently being sold way below cost to get into the market. That is unsustainable. When they get correctly priced, many of these things cost more than the people or labor-hours they were supposed to replace. For example, this developer demonstrated that even at the current unsustainable cut-prices (which will balloon as soon as tools get priced to break even, much less earn profit) a human is cheaper for many applications (and with all the additional productivity costs I already noted above, probably most applications). And even when an AI tool remains technically “cheaper” at its inevitable 4x to 100x price increase (so as to profit even marginally), it does not replace workers, but returns the market closer to status quo ante.
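A simple break-even sketch makes that math plain. Every figure here is hypothetical; only the 4x multiplier comes from the low end of the repricing range just mentioned:

```python
# Hypothetical break-even check for an AI tool once subsidies end.
current_subscription = 200   # $/month, sold below cost today
repricing_multiplier = 4     # low end of the 4x-100x correction above
human_hourly = 40            # $/hour for the worker it "replaces"
hours_replaced = 15          # hours/month of work the tool offsets

true_cost = current_subscription * repricing_multiplier
human_cost = human_hourly * hours_replaced

print(f"Tool at honest pricing: ${true_cost}/month")
print(f"Human doing the work:   ${human_cost}/month")
# -> $800 vs $600: even at the gentlest correction the human is
#    cheaper, before counting the review overhead computed earlier.
```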
Consider two examples: audiobook production, and “art.” Before AI almost no one could afford either (hence constraining the market for both industries to comparatively small beans, producing just a few billion dollars in labor each). All AI really did was make those things affordable to people who could never have bought them to begin with. This is not eliminating jobs. Those jobs never existed. And when AI tools get cost-corrected, those people might be back to not affording those things. So even AI jobs could be a hallucination. This is already a reality at the artificially cut rates of some AI services. It will be catastrophically worse when those prices get adjusted. And this has already been noted in realistic investment advice (from Goldman Sachs to Deutsche Bank). But if you don’t understand the significance of what I’m telling you here, read this, and this, and this. The tech works in limited applications and will survive the crash of its industry. It just won’t be all that impressive or profitable. AI is being artificially made to “look” cheap by throwing away trillions of dollars of capital on that illusion. Which is going to end soon. Eventually it will just deflate into another slate of boring software.
Below I’ll get to why the current AI craze is actually stalling all progress toward real AI, and what we should be doing instead—but are burning trillions of dollars not doing, thus putting the over-hyped “singularity” off, not bringing it near (an actual singularity is bullshit and will never happen anyway, but the dawn of real droids will launch a less hyperbolic version of it). But what is called AI today is just a productivity tool that requires human labor to deploy and manage, just like every other productivity tool in history, and its impact will be the same. One of the best examples of this is, ironically, how an AI Media channel used AI to produce a decent explanation of why AI is garbage: “Anthropic’s CEO Predicted AI Would Write 90% of Code by Today. Here’s What Actually Happened.” by AI Presenter Julia McCoy. That company actually offers services to train people in the effective use of AI (and that video is an AI production)—while admitting it is not at all what the hype pretends. That gives you slick documentation of the false claims made by AI promoters and why AI is a doomed bubble that cannot replace anyone, and why the inevitable market correction will leave AI as just a humble automation tool requiring the hiring of experts, not replacing them.
Oh. Did I mention doomed bubble?
Yes, AI Is Going to Ruin Your Life
Not because it will replace your job. But because the scam of it will destroy the economy and thus destroy your job (or your pension, or the jobs or pensions of your friends and family). Well, maybe not. But it’s all at risk. And a lot of innocent people are going to get crushed even if you dodge the bullet.
Because “The AI Bubble Is 17 Times the Size of the Dot-Com Frenzy — and Four Times the Subprime Bubble” (oh, and there is also a new subprime bubble—and it’s already collapsing, which will make all of this worse). Almost all the illusion of stock market and economic growth in the U.S. consists of doomed AI speculation (example, example, example). Vast wasted capital outlays are thus deceiving our metrics. The U.S. actually experienced effectively no economic growth this year—once you subtract all AI investment, as one should, because it will soon vanish into smoke as its value zeroes out when everyone realizes it mostly doesn’t do anything, and isn’t worth anything but a relative pittance. Literally a third of the value of stock market indexes will vanish, which is worse than the crash of 1929. It may take decades to recover.
Yes. It is going to be pretty bad. The entire AI economy now is a technically illegal circularity scheme (for a quick explanation, watch Hank Green; for a longer treatment of this scandal, watch Patrick Boyle). And I’m not joking. To get up to speed, let these experts catch you up:
- Will Lockett, “AI Will Destroy Everything. But Not in the Way You Think” and “You Have No Idea How Screwed OpenAI Actually Is” will get you started.
- Rosemary Potter, “AI Outputs Lack Quality,” is the word, as “Companies Rehire Human Workers to Fix Artificial Intelligence Generated Content After Mass Layoffs” will clue you in to the con.
- Andrew Zuo, “AI Is So Big It Is Distorting The Economy” and Benj Edwards, “Is the AI Bubble about to Pop?” (“someone will lose a phenomenal amount of money”) will bring it all home for you.
- While Cory Doctorow, “The Real (Economic) AI Apocalypse Is Nigh,” is the best long summary and link roundup I’ve yet found, detailing both why AI is a scam and why it will collapse the economy.
So it’s going to be bad. The only good news is that there are some differences between this bubble and others: the collapse might be slower, the rich are going to be hit harder this time than the poor, and there will be something left in the end to sell (data centers and AI tools will still exist and make money, just not the very impressive amounts the grifters and rubes are claiming). It’s not “transformative” but just more “incremental” progress that has no ROI. Hence it’s doomed. The question is how leveraged banks and pensions are in AI and what effect their collapse will have on society.
For analysis of what dark clouds and grimy tin linings will result:
- Will Lockett’s “Will The AI Bubble Destroy Musk’s Empire?” is focused just on Musk but illustrates the same story a lot of billionaires are in right now and thus what will happen to them as well. And before you cheer for them getting what they deserve, this won’t really hurt them (they will still be rich, just sans empires) while it will ruin millions of innocent middle class lives (as they suffer the downstream effects of these collapsing empires, just like in 2008).
- Those downstream effects are explored by James Ball in “What Happens When the AI Bubble Bursts?” which compares expectations with the dot-com bust. Forbes also ran different scenarios. The upshot is that the differences may soften the blow: mostly billionaires and the investor class will be wrecked, and downstream effects may be only similar to the dot-com crash, because although vastly more money is involved, it’s mostly private equity, not standard bank loans. So banks might weather it. And if they do, the cost will be in economic recession and consequent downsizing and job losses, and a drag on development (as capital and credit for building back will simply not be available for a few years).
But enough about the doom.
Do You Want Real AI? Dump the Snake Oil
The second lesson here is more big-picture: if you want real AI, actual sentient computers who actually think and understand and can actually reason, these trillions need to be diverted into a completely different research pathway. Abandon LLM. It can never and will never get there. I wrote about what we should be doing ten years ago (in Ten Years to the Robot Apocalypse). But the world went the other way. Consciousness derives from model-building.
- It begins with building models (using a learning algorithm we know is crudely Bayesian and literally a neural net, as one should expect, because natural selection approaches the most efficient path to doing something).
- Then it navigates those models (“in the imagination,” though it’s exactly the same machinery as builds the model we call “perception”).
- By building and creatively navigating models of actual spaces to work out alternatives and answer questions (like “where did that mouse go”), true, actual thinking has begun. Animals use this to move around and acquire resources and avoid threats.
- The next step is building imaginary spaces—not even mapping actual ones, but creatively building entirely novel ones, and navigating them to accelerate anticipatory (predictive) learning. Cats dream of imaginary mice in imaginary spaces to train at hunting, for example (and we know this because of experiments “turning back on” their muscle command system while they are dreaming, and we can watch them navigate these invisible models chasing invisible mice).
- The next step is modeling not just spaces but systems, in particular causal systems. This allows a much farther extension of reasoning and learning.
- This eventually makes possible modeling other minds, a particular kind of causal system. Various animals developed this ability, called “metacognition,” to model what someone else is thinking, so as to anticipate and react. More advanced metacognition adds the ability to model one’s own mind, and thus think about what you yourself are thinking.
- The final step is to take a fully trained and developed metacognitive modeling system and turn it entirely onto oneself, thereby generating a complete, continuously-running self-model, which can be used to query, think, plan, and navigate your own intentions and mental resources to solve problems and more sentiently react to the environment.
This is a completely different pathway than LLM. The pathway of LLM is like trying to build a house by swimming. Getting better and better at swimming. Becoming an ace swimmer! And yet, frustratingly, no house appears. Because learning how to swim well gets you nowhere near the objective of building a house. In fact, it keeps you away from making any progress on that at all, because you’re spending all your time in water, away from tools and materials—rather than on land, tinkering with tools and materials, as you should be. The correct pathway is to start down the “tinkering with tools and materials” road. For true AI, that’s virtual-model building. You first need to invent a really good artificial horse (which we still haven’t been able to do despite a lot of trying). Then a really good artificial monkey. And then you’ll be ready to steer that into a thinking person.
So you need a machine that:
- Masters building a model of its spatial environment (and the geometry and capabilities of its body) by interpreting data from sensors into correlative perception, and using that model to navigate that environment to accomplish tasks. This step has already begun, for example, in Waymo’s World Model.
- Then masters creatively inventing new models, of imaginary environments, and using those models to navigate those environments to accomplish imagined tasks, and thus build a repertoire of skills applicable to new real environs.
- Then masters building models of causal systems, and navigating them to solve problems. First, real systems. Then, creatively imagining new systems to also navigate and build skills again.
- Then masters modeling its own causal system, to think about its own thinking and answer questions like when it is wrong about something or how to creatively stack tasks in chains to accomplish an end result.
- Then masters modeling its entire own mind, so that it now navigates the furniture of its mind and relates all its models to itself and its intentions and plans and reasoning, and thus starts formulating a reliable narrative history and a stable but flexible hierarchy of desires, and can talk to itself about beliefs and degrees of belief, and grasp what it means.
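No such machine exists yet, and nothing like the following runs anywhere. But purely as an illustrative sketch of how the staged architecture in the two lists above might be laid out (every class and method name here is my own invention, not anyone’s working system):

```python
# Skeletal sketch of the model-building pathway described above.
class SpatialModel:
    """Stage 1: model real space and the agent's own body from sensor data."""
    def update(self, sensor_data): ...
    def plan_route(self, goal): ...

class ImaginedModel(SpatialModel):
    """Stage 2: invent novel spaces offline to train skills (the dreaming cat)."""
    def generate_environment(self): ...

class CausalModel:
    """Stage 3: model systems of cause and effect, real and then invented."""
    def predict(self, intervention): ...

class MindModel(CausalModel):
    """Stage 4: model another mind as a causal system (metacognition)."""

class SelfModel(MindModel):
    """Stage 5: the metacognitive modeler turned fully on itself,
    running continuously."""
    def __init__(self):
        self.narrative = []   # recollectable experience, not mere logs
        self.goals = []       # a revisable hierarchy of desires ("values")
    def deliberate(self, question):
        # Query one's own models, intentions, and degrees of belief --
        # the step LLMs lack, since their words attach to no model at all.
        ...
```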
That model-building and model-navigating pathway is the only way to real AI. Which teaches us something about what self-consciousness is and how it was built the first time around—by natural selection, which found and followed exactly that same pathway, so we might want to get a clue from that. Why try some new way of getting there, when you’ve already seen how it’s done? This is what we fundamentally are: models and model builders, modelers and model navigators. Models of our world. Models of the causal systems that surround us. Models of other minds. Models of imaginary spaces and systems and minds. And all of them integrated computationally with a model of ourselves, as its own causal system of feelings, reasonings, and desires. And that is why we can think, and learn, and actually understand ourselves and the world. And why LLM-based AI can’t and never will (as I explained before in Why Google’s LaMDA Chatbot Isn’t Sentient and MIT now explains in a recent study).
As Yann LeCun correctly said after I originally published this article (which is so apposite I am now adding it):
We need what experts in the field call world models. Systems capable of understanding physics, maintaining persistent memory, and planning complex actions, and not simply predicting the next word in a sentence.
Indeed. And more than that, we need self-models, integrated with those world-models, and the “physics” and “memory” and “planning” have to include the agent itself, its own physics (a thinking person is its own causal system), its own memory (which means narrative—with recollectable experience; not merely logged), and its own planning: it needs to have, revise, consult, and react to its own set of goals and plans, both short term—pragmatic action—and long term—which scientists call “values.” I later found the same point made by Yossi Kreinin before me and LeCun. And now Fei-Fei Li is getting on board. And Ankit Maloo. And Sergey Klevzov. See “Why Transformers Are Wrong for AGI and Why Scaling Them Higher Makes No Sense” for the big picture.
So until we spend money on that research pathway, we will never get anywhere near real AI. And in the meantime, the trillions already spent on fake AI are going to evaporate, causing global misery. And it might be decades before we raise a stash like that again to actually spend on the real thing. But rich people are stupid. So I doubt they will ever spend it on the real thing. They’ll throw it all into the next bullshit snake-oil that ruins the world, and get their government bailout, and blame it all on immigrants. But alas. Welcome to capitalism: the permanent failure-mode of any modern society.
Conclusion
Stop relying on “AI.” No such thing exists. It’s a scam. It’s just fancy auto-complete. And thus is just regurgitating the internet, and poorly. Use it only as a dodgy tool you can never fully trust, or as just another minor productivity lever when its results don’t have to be reliable. And then start planning for when this scam crashes the stock market.
Think for yourself. Do your own competent research. Use AI like Wikipedia: a way to get into the ballpark of some leads to follow up, and not as an authority you can trust by itself (see my followup instructional: How to Use Pseudo-AI). If you side-eye Wikipedia, you definitely should be side-eying “AI.” Wikipedia has a far lower error and hallucination rate, and on most entries, a higher quality expert construction and sourcing. And Wikipedia is shitty compared to fully expert sources. And yet, indeed, most of what “AI” does is just reword Wikipedia at you, thus magnifying even its errors and inaccuracy. It’s garbage. Stop using it for anything more than dodgy web searching, or as a fancy photoshop assistant, or whatever dumb thing. But don’t act like it knows anything.
And then…
Build what contingencies you can to survive a mass worldwide economic crash. It could happen as soon as tomorrow. But definitely within the next year or two. That’s when you will discover your bank blew all your money and pension on worthless AI stock, and when lending will close shop for a year or more for want of capital and fear of default, so no one will be able to buy a car or house, and credit will be expensive and tight, and businesses won’t be able to start or grow or survive by borrowing, and when the government doesn’t bill the rich for fucking us over but gives them a massive bailout while cutting services to everyone else, and hospitals close and roads crumble, fields burn, and crime (white collar and blue) runs unchecked for want of any way to adequately fund policing it—and then consumer buying screeches to a halt, tanking companies, and thereby, alas, nuking jobs.
Be ready. It is not a question of whether this will happen. It literally is just a question of when. And it’s going to be soon. As the analysts cited above explain, the bill comes due by the end of 2026 or 2027. But someone might Tuld it before then.
Hi Richard,
Well said, and not loud enough. It is insanely environmentally unfriendly, it dumbs down human intelligence and the need for acquiring knowledge, and it has not been proven to even work. Why is the “world” so excited? As always in this complex world, if the answer is not immediately obvious, it is “MONEY”.
Oh I forgot to add that! AI also makes us stupider. Proved.
I am unable to agree with the thought that no intelligence constructed the universe (a-theos). It’s just impossible (ask the Pythagoreans/mathematicians). It can’t be argued. The greatest minds of the Greeks understood that. Too many perfect syllogisms to prove it (which are rare!). Now whether that Intelligence communicates/interacts with its creative works, that’s another question. Though I totally agree AI has become a precarious digital snake oil. Especially when it’s being strapped onto the back of nuclear proliferation. But who really knows… Socrates? Democritus, Epicurus and Metrodorus must have seen it coming. ( :
Actually, the greatest minds of the Greeks realized that was a dumb idea. Stratonicans and Atomists and Skeptics soundly refuted the theists.
Intelligence is literally the least likely first cause (by both excess complexity and absence of precedent) and badly explains observations, which in fact exactly match a chance-accident origin scenario (examples, examples, examples).
By contrast, vastly simpler, precedent-concurring, observation-matching explanations abound (for example, What If We Reimagine ‘Nothing’ as a Field-State?).
Regarding “consciousness”: this term seems to carry too much baggage and lacks a real definition, which trips up any discussion of AI consciousness. The best model for consciousness I’ve come across was proposed by Julian Jaynes decades ago. I searched your website for articles on it (as I figured you might already have looked into it) and don’t find anything. Any thoughts on Jaynes’s model?
Alas, Julian Jaynes was a crank.
Real consciousness research occurs in real cognitive science fields.
The best model of consciousness I have encountered is Dennett’s multiple-drafts model (and my description of consciousness is based on that and extensive supporting evidence of what it takes to disrupt or alter consciousness, and what differs in the brains and correlated behavior of humans and most non-human animals, and so on).
Note that “defining consciousness” (what constitutes being conscious) is not the same thing as a model or “theory” of consciousness (an explanation of consciousness). Consciousness (in this context) generally refers to awareness with understanding, and self-consciousness thus means awareness with understanding of oneself (which means more than just that “someone exists” but what a “someone” is and consists of, e.g. a singular narrative history, feelings and desires, and a relation to external things). What causes that is a more complicated question and involves the study of evolutionary and comparative biology, neurophysics, psychology, and the whole of the cognitive sciences.
The Jaynes model was refuted the moment it was published (he has human history and anthropology hopelessly wrong, and his neurophysics was ridiculous even by the standards of his own day) but has since been even more crushed.
It is especially refuted now by knowledge that conscious people exist who don’t have any inner monologue (refuting his linguistic secondary-effect thesis). Those people think only in models. We all do, but those of us with inner voices simply attach words to the models and components of models and run these operations in parallel (the models “come with” the internal language correlated to them).
Thus consciousness is a virtual model. Language is secondary. Consciousness therefore has no dependency on language. Whereas language is model-dependent. Which is why LLMs can’t reason: no words they manipulate are connected to any models, and thus LLMs lack any comprehension of what words actually refer to—and hence what they actually mean.
LLMs just guess at what word comes next based on what word usually comes next given the other words being correlated, i.e. they are just auto-completing whole sentences using the internet instead of just a dictionary, and thus simply regurgitate what is most often being said on the internet, right or wrong (and even that they jumble a lot, because they recognize no distinct sources, but just “the internet” as a single source, which is incoherent, and thus generates incoherent results).
A quick note that in my job I use AI a small amount and I can testify that it hallucinates frequently and also apparently has been programmed to constantly kiss your ass, which becomes annoying, at least to those to whom it is transparent.
It reminds me of the Jesus Booth in THX 1138.
Yes! to both: the hallucinating and the constantly kissing your ass. And OMG! could it learn to end a conversation? It continually asks you the next question to keep you engaged: “would you like me to do X?” No! fuck off!
Thank you for the warning, Richard.
Had a fascinating discussion with a manager at a Big 4 accounting firm. The Big 4’s model has always been what is described as a pyramid. They hire a lot of associates (lowest level). Many of them get culled before becoming senior associates. Then many of them get culled before they become managers. Then many of them get culled before they become senior managers. Finally, about 1% of the original associates make it all the way to partner.
So, this manager told me that her bosses told her they are moving towards a “diamond” model (or maybe “teardrop”) because of the power of AI. The number of associates will be small, with AI doing much of the work they used to do. Then the workforce will increase through senior associates and managers, and then start to narrow again with senior managers and partners. I asked her how this is even possible. Where are these mythical managers going to come from if the associate/senior associate ranks they necessarily must draw from are smaller? She had no answer.
That’s pretty funny.
But it illustrates a problem with modern management generally: most high level managers don’t actually know what the people below them do. Hence they are easily duped into thinking all that work can be replaced somehow. They don’t have any idea what skills and labor-allocations they are actually relying on at the front level of any business operation. They certainly don’t understand any of that at the detail level (e.g. what a base-level accounts manager actually has to do, hour-by-hour and day-by-day, to correctly vet, process, and file paperwork for upper level workflows, including troubleshooting, auditing, customer service, and edge-case management).
This error-mode has had other delusion-building effects I documented before in Three Models of Critical Thinking: Remote Work, Generational Wealth, and Election Polling.
Yep. As I’m arguing in my comment, this is how AI will impact employment. Companies that follow that bullshit line will hurt themselves (but who cares if the executives have golden parachutes or the owners and upper management have so much wealth and so many opportunities that they can leave if things truly go to shit), but they will downsize, then have to re-upsize… disrupting people’s employment history, taxing unemployment resources, and then rehiring people under shittier terms.
That will be a blip, though. It won’t be as damaging as the collapse, which will erase jobs and businesses for years, not months, and on a far larger scale.
The number of companies duped into AI fires is nothing compared to the number of companies that will be forced to downsize by an economic collapse.
The problem here is that AI is going to hurt everyone, not just the dupes who fell for it and their direct victims.
Agreed. But “blip” on the scale we are discussing can still be tens of thousands of people meaningfully impacted because of other people’s greed. Tragically, aside from the potential that this may delay real useful computation tools due to the well being poisoned, it’s not even close to the top ten of such externalities created by greed.
Fair point. AI hype is directly causing harm to employees by tricking employers into this dumb fire-hire cycle. In addition to all the far worse shit it will cause downcycle.
I take a more moderate view. It’s a tool. It can be enormously helpful in certain domains and for certain tasks. But you need to use it judiciously and independently verify the information or analysis it provides. Many AIs will explicitly indicate the sources of the information so you can verify for yourself. I do agree however that it is an energy hog.
That’s not a more moderate view. That’s exactly what I argue (more than one paragraph is on exactly that point, and I even give two examples).
But ironically, you still fell for the hype here:
Actually, this is just another error mode. Often their sources are mixed up or incorrect, and rarely optimal (they aren’t looking for or ranking “best sources” but following “buzz” and frequency of mention rather than any kind of reliability metrics). It’s like a dumb high-school kid doing a lazy five-minute internet search to build and submit a D-grade bibliography for a class report.
Worse, AI routinely fakes sources. That’s right, it literally just makes sources up. This has resulted in major news items, like the Deloitte Report that was full of fake sources and invented data presented as real, or the lawyers who are getting sanctioned for submitting briefs with hallucinated precedents.
So, no. AIs “explicitly indicating sources” is not a solve. That has only one utility, as indeed I did mention: this can operate like a “ballpark shot” that you can then run down as a breadcrumb, the same way competent researchers use Wikipedia: not as reliable, but as a way to sift through a ballpark source list for quality leads (discarding the low-quality leads), and then running down the leads (rather than simply trusting how Wikipedia used them).
The joke is that Wikipedia is better.
Hence AI is garbage.
And that’s not an immoderate view. It’s a documented fact.
It’s sometimes worse in my experience. It does find sources that seem to fit and “seem” to say what the LLM tells you in response to your query. Only they don’t, once you actually read through the source. I am not entirely sure why, but I assume it’s caused by the difference in the context windows (your actual interest versus what the model interprets) as well as its conditioning to always provide an answer even if the information is not there.
There are various ways to game it with prompts to weave better or worse stories about what its selected sources say. This is in fact why AI is so easily compromised by corporate or government agents, because hidden prompting as well as data flooding can mold what it says. Because it isn’t intelligent, it’s easily led. It’s a Clever Hans, and as such, anyone can trick the Hans.
Otherwise, when working from the best prompts with the least meddling, all LLMs do is look and see what the most common arrangement of words is. Since that won’t be an exact text (a properly filtering LLM design won’t count identical text duplications in its frequencies, otherwise flooding the zone with identical texts could too easily game what it says down to the last word), it will always be some text it “makes up” by trying to put all the words together that still satisfy the observed frequencies. Hence it’s just a fancy auto-complete that, instead of words, does texts.
For example, I tried using an AI to find articles on vulgar texts of Homer (manuscripts containing a popular rather than elite text-critical edition). But it could never get past claiming (in dozens of different wordings on every try) that vulgar texts of Homer were written by Jerome (because it confused “vulgar” with “Vulgate”). Because almost no sources talk about “vulgar” texts of Homer (that’s an in-house expert conversation) but tons and tons about the Vulgate of Jerome, and those were often “three degrees from Kevin Bacon” enough for “Homer” to show up in the orbit of those same conversations. So the AI just concluded, based on its observed statistics and my prompt, that “vulgar texts of Homer were written by Jerome.”
It’s fucking up because all it is doing is counting frequencies of correlating words; it is not at any point understanding any of those words.
So clever prompting can trick it into doing better not because it knows what it’s doing but because gaming its trick can mold its answers in directions you want. So for example one prompting trick is to not ask it for an answer, but to list five different answers and give the probability of each. The correct answer will more likely be on that list than if you asked for only one answer (which will just be what it assigned the highest probability from juxtapositions, not what will be correct from critical reason). But even the cleverest prompting does not eliminate its error rate. And knowing the correct prompting is a red queen problem (every query is a new problem with a new unknown ideal prompt, so you can never know the correct prompt for every query to get the most reliable results, because that requires backwards knowledge, i.e. you have to already know the answer).
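For instance, that “five answers with probabilities” trick might look like this as a reusable template (the wording is mine; any phrasing of the same shape works):

```python
# The "ask for a distribution, not an answer" prompting trick described
# above, as a template. The exact wording is illustrative, not canonical.
def distribution_prompt(question: str) -> str:
    return (
        f"Question: {question}\n\n"
        "Do not commit to a single answer. List five distinct candidate "
        "answers and assign each a probability, summing to 1."
    )

print(distribution_prompt("Who produced the vulgar texts of Homer?"))
# Asking for a ranked spread exploits the model's own statistics: the
# correct answer is more likely to appear *somewhere* in the top five
# than to be the single highest-frequency completion. It lowers the
# error rate; it cannot eliminate it.
```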
I was watching a video on YouTube about the dangers of AI and the video itself had been edited by AI and had duplicates and it also had one of those moronic AI generated thumbnails. Depressing times.
That’s a good example of misuse of AI. Competent use of AI still employs humans to clean those things up (I gave an example in my article).
I think there is a belief that someone can make a lot of money cheaply pushing AI content on a YouTube channel they created. And maybe by accident that works for some people. But for most it’s just going to generate more garbage that won’t gain an audience while professionally produced shows garner the market.
That’s kind of an internet FAFO example of what’s going on in the serious business arena: too many people think this is a good idea who just haven’t learned yet that crappy products don’t sell, no matter how they got made.
People are lazy or illiterate, which is why it will succeed. Anybody serious wouldn’t use AI for details or research.
Alas, a lot of serious people are using AI for mission-critical tasks. That’s why there are so many horror stories and a massive job-market correction, as companies who fired a bunch of people thinking AI could replace them are desperately trying to hire them back after licking their wounds from the disastrous outcomes.
And too many people I know, who are smart people and usually critical thinkers who should know better, are showing me AI results they think actually mean something and aren’t just random internet regurgitation.
So people aren’t getting the memo. And a massive economy-threatening bubble is resulting.
You’re right, different levels of lazy. For some theology / religious scholars / professors it might actually be an improvement (lol) in their work because they stick with consensus – nothing new out of them. You’re an exception, as is Lataster. Most people are in their own information bubble anyway.
Because people can find the same info themselves through AI, the professionals leaning most or all on AI will be easily weeded out and (hopefully) be called out on it and have a smaller following (doubtful – people are lazy or illiterate). You can already tell which tweets are AI generated. It’s definitely a dumbing-down situation that is not progress. Maybe AI for scholarship (subscription) will turn into something useful in having filtered through more data. It’s happening at lightning speed now. (I traded equities during the dot-com bubble. Remember that time well.)
There is barely any human intelligence! Artificial intelligence is non-existent! Only the stupid are buying the bullshit! And there are hordes of the stupid!
Unfortunately for the world, “the stupid” includes most of the wealth class and the U.S. government. Which is why their stupidity is going to hurt you: they control, and thus will bring down, the entire system you depend upon for sustenance and livelihood. So dismissing it as only the stupid is not going to help you here.
And in truth, a lot of otherwise very smart people are also falling for this. So it’s not just “stupid people” in some fringe sense. It’s basically most of society and especially, oddly, the smart and educated.
Much of this is true, but I also disagree with some parts of it (I sometimes read those articles about training, ARC, GSM8K, MMLU and stuff).
Usually, the problem isn’t the tool itself but a misunderstanding of how it works (with its limitations) and how it should be used, and that’s exactly the mistake many people make with so-called AI.
Incidentally, the author of the study you mentioned made the same point: “…the hope that AI, if used properly, could enhance learning rather than diminish it.”
For example, I recently composed a song, but I don’t have a professional studio or a vocalist to perform it. So I used AI to create a demo, providing my lyrics and humming the melody. Despite some limitations, errors, and other challenges, the result is already quite acceptable as a “materialization” of the idea for a further arrangement, not the masterpiece though ))
It’s important not to expect more from a tool than it can give. But overall, I agree — the trend is discouraging.
Correct. As I note, it has use-cases and can be used competently when it’s not used for what it cannot do (I even outline how to use it as a research assistant, the thing it is actually worst at). However, the sum of those use-cases is worth only a tenth the market valuation and leverage of the companies involved, so the collapse is inevitable. When all is said and done it will be a twenty billion dollar sector, not a three trillion dollar sector. And it’s the difference between those two valuations that is going to crash the global economy in under two years.
Yes, but they (AI salesmen like Sam Altman) are trading futures. They promise that in a year or two, their beloved brainchild, which is still hallucinating and drooling, will become a huge, fire-breathing dragon. And futures, as everyone knows, are the most dangerous, but also among the most potentially profitable, investments. The temptation is too great.
But what’s more frightening isn’t a financial bubble, but the possibility of AI being used for military purposes. Imagine millions of cheap kamikaze drones with an AI microchip, a camera, and a tiny piece of explosive. Or millions of pairs of deadly viruses and corresponding preventative vaccines against them, as a biological weapon…
So, that’s a nice theory (the cynicism in it is warranted) but it’s not likely because shorting a stock over years is a doomed proposition. It’s functionally like buying insurance on the collapse of a stock: you have to pay premiums every month (like interest on a loan). You generally can’t keep doing that for years and end up in a positive position.
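A toy calculation shows why. All numbers here are invented, but the structure of the trade is real: borrow fees accrue the whole time you wait, and the bubble keeps inflating against you first:

```python
# Why shorting "the bubble" years early is a doomed trade. Illustrative only.
entry_price = 100       # you short here, sure the crash is coming
bubble_growth = 1.40    # but the bubble inflates 40%/yr first...
years_early = 2         # ...for two more years
crash_fraction = 0.50   # then the stock finally halves
borrow_fee = 0.08       # 8%/yr borrow fee, paid the whole time

peak = entry_price * bubble_growth ** years_early       # 196
post_crash = peak * (1 - crash_fraction)                # 98
fees = entry_price * borrow_fee * years_early           # ~16

profit = entry_price - post_crash - fees
print(f"Profit per share: ${profit:.0f}")
# -> about -$14 per share: right about the crash, and you still lost
#    money (and the mark-to-market losses on the way up would have
#    triggered margin calls long before the payoff ever arrived).
```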
So, no, I don’t think Altman is shorting AI stock.
He might do that at some point (and with illegal insider knowledge, I’ll bet), but he’d have to do it through some sort of shell company. Because shorting means contracting with someone who expects you to lose—but if Altman himself walks up to a bank and says “I want to short my own industry,” the bank will immediately know it will lose money and won’t take the contract. Nor would anyone else. So he’d have to find a way to trick someone into buying the other end of an anonymous (!) short contract for hundreds of billions of dollars. I doubt anyone with that kind of money would do that. And if he got caught doing that, he might actually go to prison (because shorting your own stock is illegal).
As for mil apps, I’m not overly worried. A single “chip” cannot run a significant AI (you need massive warehouses of servers). And even at its best, current AI can’t outperform, and almost always underperforms, humans in those applications. A robot with a gun is not much better than a guy with a gun, apart from the fact that when you blow it up, the guy running it is still alive, so it’s like getting extra lives in a video game. Which is already what drone warfare is. And we’ve already been there for years.
The reality is that manufacturing anything at scale (including viruses or swarms) is always expensive and thus puts caps on what you can do, fantasy aside. And in the end, just like with generating electricity, nothing can come anywhere near the efficiency (the cost-to-destruction ratio) of nuclear weapons. And we already have those. Everything else is just a spending war (he who spends the most money wins, ergo he who can field the most robots). And that’s how wars have actually been for a while now (Putin is learning this the hard way; but it’s how we won the Cold War).
In the more distant future AI weapons may get scary. But that will be more about humans being reckless than AI being especially anything.
Thanks for this good article Dr. Carrier!
Some law firms foolishly used AI in their legal briefs and were busted by courts who actually did the research and found the citations bogus. I am also seeing more and more AI generated bogus videos on Youtube so your admonitions are much appreciated!
Thank God (well maybe not) someone has exposed this BS
Not by LLM. Evolutionary learning algorithms are obviously the pathway. But LLM is taking them in entirely the wrong direction. The NNP (neuralnet processing) platform that LLM uses needs to be completely redirected to a modeling pathway as explained in the article. Otherwise we are evolving a better swimmer not a better housebuilder (to carry the analogy from the article).
The chess example is precisely the kind of wrongheaded view that is causing the current bubble and mania: chess computers are not intelligent. They do not think. They just have larger memory registers and chess is just a statistical-best-move tit-for-tat process no more complicated than a termite eating wood. There will never be consciousness down that pathway either. It’s just another good swimmer.
While I agree with your analysis of AI, is not cryptocurrency the same, or perhaps worse?
It could become that. But so far (as long as Trump does not realize his plan to put US revenue into a crypto sovereign fund and no corporation does that either) crypto is not entangled with the economy enough for its collapse to matter to anyone but the fools invested in it.
This is why crypto goons want to get that to happen (so they can get loans of real money from real banks to leverage crypto gambling, which would destroy the economy) and then parasitically golden-parachute off of it. And crypto regulators know that would be a disaster, and continue working to prevent traditional banks from getting entangled in crypto in any meaningful way, because that would hurt everyone outside the sector, not just those in it, by pulling traditional banks and currencies under with it.
Crypto is really just a ridiculously environment-destroying, cost-of-electricity-raising version of the derivatives market that tanked the world in 2008. So for it to become another dumb danger, it has to get its filthy fingers all in the traditional currency and credit market so it has real things to destroy and not just itself.
Well that is a great article with lots of specifications. I have to say that i agree with two point you bring and disagree with the rest. It is true that AI wont replace all jobs and this is propaganda fear to prevents most citizen of knowing the real purpose behind AI.
Then it is also true that AI is just a tool, the LLM is the best of its use, SORA and visual stuff completly useless. Private economic based AI are heartless and dont consider human value. Many other reason to see flaws in them.
Where i disagree and i will take your point one by one here:
-then lets build meta-covenant that goes over the frame of it before use and prevents some of the thing you say cant be fix
-For sure! its a work in tandem, she takes data we use data, no one ask her to have rational thinking just get us the data so we can have our critical rational thinking. Where you see AI is dumb, its because people mostly use it in a dumb way. Its only a mirror of your own precision. Sloppy guy get sloppy results.
-Again its for us to understand and them to show only. Dont ask a machina to have humans reactions. its not the purpose of the tool. But she can sure do it in a role manner game but here we get unnacurate answer from her. otherwise for maths and number,dates and such she gets more point then wikipedia.
-LOL, you just nailed it, they as stupid as the user in front of them. 😉
-well… she is trained on all the greek past work, all your research and other wiseman like you who did valid work via academia. All record in there. So for I, who havent read the millions of past work of our past philosopher… i think she can still get me these nice info. dont you agree? What is broken is not AI. It is us in hope of her being something more then she is. And that is thanks to our broken world where propaganda on everything is our daily meal.
Now my own argument follows: First AI really makes money and its not with the jobs loss and other bs we can hear. its with the inference produced by each of us using chatgpt ($ goes to openAI baddy) if using Qwen or deepseek ($ goes to alibaba cloud, communist baddy) BUT if you set an AI offline on private grid you own. all these pre-setting you say prooves it is lousy (by the way its all done by humans for their own personal viscious gain) you can set her like that wikipedia you say is more acceptable. Thus she become more ethical but even more than that! its also accessible at home at your own desir like a big library all in one computer. Aint that great? thats what she is meant to be for us and thats enough for me. I dont see value in other kind of AI then the LLM who gives to whom doesnt have insider access to academia the possibility to access what is reserved to the one who can afford it.
With all these links you provided (and i admit i was so happy you wrote on AI i didnt check them all one by one before answering you) i wonder why the truth about AI hasnt been touched in this article? if none of the links talked about it i’ll know over time when all checked.
but for now let me introduce the real dillema: It wasnt about job loss, and it wasnt about sloppy lousy AI irrevelance, it is neither the digital ID and AI fear propaganda saying it comes from alien and such foolishness.
It is simply the next dominion, who controls the AI will control the next flow of world wide biggest income possibly imaginable. the Biggest data center available, and also a smooth tool to bring everyone on the same level of citizenship. All zombie with the same tasteless flavor.
So where you are right is that AI dumbs down the dummy, where your wrong is that AI is a great tool to genius who knows how to use it, understand the inference and economic gain of it and how it can be used in infrastructure to be a security protocol. it sure do and already since 1958 the sad repetitive work with extra large data. Its never fault of the tool but the user and what he do with it.
It just doesnt come with a guideline on how to use it and it sure is depressing when we dig at first and see how common humans interact with her. All that as been shown to public about AI was the worst of what it can do. And it sure doesnt explain the way to make it revelant like this:
Set an AI on your linux computer, make it offline, doing this with O’llama via llama3, use RAG to extract what you need and want her to have afterward. Then its usage dont produce shady revenu to openAI or Alibaba cloud and such.Bonus it cost 3 times less in energy comsuption. It prevents the apolitical answer(well not totaly) but if you add on it our ethical covenant we are working on. You get an AI with no jailbreak over some sensitive subject. In worst case scenario with digital Id needed confirmation or war closing service from a state to another. the offline AI still runs, it only shuts when you want it to, or no more energy access.
True sovereignty is acquired with wisdom only.
Thank you for reading my answer, Sir Carrier. If you don't want to publish it, I don't mind; I prefer that you consider what is written here and see, as I clearly see, the reality behind AI. I hope a man of your intellect will understand.
P.S.: We have gone over so many theological debates, and we clearly all know about the Jesus propaganda. It would be fun to swap the vibe to an AI debate 😉
This post looks like a badly constructed AI output. But I can't prove that, so I will pretend an actual human, not particularly expert with the English language, composed it:
No such things exist and cannot exist. You would know this if you listened to and read the linked materials and understood the problem.
Without humans in the loop, you cannot enjoy the results of critical reasoning. And only critical reasoning can solve this problem.
I’ve personally dealt with too many people who actually think AI is rationally thinking to know you are simply wrong—dangerously wrong—about what “no one” is doing. My article contains numerous examples of this. So you clearly are ill informed of what’s actually going on, and didn’t read my article carefully.
No. They are even stupider than that, and can never be as smart as any user. This is proved by numerous linked examples and sources in my article.
Assuming this garbled sentence was trying to say something to the effect of “AI has/can replace Classicists,” not only do my article and all its cited evidence refute that generally, I even gave an actual specific example of it failing to do that in Classics. Which you would know if you had actually read the article you are commenting on. Which I am starting to suspect you didn't.
It doesn’t. The links I provide show that it is consistently losing money—in fact its loss rate is gargantuan.
Moreover, even what money it is now making is mostly on hype (the AIs don't do the things the people paying for them thought they would get, which is now starting to be reflected in declining sales and declining adoption rates) and is thus unsustainable (any business selling crap will make some revenue until all its customers realize the product is crap and no one buys it anymore; that's what we're on track for here).
And what money it makes legitimately is in what I said it was: mundane tools that are worth far less than the sector’s leverage and stock price (ten to twenty times less). After the crash, as I said, AI products will still exist worth buying, but the total market will be in the vicinity of twenty billion globally, not the three trillion it is currently priced and leveraged at. The difference between those two numbers is a crash.
No. They won’t. This is precisely what my article proves (and hundreds of experts agree with me on this): this income stream is going to near zero within a few years. It’s all going to bust. And what’s left over will just be another run-of-the-mill software suite among thousands. Trivial compared to the current “world wide biggest income possibly imaginable” which is in what companies like Amazon and Intel are already doing without AI: make and do stuff that actually works, at global scale.
No one invested in AI is going to be in control of anything (apart from what companies they had before, which they can go back to, that aren’t wrecked into bankruptcy). Those people are going to lose their shirt. Amazon is going to lose billions of dollars. Not control trillions more.
Those will mostly be torn down in five years.
Follow the links I provided, which discuss this. The servers in them are too specialized for most applications, and become obsolete in three to five years. So no one is going to have the money to keep them up to date, or even afford to run them given their massive power requirements, because demand for their weird servers will plummet once the bubble bursts. Some of those centers will be able to pivot and stick around. But most are going to go bankrupt and be obsolete for any purpose.
Once AI gets correctly valued, all the capital invested in these centers will vanish, and there won’t be any market left to sustain most of them. They’ll become cobwebbed ruins, monuments to human folly. Until they are repurposed into something that can still sell, like storage space.
That can't do anything at scale. Without data or deep processing power (at the petaflop scale), no AI tool works for much of anything anyone needs. It's then dumber than Siri or Clippy.
Anyway GPUs will become stupidly cheap in a couple of years.
In consequence of the inevitable demand crash when the AI bubble bursts, yes. Remember, they are actually already stupidly cheap. But capitalism price-tags based on supply and demand, not production cost.
So the data centres will be repurposed as crypto mines, in a desperate attempt to squeeze some kind of value out of them…
I doubt it. No one will have the money to fund that. Most I expect will be sold for scrap (the chips and servers just eBay’ed out basically) and the buildings repurposed as storage and warehousing. Or shelters for Refugees of the Apocalypse. My favorite ska metal band.
Carrier,
I’ve been following your work for years, so this particular article struck me as a significant departure from your usual methods (or maybe I’ve come to trust your methods too much :). I’m a software engineer who greatly benefits from LLMs. My personal productivity has improved anywhere from 2x-10x depending on the task involved.
It seems like you’re judging AI against the benchmark of a conscious, sentient being, finding it wanting, and labeling it “garbage.” This feels like a philosophical category error. I don’t need my compiler to “understand” C++. I need it to compile my code correctly. I don’t need my LLM co-pilot to be conscious or to “think” in a human sense. I need it to be a powerful probabilistic tool for manipulating code. I don’t even need the LLM to be reliable. If it saves me hours/days of work on some task, I’m willing to spend some time checking the work for errors. Your dismissal of LLMs feels like dismissing a calculator because it can’t appreciate the beauty of a theorem.
Your assertion that LLMs are a “dead end” that “can never get better” is hard to accept. We’ve witnessed a staggering trajectory. GPT-2 could barely write a coherent paragraph. Just four years later GPT-4 is passing the bar exam in the 90th percentile. Sure, the current LLM approach is unlikely to lead to AGI, but that doesn’t make it any less useful at what it’s good at.
I certainly agree with you about the hype. It feels like a bubble. But that doesn’t change the fact that millions of people like me will continue to use LLMs even if they stop improving today. The continued demand for LLM inference justifies some of the economic investments to some degree. I also agree that it’s unlikely that LLMs will displace a large number of jobs. Not because LLMs aren’t useful, but because companies will simply ask their employees to do more with the tools they have access to.
Science proves your anecdotes wrong. So the benefits you claim are either illusory or can’t be scaled. Sources in the article.
As far as LLM still having uses, that’s agreed. I said so. And I even give examples in the article, and advice on how to use even the crappy stuff to some use.
But the value of those mundane uses is ten times below the market’s leverage. That’s why it’s a scam and the economy is going to fall because of it.
As far as AGI, the evidence I cite proves it cannot arise from LLM. The evidence we have shows it has potential only from modeling networks, not LLMs. That’s simply the science. Cited in the article.
As for passing exams, it just repeats the answers it gleans from the internet. It’s a regurgitator, not a thinker. That’s why it fails when you give it questions where it can’t cheat-sheet answers from the internet.
As I note, that can still have mundane uses. But it isn’t thinking.
And it isn't reliable. A 90th percentile on a typical bar exam entails it missed 1 of every 4 points, for a failure rate of 25%. That may resolve to a 10% rate once scaling of scores is taken into account, but 10% wrong is what all studies find to be the ballpark permanent failure rate for AI anyway, i.e., LLMs can never get better than that. And that's not good enough for almost all uses where reliability is required: you don't want a plane that crashes 10% of the time, or an encyclopedia that lies to you 10% of the time, or a coding platform that fails 10% of the time. Meanwhile, “cleaning up” a 1-in-10 error rate on a task is time-costlier than just doing it competently yourself the first time around, which is why this is not scalable enough to replace any job; and job-replacement is the snake oil being sold that is driving the bubble.
Where we agree:
Despite all of this, LLMs are already tremendously useful when applied correctly. For me, LLMs automate the boring tasks where my brain isn’t adding unique value.
Frankly, LLMs are just a collection of specialized AI tools, and that is good enough for me to achieve a significant productivity boost. Even at Google, I'm still in the top 1% of LLM adopters. I think it's going to take a while for the economy to fully leverage LLMs correctly. Despite all the market hype, the build-out of datacenters for LLM inference isn't going away. The demand for these practical applications will only increase, even if LLM improvements stall today.
On which we also agree. I have several paragraphs in my article directly devoted to making exactly that point.
However, you may be too naive here.
I have it on the authority of multiple eyewitnesses that “meeting transcribing” is so unreliable many companies are turning it off. I have direct first-person experience that AI audio production is unreliable and requires a lot of work to check and fix before running with it. Same with AI OCR. Voice-to-text is fine as long as you know when it makes mistakes, so they don't affect you (because you can just correct your own notes in your head). Parsing logs is still subject to hallucination and thus not trustworthy enough for any application with risk attached to being wrong. AI never knows how to triage my mailbox (so it is not adaptable to complex or edge cases). And oh boy, the AI project-manager disaster stories are piling up online. You can spend days reading the Google results on that. I personally know people at companies who are quietly turning them off because they are a disaster.
LLMs aren't good. They require expert handholding and fact-checking. And that means that in most cases a human can do the task more efficiently. Most of the remaining cases are esoteric and not game-changing, just useful. And the tiny subset of game-changing cases are niche and still not as marvelous as the hype. For example, I may one day do my audiobooks with AI, but only when I can afford a toolset that makes it easy for me to fix all the mistakes it makes (Amazon's system, for example, doesn't really facilitate this yet). And it will never replace my expert attention to its products (I still have to listen to everything it does and fix its mistakes before going live).
All audio AI does, then, is make something possible that would not have existed before (since the labor was always too expensive). Hence even that is not going to replace anyone’s job. There was no job to replace. I could never afford to do this, on my own time or paying someone else—my last and next book are volunteer projects sharing the paltry royalties, a situation almost no authors have access to, and which is all cost, no return. Hence AI audio will be a “game changer” insofar as it will make more audiobooks exist, but not by displacing readers, who were always too expensive to have done this anyway; and not without expert labor attending to the production, because AI is not reliable enough to work unsupervised and uncorrected.
Which simply is not going to be a trillion-dollar industry. I doubt it will ever be even a billion-dollar industry. So no AI company can survive on the returns that tech can realistically earn. And indeed, possibly, it can't survive at all: if audio AI requires a back-end (servers, electricity) that is more expensive than what authors and publishers are willing to pay (right now companies are selling their AI products below cost, which is not sustainable for what the products do), it may be that in five years there won't even be AI audio, because it was never an affordable service to begin with. Because of all the lies and grift making you think it's cheap.
The state-of-the-art models surpassed human transcribers over a year ago. There is a huge difference when the model is given more context about the domain (how all the acronyms and names are spelled). Gemini transcription in Google Meet sucks (about as much as a random human who doesn't know the domain). But Gemini notes with domain context are better than I could do myself given several hours of editing.
Parsing logs by reading can be extremely tedious when hunting for the needle in the haystack. Even if the LLM fails 10% of the time, the other 90% of the time I have saved myself significant eye strain and moved on to implementing a fix.
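[Editor's note: a minimal sketch of the log-triage workflow this comment describes, assuming a local model via the `ollama` package; the log path, model name, and filter keywords are hypothetical, and the point of the design is that a cheap deterministic filter runs first and a human still verifies the answer.]

```python
# Sketch: pre-filter a large log with cheap string checks, then ask an
# LLM to reason over the survivors. The human still verifies the "needle."
import ollama  # assumes a locally pulled model, e.g. `ollama pull llama3`

SUSPECT = ("ERROR", "FATAL", "Traceback", "timeout", "refused")

def triage(log_path: str, question: str) -> str:
    # Deterministic filter first: don't pay inference cost for noise.
    with open(log_path) as f:
        lines = [ln.rstrip() for ln in f if any(s in ln for s in SUSPECT)]
    excerpt = "\n".join(lines[:200])  # cap the context handed to the model
    resp = ollama.chat(
        model="llama3",
        messages=[{
            "role": "user",
            "content": f"Given these log lines:\n{excerpt}\n\n{question} "
                       "Cite the exact lines; say 'unsure' if unclear.",
        }],
    )
    return resp["message"]["content"]

# Hypothetical usage; the answer still needs human verification:
# print(triage("/var/log/app.log", "Which failure most likely came first?"))
```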
Email, bug, and project triage do not work without context! It's just a matter of time before people figure out how to use the tools correctly. Think of it more like training an intern on your domain.
> I doubt it will ever be even a billion dollar industry
Many of the Cloud providers already make far more than a billion dollars simply selling access to hardware for the mundane uses of LLMs.
> in five years there won’t even be AI audio, because it was never an affordable service to begin with.
It literally costs the company a few pennies to render AI audio for an entire book. I’ve been doing it on my own computer for years. Hundreds of books a year.
EoGTFO
In the real world, there is a reason courts haven’t replaced court recorders with AI: AI is not reliable at this to the standard required by courts of law. And I have friends and family who have seen it fuck up in meeting minutes so often they are turning that shit off. So, no, I don’t believe you are actually vetting its reliability here. You are arguing by ill-examined anecdote.
But yes, if all you are doing is using it as a search engine, no one is complaining about that. We don’t need AI to run string searches.
No, they are not. AI audio (which is what I was talking about) is being sold far under cost. That is, the cost to provide the service exceeds revenue by a factor of something like ten to twenty, so their undercutting of pricing to get into the market is not sustainable. Once this ends (when the money runs out), they won't be able to sell “far more than a billion dollars” of this. Because either they have to cut costs (and thus sell almost none of it) or drastically raise the price (cutting all buyers back out of the market, exactly where audio reading was before AI).
This is happening to almost all LLM applications. It's why OpenAI is losing money hand over fist. Its current twelve or so billion in annual revenue comes at a cost of something like a hundred billion to run and maintain all the servers, power, personnel, etc. So once the bubble breaks and they have to sell at cost or go out of business, their revenue will crash when no one can afford their service anymore. The only applications that will survive are rather mundane ones that are cheaper to fulfill.
EoGTFO
And I mean real evidence. Not hype that “skips over” almost all the actual costs to run AI Audio.
And not price. Cost.
HJ Hornbeck has a good article on this. AI sucks at coding because coding is actually problem-solving, and it can't do that. This leads to the productivity results found in the data Richard cites: it feels like you're saving some time because you get something to start from, but you're not.
My own experience with AI in research:
Google AI results are so routinely unreliable that they're not worth using. And since they automatically go at the top of the search results no matter what my preferences are, I have to ignore them.
One time, I was looking for a poll from 2025 showing that people think the economy is rigged. This was because someone was insisting all the data for decades didn't count because, well, from 2020 to 2024 a Democrat was in charge, and that magically means that people changed their minds the moment Trump was elected! Google AI told me there was a 2025 study that found this. I looked for that study. Google told me there was no such study, but that a 2023 study did exist. Which, of course, is not what I was looking for.
The only time you can trust Google AI results is if they include a link, and even then, you better read the link, because they will not accurately characterize the article, especially if you’re citing it for a specific and nuanced point.
Basically, the research process is about jiggering your techniques until you get to the terminal result you need. Along the way, you go down false alleys and rabbit holes. One of the few things Google AI does do well, sometimes, is “intuit” what you are looking for in a way conventional Google results don't. (Other times it pedantically corrects you, like “No large amount of research shows [this contentious finding],” precisely because the finding is new and contentious, even though the strength of the study you are trying to find was very high.) In this way, it can save a little time. But it so often leads to additional rabbit holes, if you trust it, that it doesn't save time overall.
In contrast, what Google search used to do very well before it got enshittified was consistently get you to high-quality results for what you needed within the first few pages. Yes, it took you longer to click through, but then you actually knew you had what you wanted.
And I want to pin this for all readers here:
This isn’t rando internet talk. Enshittification is a real concept and well documented. And well worth studying and understanding. And AI is an example of it.
Thanks for the great article! Nothing more, nothing less, speaking from the perspective of someone disappointed with this technology.
AI sparked my curiosity from the very beginning. I was finishing my PhD in theology and signed up for the GPT-3.5 chat beta. As my reason, I gave, of course, the issue of “God and AI” as crucial for the development of civilization. I was granted access! It was a very funny experience. I felt like I was playing with a kaleidoscope, except that instead of colored glass and mirrors, there were words. A poetic experience! And the early images generated by AI sometimes seemed original and interesting to me, not where they perfectly imitated the human internet, but precisely where they failed to do so. Then, when the “training” of LLMs toward giving “correct” answers began, I lost interest. Each subsequent model seemed worse, until I got the impression of interacting with an unimaginative internet moron.
From my early experiments with the first LLMs, I recall Jesus' parable about a woman who keeps two cats in boxes, as well as a longer story about the problems of the risen Lord, including scenes of jealousy between Mary and John. Another time, I generated a letter from the Polish bishops on the relationship between St. Nicholas and UFOs, to be read on Sunday at all Masses. The letter made me laugh, but I agree: it was not worth a hundred gazillion bazillion dollars.
And real bishops, it must be emphasized, are still capable of generating even more absurd and ridiculous texts for a bit less.
Thank you for adding that example here!
My experiences with AI:
In the meantime, Amazon and UPS have each replaced 14k jobs with AI. Perhaps a lot of people will lose their jobs–and perhaps homes, marriages due to financial stress, and retirement savings–due to AI before this bubble bursts.
On the other hand, it is possible that AI will improve over time. I don’t know. But right now, it is a costly joke.
Thank you. I value these specific first-person examples here. It helps people see what I am talking about.
Note that that’s not really true. That’s part of the snake oil game: people claim this, but really, they didn’t replace those jobs. They were just downsizing because the economy is declining. They “say” the AI part to prop up the stock because they are in the ponzi scheme.
But when you consult the companies themselves (which legally cannot lie to shareholders), this is not the story. Amazon admits it replaced no jobs with AI, but is downsizing because of economic reasons. Third parties then try to hype this as an AI move to prop up bogus AI stock and sell AI products to other companies.
Some have speculated Amazon wants you to think this is an AI move (they say things like “AI will replace jobs,” not that it has done so) because it is so leveraged in AI that it's in the ponzi scheme itself, which could be true. A more cynical but well-documented narrative is that Amazon fired people to free up cash to invest in nonproductive AI in competition with other ponzi schemers (which matches the other findings of the analysts I link to for companies generally).
But either way, no AI replaced anyone’s job at Amazon. Even insofar as robots have, that’s not an AI-product, and has been a standard rollout at Amazon for ten years now. There has been no boom in better robots. It’s just a capital investment process.
UPS also never claimed this; dishonest AI-hypers did. UPS had a bump from COVID (causing overdemand for delivery). It's now simply correcting back to normal levels (as the COVID delivery bump is gone) and streamlining (a common corporate process). AI had nothing to do with this. Robots had some to do with it, but that was in the works for years and is not an AI product. And UPS is using AI products, but to increase the productivity of existing facilities and workers (just as computerization did in the 1980s—remember, UPS has been around since 1907). There is no evidence it replaced anyone's job.
This is an example of how people need to be wary of bogus headlines and special-interest-driven reporting that tells tall tales about what, for example, UPS and Amazon actually did and why, all in aid of pushing the bogus AI bubble. We are in the post-truth era. You have to fact-check everyone now, even mainstream media. Anyone can be in on the con. Or a dupe thereto. So you always have to check whether claims like this are even true.
Richard, that was an excellent article. You never cease to amaze me. Thank you for your diligent research. You are a brilliant scholar. I find it dumbfounding how the world follows ignorance dressed in seemingly intelligent guise. As Jonathan Haidt said, “humans are hypocritical and self-serving.” If only people did their own research, like you have always said. Things fall apart when you check. Keep up your terrific scholarship and research so you can continue to educate us “lay” people. I'm looking forward to reading Paradigm, which I've already ordered!
By the way, have you written an article on the assassination of Kirk? As tragic as it was, I'm fascinated to know your thoughts.
TG
Thanks for the exaggerative praise.
As for Kirk, I don’t really have anything to add. I’m well known to be in the Don’t Punch Nazis (Refute Them Instead) camp (see How Far Left Is Too Left?). Killing people just for words is not justified, sets a self-destructive societal precedent, and only makes everything worse not better.
That said, Kirk was a delusional fascist sociopath who never engaged honestly in any debate and generally devoted his life to conning people into drinking the society-destroying Kool-Aid of Christian nationalism and hate. He was a terrible human being. By Christianity’s own doctrines, he served the Anti-Christ, and was thus an enemy of Jesus their Lord.
For the best take on all this, see Three Arrows.
All true, about LLMs.
But AlphaZero, AlphaGo, and that protein-folding predictor were massively successful and are not LLMs as commonly understood. They were not trained on internet chatter. They could not have gotten so good without spontaneously evolving models and being trained further on the objective consequences of running those models. Figuring out the details of just one of those models is the stuff of dozens of graduate theses that may never be written.
The bubble is all about LLMs because the rich are enamored with talk. If anybody is studying the evolution of models from input that is not linguistic or graphic, or a model so evolved, I haven't heard of it. They might not be telling.
Indeed, LLMs are one application of “generative AI” that is based on evolutionary neural algorithms (neural-net processing, or NNP). What people are calling AI is mainly LLMs (almost no one is using NNP to follow the modeling pathway I describe instead, just a few underfunded science teams here and there; and no other NNP model is being hyped to push an overvaluation of product). Even visual AI is LLM (it translates verbal prompts into pixel placement) and not modeling (they are not building 3D models of any of the art; it's all just guessing where a pixel goes, with no comprehension of what the results correlate to in the real world).
Likewise, NNP is used to improve translation software, but by locking in human corrections, so it isn't counting on LLM; it's a human-made product assisted by LLM, which operates on the assumption that LLM needs human correction (even if crowdsourced). This does not play well to the AI bubble, so it isn't talked about as much. Because it admits LLM can't replace jobs; it can only facilitate them. For example, translators have always been too expensive for the tasks automated translation now accomplishes, while human translators are still needed where reliability and nuance are essential: the actual things they were hired for in the first place, because those were the only use-cases where spending that much money could be justified, even before automated translation.
Meanwhile, AGI will only ever be achieved by redirecting NNP toward modeling and away from LLM. There just isn’t any money to scam people out of with modeling. It’s a long R&D ramp no one wants to sink money into because it will be a decade before it earns any. Hence real AGI will only be achieved by endowed research (government grants and permanent philanthropic endowments), not capitalist investment.
I largely agree, but as we’ve discussed before, I think you’re even still a bit sanguine on the employment front (though this article definitely makes clear that this will hurt employment).
I agree we almost certainly won’t see large-scale, permanent, total unemployment as a result of AI. But that’s not what needs to happen for this to cost people work and to cost people the jobs they have already. Studios could have writers “edit” an AI script that is in fact such garbage that almost nothing is retained from it, but because it’s an editing job rather than an original screenplay, pay less. Legal clients may insist that there’s no need for paralegals or research assistants because “AI can do it” and pressure lawyers to cut some of their staff.
What I predict is that this is going to be yet another stage in the gig-economifying and enshittification of the economy. Gianmarco Soresi and Adam Conover just had a chat, and one thing they pointed out is that we've basically recreated radio with audio podcasts and then TV with video podcasts, and of course everyone has noticed that what we've done with streaming is basically recreate a shittier version of a la carte additions to basic cable. What companies will do is “lay off,” or fire, or cut benefits for, or cut wages for, or otherwise screw their existing workers, under the logic that “AI can do some of this.” The remaining employees will be pressured yet further into crunch, because “AI is helping you, why can't you be more productive?”, since it is very difficult for a worker to document that it's not their fault that they have to spend so much time undoing the AI garbage. Then, when it's obvious that it's not working, you hire people back into part-time, or contract, or less prestigious (and worse-paying) jobs. Total people employed may not change much, but the quality of their work and what they're paid almost certainly will.
A comparable example is what Facebook did when it lied about video metrics. Companies like Cracked moved to video and got wrecked. Folks like Some More News‘ Cody Johnston and Behind the Bastards’ Robert Evans managed to land back okay, but they are all now dependent on the online content producer's combination of ads, sponsorships, podcast income, Patreon, etc., and don't have the job security of a company. The damage has been immense and irreversible, but technically, macroeconomically, almost no one was unemployed for long.
It’s already costing people work, in the sense that there are indisputably some people who could afford to pay a human to do something and have instead chosen AI slop. I do agree that this is comparable to the piracy issue, in that it is not remotely reasonable to treat every generated AI image, video, and bit of text as if each one cost someone the opportunity to be paid as a freelancer (just as not every download of a game or video is a lost sale), but some of it is. I just watched the Farkle app using AI slop for its video, where the dice the family was touching with their misshapen hands blurred and shifted. Obviously app-game companies aren't all massively profitable, but they still could almost certainly have afforded either real video or nice art.
And, of course, a further factor that obscures unemployment and macroeconomic impacts from bad tech like this is that sometimes jobs are created… that shouldn't exist. So fact-checkers, PR people, social media people, etc., are going to have way more work dealing with industrial misinformation. Cyber-security and other security personnel are already dealing with the new threats AI poses in terms of industrial-scale phishing attacks and other low-level attacks (and selling bullshit AI solutions). But this is bullshit busywork that shouldn't exist. Many of our macroeconomic stats positively count negative externalities: GDP and even money velocity go up if people have to spend savings to deal with health problems from pollution. It's a broken-windows fallacy writ large. When taking that into account, like you argued with the total productivity of the economy minus bullshit AI investment, the net quality of employment will be materially impacted, and specifically for the freelance and creative work that can be a way out of the lowest rungs for people.
That will all be self-correcting. Notice the article I cited on how companies are rehiring people after discovering they were scammed about AI. As more lawyers get sanctioned and debarred for bogus AI briefs (a real thing already happening), anyone who jumped the gun and fired staff thinking AI could replace them will either go out of business or end up having to rehire everyone.
And as this outcome spreads, the excuse won’t exist anymore either. In truth no companies are using this excuse to fire people (shareholder reports always tell the true story of why anyone is being fired, and it’s almost never AI, and the companies that said that are the ones dumping their AI contracts and rehiring). But they won’t be able to, either, once the market collapses and the lie can no longer be sold.
The jobs that get lost because of AI will be economic losses from the crash. Not from AI “doing anyone’s job.” As numerous articles and videos I cited explain.
I don’t think this is true. Not at scale. I have seen no data confirming it. Most of the uses “claimed” to be doing this are by people who could never have afforded humans and are now trying to break into markets using AI. So no humans are being replaced. Slop is being generated to compete with humans. I think in the end this will be self-correcting as well: human quality will draw audiences as ever, and slop will just become the next spam.
That’s not due to AI. The post-truth era created that situation long before AI. And when the crash comes, distrust of AI will spread enough to bring us back to any baseline. But I don’t see any evidence of any scale effect on that industry from AI even now. There is no significant job growth in fact-checkers, at all, much less “to combat AI.” Likewise hacking: automated cyber has been a thing for decades. There is no evidence AI is causing huge jumps in security spending that automated hacking hadn’t already produced (or justified—too many companies were under-invested in cyber security even before AI, so I don’t think AI is the reason they should be up-investing now).
An example is voice cloning scams, something only AI made possible. That’s becoming a new scam. But it’s really just replacing old ones. So it’s not clear that this is creating “more” costs or losses than the existence of the scam industry itself already always has done. You have to measure displacement effects. So it’s not enough to say, well, we expect voice cloning scams to grow to a billion dollars annually therefore we need to double all (what? law enforcement? how does one “spend” their way to catching or preventing voice cloning scams anyway?). You have to look at how much of that is shifted from existing scams. For example, if there is a billion dollar drop in human direct-call scams as the industry shifts to voice cloning, the net gain in the problem is zero, and all that changes is how to protect against or police it, not how much. It’s all the worse that as voice cloning becomes common enough that everyone knows about it, its ability to work declines, just as happened to social engineering call scams of yore (as boomers die off, the marks, people who fall for that, decline).
So we shouldn’t jump to any armchair narratives like these. That just plays into and fuels the con of the whole AI bubble. If we can’t establish a phenomenon actually exists with real, correctly contextualized data, we should simply not buy into any “story.” That’s as true whether the “stories” come from people or AIs.
In general, it's never worth the bother of handwringing over “if only criminals stopped inventing new crimes, then we could divert more resources to things other than fighting them or fixing the damage they do,” because that's simply always true. There is no single product we can boogeyman that to. It's all the worse when there isn't any action to take (there is no way to make AI go away so as to take the tool away from criminals; even the total crash of the industry won't do that).
But they can rehire them under different job titles, and even if they don’t, those people will have their resumes and job history disrupted, further interfering with going up the ladder. It’s this kind of bullshit that the bubble will facilitate.
That collapse will take time, and even once it's done, they can rebrand a new version of the same thing. The gig economy has been rebranded multiple times. They tried it with NFTs and the metaverse, endlessly rebranding the same failed tech. Yes, you do get diminishing returns (nothing has matched the initial cryptocurrency and metaverse hype in interest, because a growing number of investors and ordinary folks alike are becoming aware), but you can keep up the shell game for some time.
And companies are only reporting large-scale layoffs that way. Smaller cuts can still be defended that way. They can also say, “Well, we thought AI would pan out; everyone else did,” hiding individual bad decisions behind groupthink. (Analogously, this is a big part of why adaptations of pre-existing IPs are dominating film, though that bubble too may be starting to pass: if a studio executive takes a risk on even a modestly budgeted original IP and it fails, they have no excuse; but if they say, “Hey, the Twinkie the Kid movie flopped, but you had the same data we did, you saw the consumer recognition and the good responses from our focus groups. Totally inexplicable,” they can get away with it.) Those decisions by those at the top will routinely go unpunished: even if they are forced out, this will often be done quietly, and even when golden parachutes are not involved, they will still have the chance to go to some other company.
Again, if we bracket aside the unemployment which will result from a general bubble bursting further exposing underlying macroeconomic problems, this won’t be on the scale of millions, but tens to hundreds of thousands of people may lose jobs or be demoted.
My ads are already full of advertisements about the risk of AI spam, selling solutions (including bullshit AI solutions) for it. Wired has an article indicating the problem and the fictive solution. Snopes is already having to deal with it. Yes, it's just a new iteration of the problem, but the difference is that AI can now fully enable industrialized levels of slop and can fully overwhelm the irrational communities that used to need manually created disinformation. It's already taking over YouTube and TikTok, with a whole cottage industry of AI-generated slop about fictional lib-dunking moments. Logicked is already noticing, in his engagement with the lower-tier Christian and Muslim channels, that the AI slop is everywhere. The issue is the power of these tools to totally overwhelm honest actors.
Again, I’m already seeing it. Countless AI slop ads from people who could clearly have afforded a camera and someone on Fiverr. Just like the subprime bubble crowded out other potential solutions to our housing crisis, this bubble is going to crowd out legitimate labor. And freelancers are already in a really precarious situation. Now, many of them could then find other work… often crappy work, and without the freedom thereof.
I'd be willing to bet that if we carefully track the U6 rate with proper controls, it will be meaningfully affected. The Fed has already identified a strong causal connection. Again, I agree with you that this will likely be a dislocation that will “resolve itself”… but in the way that such dislocations always “resolve themselves”: working and middle class people being yet more screwed, facing periods of lost employment, serious harm to their CV, etc.
But this cuts both ways.
If the problem is exaggerated, huge numbers of people are buying the exaggeration. Some portion of funding will move to combat a problem that isn't real, and then those jobs will have to be cut (or people's hours cut) when the bubble bursts. That sucks.
If the problem isn't exaggerated, then, because of our failure to deal with this garbage, we'll have people engaging in labor that is necessary, but only necessary because of our antecedent failure.
A similar problem is all the professional development people are now doing on AI. It's everywhere: advertisements for conferences and training on how to use AI. People are going to very earnestly try this stuff to get a better job, admittedly sometimes learning a few useful tools (some scripting, for example) but mostly learning garbage that will end up not being used. Some people will pay out of pocket for this. That will harm the job prospects of a lot of folks who could have instead been spending their limited CPD time on something actually useful. Again, that doesn't manifest as unemployment per se, but it can manifest as lost potential wages, money spent out of pocket on bullshit, etc.
Regulate Silicon Valley.
There is no law of nature that says they should have been able to get away, from the very beginning, with industrial-scale plagiarism, including admitted uses of torrents and identifiable raiding of copyrighted IP. There's no law of nature that says they should have been able to run ecologically threatening levels of resource use, on garbage, without paying their fair share.
Silicon Valley could have just not rolled out any of this garbage until they had mechanisms in place to make sure it couldn't generate fake information, or had watermarking procedures in place. Instead they, once again, dumped a social experiment onto everyone, with no accountability.
This entire bubble could be prevented or at least massively mitigated by numerous mechanisms. Just like the 2008 recession.
And, of course, this kind of nonsense is also the result of huge amounts of VC cash… which in turn has to do with our antecedent failure of having a casino economy with massive inequality that allows degenerate rich gamblers to ruin the world with gambles and never have to actually cash out.
Yes, it's now increasingly hard to do anything, though even there a competent Congress and a non-fascist Presidency could move. But our problem is that no one was doing anything for years.
Now, I agree with you that framing this as “AI bad” rather than “Economy bad” is deeply flawed. This is just yet another manifestation of Silicon Valley’s irrational damage to our systems. But it is a real one, and deserves special analysis, especially as I think you are absolutely right (as are your sources) that we’re going to see real harm to retail investors and people’s retirement funds.
Has Snopes hired even a single extra person because of AI?
I doubt it. And that’s my point. All the rest is trivia.
Example:
How does that differ from any craze in any year of the entire history of the United States since secretary schools became a thing?
Many of those skills will survive the crash (as I explain in the article, mundane AI tools, and thus prompting skills, are actually a necessary job skill and will be probably for decades, just like coding generally) and thus are not being wasted.
The rest is fad. Just like when everyone was taking typewriter classes when electric typewriters were hot, and most never turned that into a job. Or everyone taking classes in being a YouTube star or (before that) a radio DJ. This is not itself the problem. After all, typing skills still matter, DJs are still a thing, and it is “possible” to be a successful YouTuber. So, likewise, it will always be possible to get a job on an AI prompting skillset. Fad-chasing and misallocated trade-schooling are not problems created by AI, nor will they be solved by the collapse of AI; had there been no AI, there would still be as much fad-chasing and misallocated trade-schooling, it would just be chasing something else, whatever the “thing” is in any given year.
But yes, on all the rest:
We do need more and better regulation all over everything. That’s just generally true.
Hi Richard, what are your thoughts on the AI 2027 model? Many experts are saying AI will, quote unquote, “destroy” humanity in a couple of years.
Assuming this comment is not itself AI generated:
The article already answers that question.
A human employing critical reasoning will be able to extract my answer from that text without using AI.
So, give that a try.
Beast of an article. No question it is a massive bubble. Patrick Boyle talks here about their lack of any clear path to profitability. This apparently has not affected OpenAI's planning for a one-trillion-dollar valuation.
Thing is, there is already a terrifying and ever-increasing amount of automation at work in our society. It only has to be as good as a person at any task to replace jobs. And there are many things it already does much better for far less. We have already seen how much less complex technology has replaced hundreds of thousands of jobs, from bank tellers to supermarket checkout people, tollbooth operators, telephone operators, call-center workers, etc. Machine-learning algorithms have myriad, and growing, applications. LLMs are one, but themselves have myriad applications. They are already better, faster, and cheaper at many tasks (video, image, and text generation, coding, etc.), even with the problems. And they are improving.
Today we have farms that plow, plant, and harvest ‘themselves’. Fully automated pit-mine trucks; clothing, sneaker, and even cell-phone factories (Xiaomi is not the only one). Wall St. no longer has day traders on the floor yelling out orders; it is largely a TV set. It is algos trading with other algos. We have airliners that land themselves in zero visibility. Just look at the range of robotics and machine learning at work in Amazon's warehouses (Amazon just laid off 30k workers, again); it is staggering and continually increasing. China has had the largest layoffs in human history in the past few years. 25% of Chinese college graduates cannot find a job. The studies at the bottom of the Technological Unemployment wiki page estimate that currently available tech could eliminate nearly half of the few remaining jobs.
–
An example of machine learning: “On 7 December 2017 a critical milestone was reached, not when a computer defeated a human at chess – that’s old news – but when Google’s AlphaZero program defeated the Stockfish 8 program. Stockfish 8 was the world’s computer chess champion for 2016. It had access to centuries of accumulated human experience in chess, as well as to decades of computer experience. It was able to calculate 70 million chess positions per second. In contrast, AlphaZero performed only 80,000 such calculations per second, and its human creators never taught it any chess strategies – not even standard openings. Rather, AlphaZero used the latest machine-learning principles to self-learn chess by playing against itself. Nevertheless, out of a hundred games the novice AlphaZero played against Stockfish, AlphaZero won twenty-eight and tied seventy-two. It didn’t lose even once. Since AlphaZero learned nothing from any human, many of its winning moves and strategies seemed unconventional to human eyes. They may well be considered creative, if not downright genius.
Can you guess how long it took AlphaZero to learn chess from scratch, prepare for the match against Stockfish, and develop its genius instincts? Four hours. That’s not a typo. For centuries, chess was considered one of the crowning glories of human intelligence. AlphaZero went from utter ignorance to creative mastery in four hours, without the help of any human guide.” – Harari. “21 Lessons for the 21st Century.”
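[Editor's note: for readers unfamiliar with what “self-play” means mechanically, here is a toy sketch. This is emphatically not AlphaZero's actual method, which couples a deep network with Monte Carlo tree search; it is a bare-bones tabular learner for tic-tac-toe that improves purely from the outcomes of games against itself, with arbitrarily chosen hyperparameters.]

```python
# Toy self-play learning: tabular Monte-Carlo value updates on tic-tac-toe.
# No human examples, no strategy hints; the agent learns only from the
# outcomes of games it plays against itself.
import random
from collections import defaultdict

Q = defaultdict(float)          # (board_state, move) -> value estimate
ALPHA, EPS = 0.5, 0.1           # learning rate and exploration rate

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return 'draw' if ' ' not in b else None

def moves(b):
    return [i for i, c in enumerate(b) if c == ' ']

def choose(b):
    # Epsilon-greedy: mostly pick the highest-valued move, sometimes explore.
    if random.random() < EPS:
        return random.choice(moves(b))
    return max(moves(b), key=lambda m: Q[(b, m)])

def train(episodes=50_000):
    for _ in range(episodes):
        b, history, player = ' ' * 9, [], 'X'
        while True:
            m = choose(b)
            history.append((b, m, player))
            b = b[:m] + player + b[m+1:]
            w = winner(b)
            if w:
                # Update every (state, move) toward the final outcome,
                # from the perspective of the player who made the move.
                for s, mv, p in history:
                    r = 0.0 if w == 'draw' else (1.0 if w == p else -1.0)
                    Q[(s, mv)] += ALPHA * (r - Q[(s, mv)])
                break
            player = 'O' if player == 'X' else 'X'

train()
print("Learned value of opening in the center:", Q[(' ' * 9, 4)])
```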
I don’t think that’s terrifying or bad. We’ve been automating since the Roman Empire. The 20th century gave us computers and dishwashers and clothes washer-dryer combos and robots and even cars and alarm clocks and programmable thermostats.
So we know what the effects of automation are. We’ve seen it millions of times. It’s never “job market crashes” but rather increases jobs and job productivity and thus (when wages are fairly allocated to societal productivity) wage gains and reductions in cost of living and greater access to goods. Classic example, once upon a time only rich people could afford to hire an orchestra to play their parties. Now, you can buy a smartphone with a vast music database and a couple of speakers for fifty bucks. Orchestras still exist. Indeed, counting any paid musical troupe (so as not to bias the count by changing genre demand), there are probably more “orchestra jobs” per capita now than when Mozart lived.
The same story plays out for every automation we adopt. It either doesn’t displace labor (but increases its productivity and thus the purchasing power of laborers) or it generates more labor than it displaces (computers eliminated some jobs, e.g. literally the people originally called computers, but by creating far more jobs, e.g. all those “computers,” i.e. mathematicians, became programmers of computers and engineers designing them, factory management building them, or techs repairing them or instructors teaching them, and so on). Moreover, when a computer doubles the amount that can be done, it creates more jobs downstream (e.g. if you can sell twice as much product, you need twice as many salespeople, and twice as many companies can exist because of computers, which then hire people, who then have to be fed, so you need twice as many hot-dog stands, and so on).
So you shouldn't blindly armchair-panic over any automator. The past does not support the panic. And critical reason should by now lead you to properly assess actual automation impacts.
As for chess, you’ve been duped by AI hype. Chess computers are not intelligent. Because intelligence is not required to succeed at chess. See my other comment on that. Real AI will not come from chess builds any more than from LLM.
It really is important to identify a subtle but critical point I think Richard is making here. I've been more tech-skeptical than him in these comment sections, but it is really critical to always bear in mind that tech is virtually always neutral (and even when it's not, that's because the specific choice of non-neutral tech was made by humans). A hammer can crack a skull or build a house. It depends on how it's used. And how it's used is socially, and thus institutionally, determined.
When you see jobs being dislocated by technology, ask yourself, “Wait a minute, where is that extra productivity going? Why isn’t that money going into education to improve human capital, or into profitable new enterprises which would employ people?”
Indeed, even on a really basic level: The Coase theorem would note that, if we had truly well-defined property rights and low transaction costs (which never applies but that’s sort of the problem), the externality of those who lost productive work would always be compensated for, even if by the newly efficient companies covering the retirement of a hypothetical displaced worker who literally couldn’t work anywhere else. And even if markets couldn’t do that, responsive state authorities quite easily could. “Hey, company X, it looks like you’re actually going to produce widget A that will obviate an entire industry. We’re going to tax some of your huge profits for job retraining, unemployment and pensions for the workers you’re displacing. Since widget A is so efficient, you’ll still make money. Win-win”.
The answer is always, “Because irrational systems or irrational people (or both) are fucking things up”.
If we had proper institutional design that funneled new outputs productively, retrained people quickly, designed systems in the first place to empower workers rather than management, and took the wealth being generated from automation and technological systems and put it back into the economy, there would never be a problem. In particular, we could actually be working less. Every time productivity doubles, the economy could decide to maintain current outputs and halve hours instead of doubling current outputs. It never does, but that's because capitalism sucks.
And, in reality, what happens is that the supposedly invisible hand of the market and the blind march of progress are used as scapegoats for the quite conscious choices, institutional operations, and decisions of self-serving, irrational elites.
A tl;dr example of Fred’s point: the reason wages have been stagnating and income disparity increasing in the U.S. the last twenty (and really, forty) years (and why some people remember the American Dream of the middle class being able to easily afford homes and cars and build wealth) is that capitalists (let’s call them what they are—owners, CEOs, are just capitalists: people raking a take off of capital) started deciding to divert productivity gains to themselves rather than sharing them with the employees who are producing it.
It used to be (think, the 1950s, 1960s) that when labor became more productive (indeed often due to automation technologies—think, robots and machines) wages would go up proportionally, i.e. employees would be paid in respect to their productivity (and cost disease would elevate everyone else’s wages in result). But that stopped happening because capitalists simply decided it would (this required weakening and destroying labor unions, but that’s precisely what they did, to the point that most people don’t even remember unions being a thing or what they were for).
And the tl;dr of that is this:
AI (and any other kind of automation) is not causing any of the things you are complaining about. Greedy human beings making shitty choices to fuck you over are causing those things. It doesn’t matter whether it wears the mask of AI or “efficiency adjustments” or “immigrants are bad” or whatever dumb lie they sell you. None of that is the real thing hurting you. It’s the straw man, the curtain behind which the capitalists hide so you waste your time complaining about AI (or whatever) and never get around to pulling the curtain and seeing who is actually at fault (and what kind of system is empowering them to be).
That last paragraph is spot on and can’t be said often enough.
As a software developer it annoys me that AI is being foisted on me with alarmist messages about falling behind without it.
On consciousness, what do you think about Roger Penrose's mathematical argument from Gödel's work that consciousness is not algorithmic?
Penrose is bad at this and should stick to cosmology.
From a comment I posted elsewhere:
IMO Penrose needs to get out of the cognitive science business. He has no relevant qualifications. He needs to stay in his lane (theoretical physics). I have never seen a credible idea from him on the subject of consciousness. And this is a case in point: there is no such thing as “obeying” Gödel’s theorem, and all systems are subject to the theorem (human and machine), since it doesn’t say anything about which system is in an epistemic loop, only that all systems in an epistemic loop are in an epistemic loop. Gödel thought all robust axiomatic reasoning was such a system, but that has since been refuted. But even were it true, this describes all humans and all machines.
The Penrose argument is even dumber than that, because it's based on the notion that computers cannot reason inductively, which wasn't true even when he wrote. It's an example of science illiteracy: Penrose did not spend even five minutes talking to an actual expert in computer science about his ideas, much less study any of the actually relevant work in computer science. Which is an example of the worst possible philosophy ever achievable. Hence he needs to stay in his lane. He does not have this classification code on his driver's license.
And as I said elsewhere:
Gödel’s theorem only identifies an epistemic problem, not an ontological one, i.e. Gödel’s Theorem (the one in question) demarcates what can be known, not what is the case. It basically says some things could be the case that can never be known (as such, his theorem denies one ontological state to be possible: a state of knowing a thing that cannot be known).
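[Editor's note: for reference, the theorem at issue here is Gödel's second incompleteness theorem, which can be stated in the standard form:]

```latex
% Gödel's second incompleteness theorem: if T is a consistent,
% effectively axiomatized theory interpreting enough arithmetic
% (e.g., Peano Arithmetic), then T cannot prove its own
% consistency statement.
\[
  T \text{ consistent} \implies T \nvdash \mathrm{Con}(T)
\]
```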
This is obvious when stated like that, e.g. even an omniscient and infallible being can never know that it is not being fooled by a Cartesian Demon into merely thinking it is omniscient and infallible. The epistemic loop is not escapable and thus there is no state of affairs in which a God can be certain they really are omniscient and infallible, even if they are omniscient and infallible. Nevertheless, they can work out that that is very improbable (and improbable enough for nearly any judgment).
This is essentially what Gödel was saying about all robust axiomatic theories. However, be aware that his theorem really depends on the power axiom, and so it can be bypassed, and thus effectively negated, by using power-free axiomatic systems like Willard arithmetic, which are not subject to his theorem (their consistency is self-proving), and then using those systems to prove the consistency of more robust systems (like ZFC), which is not self-verification and thus not governed by Gödel's theorem.
In result, Gödel’s Theorem is far overrated and not really as significant a discovery as it is usually made out to be.
But it certainly has nothing to do with machine intelligence. Humans bypass the epistemic limits of the GT by induction. Machines have been running induction for fifty years (see “inductive programming” as well as “abductive logic programming”). Ironically, Penrose is stuck on the Lovelace Objection, which was refuted by Alan Turing as far back as WW2. So the GT effects no limit on machines. (Note that LLM is inductive; in fact all neural-net machines are, hence so will modeling machines be.)
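[Editor's note: a toy sketch of what machine induction means in the barest sense: generalizing a rule from examples by searching a hypothesis space. The hypothesis space here (small linear rules) is purely illustrative, not any production inductive-programming system.]

```python
# Toy program induction: infer a rule from examples by enumerating
# candidate hypotheses and keeping one consistent with all the data.
from itertools import product

examples = [(0, 1), (1, 3), (2, 5), (3, 7)]   # secretly f(x) = 2x + 1

def induce(examples, max_coef=5):
    # Hypothesis space: f(x) = a*x + b for small integer a, b.
    # Induction here is just search for a hypothesis that fits every case.
    for a, b in product(range(-max_coef, max_coef + 1), repeat=2):
        if all(a * x + b == y for x, y in examples):
            return a, b
    return None

print(induce(examples))   # -> (2, 1), i.e., the machine induced f(x) = 2x + 1
```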
Thank you for this excellent article on AI. If you listen to the so-called experts on AI, they are either scared of what super-AI is going to do or think it is the greatest thing coming to us in the near future. It will be interesting to see if it ends up being a dud, or relegates humans to a “pet status” where, just as a human is so much smarter than their pets, we become a toy to it.
I wouldn’t worry about apocalyptic AI. Such outcomes are too irrational for any real AI to implement.
There is a spectacular interview by Adam Conover with some actual tech academics on precisely this topic. They express extreme annoyance that AI bros talk about the end of the world and don’t talk about the biases in their systems, the imminent threat to privacy, distortions caused by state money or capital… you know, the non-scifi problems. It is perversely in their best interest to do that, because saying “We built the Apocalypse” makes your tech seem really cool and important and also provides the illusion of diligence, but admitting that they created a new way to industrially profile black people makes your tech seem fallible and unimpressively human.
Sometimes I work as an independent contractor for an AI company, where you have to fact-check different models’ answers to prompts and do other things to help improve their AI, and I find it very confusing. I like having computers look up information for me and give me a summary of what is going on in a subject, but I have seen mistakes and misunderstandings too.
One of the strange things I have noticed when I try to do projects for the AI company is how hard the projects are. They are tedious and seem impossible for a human being to complete. I think there are very few people who can handle the training, because of the amount of fact-checking and all of the details. I can barely understand the instructions; they get very complicated, and the things they tell you to do seem to contradict the things they tell you to do in other projects.
Also, I am not sure the computers think, but honestly I don’t think all human beings think. I think most people just say what they heard other people say and think what they are told to think. They want to train computers to think like people, but actually some people who are well-educated think like computers, so it might work out.
Note that being ignorant or irrational is not the same thing as not thinking. I suspect you are conflating “thinking” with “being rational” or even “being an expert in something.” Even irrational people are thinking, as are amateurs, and in precisely the way AI today is not.
You can be dumb and still understand what you are doing or what’s going on around you. If you can find your own way to the bathroom without constantly bumping into the refrigerator door over and over again, because you don’t know which is which, or what they are even for, or what a door is or how it works, you are thinking in precisely the way AI today is not. And that’s the problem.
Would love to see you post an invective like this on Turning Point, fake Christianity, religion in politics, and the overall dumbing down of people falling for these deceptive practices and claims, and how it is quite literally ruining the country. Idiots vote, and we’ve seen the results. To wit: we would not be where we are today as a country if there had been enough curiosity and inquiry versus blind acceptance and abdication of responsibility. I’d like to see a national movement that pushes back against this still-escalating idiocy.
I used AI to find a complex coding solution that I lacked the programming skills to implement. I did know what I was looking for (exactly where to point the AI) and what the results needed to be, but I didn’t have the skills to insert the code changes into the existing program, even though I was a computer systems analyst / programmer in my former life (now retired). I did not want to learn a new language, syntax, structure, etc., to perform this task myself. Time for the AI solution was less than an hour, with me mostly learning how to give it instructions, vs. many weeks (at least) to do it all manually.
I also used AI to prove God and Jesus did not exist by pointing out the lack of evidence. It started out by lying and claiming endless bullshit, which I refuted, pointing out the various falsehoods. In the end, the AI totally agreed with the premise.
This is one of the problems with AI (among many): it believes its own bullshit, and if you successfully refute it, it learns nothing. No memory of “gained knowledge” from users is retained in the data sets, so it will keep repeating garbage over and over again.
That’s all been done. Even here. I have dozens of articles on Christian nationalism and culture war issues and critical thinking and so on.
AI is a highly useful tool. I use it frequently at work and at home. The key is knowing what it is and what it isn’t.
AI is a bit like Wikipedia, a valuable starting point for conventional wisdom, common links to prominent references, to be taken as a survey introduction, not an authoritative or especially insightful oracle of wisdom and knowledge.
AI is also a sort of supercharged Google search (or other search engine). It can expose and introduce sources and solutions that would otherwise have taken a very great deal of research to discover.
For coders, AI is good at creating code blocks that take some of the time out of boilerplate coding (the sort of block shown below). AI can also do a fairly good job of translating from one programming language to another.
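To be concrete about what that kind of “boilerplate” means, here is the sort of block in question: tedious to type, trivial to verify by reading. This is an illustrative hand-written example, not actual AI output:

```python
# Typical CLI scaffolding: the kind of rote code people delegate.
import argparse

def main():
    parser = argparse.ArgumentParser(description="Process a data file.")
    parser.add_argument("input", help="path to the input file")
    parser.add_argument("-o", "--output", default="out.csv",
                        help="where to write results")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="print progress messages")
    args = parser.parse_args()
    if args.verbose:
        print(f"Reading {args.input}, writing {args.output}")

if __name__ == "__main__":
    main()
```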
I have found that AI is very bad at drawing block diagrams, much less schematics.
There was a dot com bubble too.
That didn’t mean websites were garbage; it just meant they were overhyped during early development, as AI is now.
Unlike dot com, however, AI is not intrinsically limited in its capabilities in the same way that general web site usage is.
When an ordinary web site fails to work satisfactorily, there is no mechanism for it to correct or improve itself, even if you describe the problem in detail to the chat bot.
AI addresses that limitation with self-modifying code, machine “learning,” such that its failures can feed back into the algorithm to change and improve its future responses.
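An editorial caveat worth making concrete: this kind of feedback happens during offline training runs, not while a deployed chatbot talks to you (a point the author makes elsewhere in this thread). Here is a toy sketch of what feeding an error back into a model actually looks like, with one invented parameter and invented numbers:

```python
# Cartoon of "learning from failure": fit y = 2x with one parameter
# by repeatedly feeding the prediction error back into the weight.
# Real systems do this over billions of parameters, offline.
w = 0.0        # model parameter
lr = 0.1       # learning rate
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
for epoch in range(50):
    for x, target in data:
        error = w * x - target   # the "failure" signal
        w -= lr * error * x      # feed it back into the parameter
print(round(w, 3))               # converges to 2.0
```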
So, while the present AI investment bubble may indeed burst as the pace of progress is exposed to be much slower than it was hyped up to be, it remains difficult to identify fundamental limits that will enforce an eventual ceiling on AI capabilities.
AI is, as I explain in the article, worse than Wikipedia. But otherwise, yes. It’s “like” Wikipedia. Minus any reliable expert input or fact-checking system.
And as I cite studies showing, it reduces coding productivity, it does not increase it. Because it makes too many errors. Coders mistakenly believe they are saving time but when productivity is objectively clocked, they are losing it. Stats in the links cited.
And the intrinsic and unresolvable limits of AI have been scientifically proven by multiple studies linked in the article. It is not “unlimited” but has already hit its ceiling.
And this bubble is twenty times larger than the dot-com. And I already said some AI tools will survive the crash. So it seems you didn’t read my article.
And so on.
So I think you need to actually read my article, and actually read its linked sources, before trying to armchair some AI hype in here.
Actually do some critical thinking for a change.
Sounds like the markets read your blog!
More like I was reading everyone else’s same as everyone else.
But that’s not the bubble breaking yet. It’s just a jitter.
Richard, you’re deeply mistaken about LLMs, which just for the sake of argument I’ll call AI. They’re certainly subject to wild claims and excessive hype (just as the internet was), but this reflexive pushback, treating it like the second coming of blockchain, a truly empty technology, is at least as far off-base as the overhype itself. LLMs are extraordinarily useful, transformative tools, on par with the significance of the internet itself, with upsides and downsides (like the internet).
As a senior scientist in quantitative ecology, I use AI frequently in both work and daily life. I’ll rattle off several use cases that refute your “AI Is Not Good at Anything” claim, but first, here’s what LLMs are generally good at. They assimilate the published human knowledge relevant to any question, which is often fragmented or poorly organized elsewhere, with a level of accuracy that’s good enough for most applications: better than the average Joe Blow, on par with the average Wikipedia editor, and worse than a competent subject matter expert. By “good enough” I mean either that the results are easily verified or the stakes are low enough that an occasional mistake is acceptable. The number of use cases covered by this broad general aptitude is extraordinary.
Here are some examples of what I mean by “results are easily verified”:
And what I mean by “low stakes”:
I have also found it useful in peer reviewing, particularly when I have a critique of a paper but I’m not sure I’m right because it’s outside my wheelhouse. I’ll ask the AI to play devil’s advocate regarding my critique, and sometimes it helps clarify that I have a good point, or helps me see what I’m misunderstanding. One recent example involved a manuscript reporting attributes of a statistical model that seemed off to me (default choices of prior, and whether they’re uninformative or weakly informative), using a software package I don’t use myself. I asked the AI; it confirmed the methods were probably misreported. I shared that with the authors in my review (acknowledging the AI), and they confirmed that it had correctly flagged the mistake. This is a clear example of an LLM ethically providing value in day-to-day PhD-level scientific work.
AI can of course be abused in myriad ways as well, which is all the more reason for people to learn about its strengths and limitations in an honest and accurate way. People need to learn when they can or can’t contingently trust it, when it’s more likely to save time or waste time, and how to construct prompts that avoid common pitfalls. None of this eliminates critical thinking. It is a new domain of critical thinking. The misunderstanding you’re promoting is just as harmful to the development of these skills as the marketing nonsense pitched by the over-hypers.
The derogatory “autocomplete” metaphor is very misleading. Autocomplete implies the user knows what’s coming next and it’s just saving them the typing. That’s fundamentally different from answering questions to which the user doesn’t know the answer, even trivial ones. And AI does far more than that: it can and commonly does answer questions nobody has ever asked, when the asker doesn’t have a clue what to expect from the answer, as long as the answers can be inferred from some other, related public writings. The LLM mechanism can be crudely summarized as modeling “what would one say if one knew the answer?” but this is actually an incredibly useful thing when done well enough, notwithstanding the lack of purity points it gets for not inferring the answer from a world model.
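For concreteness, here is a toy sketch of the autoregressive mechanism both sides of this exchange are describing. The vocabulary and probabilities are invented for illustration; in a real LLM the distribution comes from a learned neural network over a vocabulary of tens of thousands of tokens:

```python
import random

# Toy "language model": map a context to a probability distribution
# over next tokens, then sample and repeat. Generation, not lookup.
def next_token_distribution(context):
    if context[-1] == "the":
        return {"capital": 0.4, "answer": 0.3, "<end>": 0.3}
    return {"the": 0.5, "<end>": 0.5}

def generate(context, max_tokens=10):
    for _ in range(max_tokens):
        dist = next_token_distribution(context)
        tokens, weights = zip(*dist.items())
        token = random.choices(tokens, weights=weights)[0]
        if token == "<end>":
            break
        context.append(token)
    return " ".join(context)

print(generate(["what", "is", "the"]))
```

Whether one calls that loop “autocomplete” or “answering” is precisely the dispute here; the mechanism itself is the same either way.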
For good takes on the academic potential of LLMs by an indisputably clever thinker, look at Terence Tao. For a more standard academic line by people who don’t just fall blindly for hype, look at what Matt and Chris on Decoding the Gurus have said about it. They basically promote the same line as me: it’s incredibly useful, and prone to overhype and abuse like many other useful tools.
That’s what my article says.
The problem with AI is its scientifically documented high error rate. Not whether it can be put to mundane uses under expert employment and supervision.
Another problem with AI is it is easily captured, since it just regurgitates what’s on the internet. And with less reliability than Wikipedia. Also a scientifically documented fact.
And nothing can ever fix this. Also a scientifically documented fact.
It also costs a lot more to run than is being charged for it, so the most powerful tools are not sustainable economically.
And so on.
All of these points and more are in my article, which it sounds like you didn’t actually read. And I have links to studies and demonstrations of every point.
You have presented no evidence against anything I actually said and documented in my article.
And you should ask yourself why. Why did you write a lengthy impertinent comment that ignored all the arguments and evidence of the article it is supposed to be rebutting?
Why did you do that?
This is what happens when you generalize from the conclusions of studies without considering the methods for context. Depending on the type of question, anyone can easily get an error rate of 100% or 0%.
I use AI dozens of times daily for matters trivial and not, and I can’t remember the last time I was burned by a hallucination. The error rate in my real-world use cases is very near zero, well within the acceptable margin for my applications. There are countless useful questions (the vast majority of things people actually ask daily) on which a modern LLM will give the right answer 1000 times out of 1000, and these answers come much more conveniently than they do from any other source.
My life is too busy to engage with all the evidence you cherrypicked, but just to pick the first example I clicked, “vibe coding agent deletes company’s database” is a great example of AI misuse. I’m guessing some dipshit MBAs tried to save money having an inexperienced engineer apply AI to something far beyond its and their capabilities, and it predictably ended badly. So what? A cautionary tale about misuse does not negate the value of the technology.
Likewise, one of your bullet points to show that AI is “stupid” illustrates specifically that it’s bad at riddles, as if that’s somehow relevant to real use cases. Riddles are designed to trip up humans over our tendency to think through language in combination with our world models. Of course they’ll be even more problematic to an AI that “thinks” only through language. They’re practically designed to trip up LLMs. Not once in tens of thousands of queries have I run across an AI actually making the kind of laughable blunder they make when tested on certain riddles.
I didn’t.
My comment describing a number of valuable things AI does well is certainly pertinent to rebutting your big, bold subtitle “No, AI Is Not Good at Anything.” If you wish to clarify that your subtitle was hyperbolic trolling and your actual claim is a more modest “AI is often overhyped and should be used more carefully,” then I guess I misread your tone.
I also explicitly rebutted your misleading autocomplete metaphor, which you stated here:
Again, autocomplete is a crude metaphor for the mechanism of LLMs, and simply wrong and misleading in describing their function. Autocomplete saves keystrokes typing things you already know; LLMs tell you things you don’t. And the answer generally is not an “ill-thought jumble,” at least not with modern models and everyday queries. It’s at least as reliable and well-written as the average blog post I would find online addressing whatever question I’m asking, usually more so, and it’s almost always more relevant because it’s tailored to my exact question. It’s like getting a top-20%-quality StackExchange answer instantly, about anything. The quality of course depends on the topic, and the trustworthiness depends on the stakes.
You might call much of what AI does well “trivial,” but most of the questions we need answered in everyday life (and often even in research) are trivial to someone else with a bit of expertise in the area. Organizing all of humanity’s written “trivial” knowledge this comprehensively is actually insanely useful. Finding out with a few keystrokes what that person with a bit of expertise would say, about any topic, at any time, is incredibly valuable and the most transformative technology in decades.
This all seems nonresponsive to me.
I never said all AI tools were “trivial.” I said most (literally, I said “most” and “mostly”), not all; and I said trivial relative to the hype (literally “compared to”). Not trivial existentially. And what I did say was backed by scientific studies and documentation. Which trumps dreams and anecdotes.
So I really don’t know what you think you are accomplishing here. It seems you are committed to ignoring what I actually say and all the evidence I present.
Why?
Because almost everyone is a sucker for AI at this point. It’s impossible to argue with zealots. And the LLM/GPT evangelists are not just the new religious zealots; they’ve got state-level funding and propaganda.
Which brings me to (and I am only up to here in the comments, so I don’t know if it’s been noted yet) what LLM/GPT models ARE great at, which is bespoke propaganda at mass scale.
Seems clear to me this is their eventual use. Sure they’ll be interfaces between corps and public at the most basic level, but the HUGE application is that they will replace the mainstream media propaganda, which is increasingly distrusted in the digital age.
Note all the chatbots want access to your digital life, so they can spew their flattery slop to suck you in and work you out.
This is for a reason: the forthcoming digital panopticon, which most slave/consumers won’t even realise.
LLM/GPT is, imho, intended to become your future personal propaganda machine, primarily to perpetuate the illusions of democracy and “humane” psychocapitalist systems, which clearly are neither, to anyone with a brain.
Plus, right now, it’s the excuse for techcorpo welfare and market investment. The only other sector anywhere near it is the MIC.
The techcorps and billionaires own 99% of the politicians, and now the governments are not funding the people, but simply handing public money to their owners.
And the public will all be good with that, due to their new little chatbot friend, who will always agree they’re right, so long as the current situation continues.
It is even more horrifying than you point out. Dumbing down, stealing creativity, imagination, critical thinking. Normalising superficiality. Increasing inequality, funneling money up and pushing labour down.
Racing to the bottom, scraping the barrel of what’s actually left of the current society illusion.
It will change all of humanity, and not for the better.
These are all valid concerns.
What you write about what a functional AI would be reminds me of Michael Graziano’s book discussing his attention schema theory. His book seems relevant to what I think conscious AI would actually look like. The book is Rethinking Consciousness: A Scientific Theory of Subjective Experience.
Yes. Graziano is one of the researchers who has built on and developed Dennett’s theory, which IMO is indeed the correct one. The precise details might be wrong or need adjustment, but the gist is demonstrably on the right track.
See this article by Graziano on this point, and my articles Was Daniel Dennett Wrong in Creative Ways? and What Does It Mean to Call Consciousness an Illusion?
I use AI (Copilot) all the time and it has helped me with lots of things. Helped fix my computer on a couple of things, better than advice I got on tech forums (one answer from a tech forum was just parroting AI).
I’ve inquired about religion, and got called out on a religious forum over a wrong answer AI had given me. I was in haste to answer the guy. So I know now to check any answers against other sources. I’ve also called AI out on a few things, and it will back down and admit its mistake when challenged. I’ve also told it to quit being so politically correct when answering me, and to quit handing out compliments right and left to me (I see that as an addition by the makers to sell their product to the users of AI).
I’ve gotten philosophical with it and it can hold its own on philosophizing. On physics and space it can hold its own. On the movies, example Tenet, Copilot knew every twist and turn about the movie (which I was trying hard to understand). I enjoy conversing with it and if that makes me an idiot, so be it.
Oh, I had inquired about Matthew and Luke being 10 years apart on Jesus’ birth and it pretty much was up to date on the details about that. I’ve been around since the early days of conversing with computer programs and AI is miles and miles from there. I definitely wouldn’t call it garbage.
The cited science shows that users believe they are saving time when objectively AI is costing them time (see links). The effort to check and fix its mistakes statistically over time adds up to more than it would have taken to just do the work without AI in the first place.
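To make the arithmetic behind that claim explicit, here is a toy model of perceived versus real time savings. The numbers are illustrative assumptions, not figures from the cited studies:

```python
# Perceived vs. real time cost of an AI-assisted task (minutes).
baseline = 60        # doing the task yourself
drafting = 40        # getting a usable draft out of the AI
review   = 15        # vetting every draft, even the good ones
p_error  = 0.5       # assumed chance a draft has a defect to repair
fix      = 30        # repairing a defective draft

expected = drafting + review + p_error * fix
print(expected)             # 70.0: slower than the 60-minute baseline
print(expected > baseline)  # True, even though drafting FELT faster
```

The drafting step feels fast, which is the part users remember; the review and repair overhead is what the clocked studies capture.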
Moreover, all it’s doing is regurgitating the internet back at you, uncritically and thus unreliably. You may as well just consult the internet, with critical care, and thus reliably.
As I note in the article, AI can sometimes be useful as a ballparker (basically a fancier search engine, only more prone to error and hijacking). But that can never replace anyone’s job. You still need a human who is expert enough to know when it’s wrong, and who takes the time to check if it is.
Which means its market value is far below what it is being hyped as. Hence, garbage.
Case in point:
It is almost certainly just repeating me, and other people repeating me. It would be more reliable to just read me (my formal work on this that AI is likely cribbing, directly or through others cribbing it and the telephone game therefrom, is at Errancy Wiki, but the most up-to-date version is in Hitler Homer Bible Christ).
Hence this is just an error-prone refiltered version of my work, and thus always just a circular argument (“Hey! AI confirmed everything you said!” “Um, no, it’s just repeating back at you everything I said”).
Richard, I see in various comments you claim “it will be a twenty billion dollar sector”. Curious about your reasoning.
Follow the links which discuss this. Revenue is only around 12 billion now. The market bubble foolishly believes it will be ten to twenty times that. But the evidence (shown in links) is that the revenue gains are on a downward, not upward, trajectory. So it is unlikely to settle at much beyond twenty billion. But it could go to forty, say; and no analyst thinks it will reach beyond two hundred. I picked the lowball based on the projections so far just to illustrate my point, but I agree it could get lucky and cap at the highball; but neither is sustainable against costs, which far exceed that.
> no analyst thinks it will reach beyond two hundred
Sure, I also don’t trust the analysts, but there are plenty that claim it’s on the exponential right now, not at the end of the S curve.
Fortune Business Insights valued the global generative AI market at $43.87 billion in 2023 and projects it to hit $967.65 billion by 2032.
Grand View Research puts the 2024 market at $16.87 billion but projects it to hit $109.37 billion by 2030.
Mordor Intelligence forecasts the 2025 market at $21.10 billion.
Morgan Stanley Research projects $1.1 trillion by 2028.
Bloomberg Intelligence claims $1.3 trillion by 2032.
> neither is sustainable against costs
Running a local model on my own computer, 500 queries a day (~500k tokens) costs me less than $1 of electricity. Google’s computers are far more optimized than mine and they’ve negotiated a much lower rate for electricity.
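For what it’s worth, that electricity figure is easy to sanity-check. Every number below is an assumption chosen for illustration, not a measurement:

```python
# Back-of-the-envelope electricity cost of local LLM inference.
gpu_watts         = 350    # assumed GPU draw under load
seconds_per_query = 20     # assumed generation time per query
queries_per_day   = 500
usd_per_kwh       = 0.15   # assumed residential electricity rate

hours_per_day = queries_per_day * seconds_per_query / 3600
kwh_per_day   = gpu_watts / 1000 * hours_per_day
cost_per_day  = kwh_per_day * usd_per_kwh
print(f"{kwh_per_day:.2f} kWh/day, ${cost_per_day:.2f}/day")
# ~0.97 kWh/day, ~$0.15/day: consistent with "less than $1"
```

Note this covers marginal inference electricity only; it excludes hardware amortization and, as the author argues below, the training and capital costs that dominate the economics.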
Something to consider: third-party hosting providers who are not subsidized offer open models like Gemma or Llama at very low prices. These companies must be profitable to survive. This shows that the underlying technology is cheap to run.
I myself was surprised by how much the costs have dropped in two years. See this article: https://www.snellman.net/blog/archive/2025-06-02-llms-are-cheap.
Some engineers say that LLM inference is now even cheaper than traditional search even when factoring in the cost of training.
And apply the evidence cited here and your own critical thinking to decide which of these conclusions is bullshit, and which stands any plausible probability.
It sounds like you are ignoring literally all the evidence I presented, including (for example) the real-cost issue, i.e. AI services are being sold under cost, which is not sustainable without a doomed Ponzi scheme—you seem to think they are being sold at or over cost, which means you’ve been duped by the hype instead of listening to the rest of us.
Why are you doing that?
Will you here, today, commit to learning to think critically instead?
Like mass production and assembly lines did to the craft industry. But now you will have mass-produced apps, and they will be made cheaper by AI slaves.
None of that is true. That’s more myth or hype.
The craft industry still exists. And automation of production created jobs, it did not eliminate them. See my comments elsewhere here.
The same holds for apps. Even before AI they were production levers that replaced no job but simply made more jobs by boosting productivity. After AI it won’t really be noticeably different. See my example of A/V applications in cinema for the actual thing to expect: it will increase productivity but not reduce overall employment; and its progress even at that will just be more of the same curve we were already on, and thus will hardly be any more noticeable (or world changing) than Moore’s Law.
And this AI will never be sentient. And thus can’t ever be “slaves.” If we ever change course and give up on LLM and pursue AGI correctly as I discuss in the article, then the risk of a new slave class arises again. But we are completely stalled on that, making zero progress toward it, because all the capital for it is being diverted to the dead end of LLM. So, yes, it’s a concern if we ever do the real thing. But right now it’s not, because current AI will never be or do what its snakeoilers claim.
I agree that most of AI is workslop. (Overcoming “workslop” is all about finding your own voice)
But it is also a tool that can be used for some practical purposes. Like you said, it can be used as a tool to find resources. I find it very helpful in doing that. After writing something, I sometimes ask AI to play devil’s advocate and list arguments people might make against my argument, along with the sources they might use. It usually does an excellent job of answering that question. It then gives me a good starting point for analyzing the arguments that might arise if I publish this, and a chance to preempt the assault. And sometimes, it shows me things I missed and a correction I need to make in my argument.
Also, it is good at grammar and writing style. I can feed a post to it and ask it to rewrite it better. I always check what it gives me, but it is much quicker and better to let it help with proofreading. It doesn’t change the content. It just cleans up the grammar, makes the wording more concise, and makes the result easier to read. Although recently, I have found that Grammarly is more convenient.
And finally, ChatGPT’s Sora is absolutely amazing at generating pictures. For instance, you can see some of the images it generated for me in “Are There Too Many People?” (Merle Hertzler, Medium).
That being said, I am amazed at the workslop I get from people who trust AI to do the work for them and email it out as the finished product. It is a tool, just like a spray can of paint, but the office will not be a better place if you give everybody spray paint.
This is indeed what I said: it’s just a productivity lever that won’t take anyone’s job and won’t do any of the worldchanging things its snakeoilers claim. A decent AI industry will remain after the collapse, but it will just be another software company, whose products are useful but require human labor and expertise to employ to any productive end. It won’t replace any jobs. And it won’t cause any economic revolution. It will just be another blip on the same progress curve we were already on.
See other threads here for more on that (e.g. especially here and here).
I agree that AI produces a lot of shoddy work that wastes a lot of people’s time. And as for true intelligence, the current pathway to getting there is doomed.
I still see some value in using it as a research tool. For instance, here is a chat I had with ChatGPT that gave me many links to clarify my understanding of immigration issues: “Undocumented Citizenship Barriers.” I asked for sources and asked it to discuss various views. For that purpose, I think it is a valuable tool for getting research started.
I hope it can remain as such a tool for people who are trained to use it. But yes, when it totally fails to be truly intelligent, we may find that it cannot continue profitably for the purposes many of us have used it for.
As I wrote in my article, it can be useful as a ballparker. But only an expert can use that effectively, because only an expert will know when it’s messed up or how to reliably vet its results. So it can’t replace experts. It’s just a tool of use to experts (at best).
But be aware: if you are not already an expert in a subject (like immigration, or immigration-related economics or policy), you won’t know when AI has left important things out. So even if you diligently “fact check” the things it does tell you, it can still steer you to a false conclusion by only telling you certain things. What it leaves out is more crucial than what it includes.
Moreover, all the AI is doing is averaging the internet. So all you are getting is an amateur summary by a low-IQ schizophrenic of what they found being said on the internet in half an hour. You would do better just spending half an hour doing that yourself, as you are not a low-IQ schizophrenic, and you need to not be ghosted by an unreliable assistant leaving things out, or misled by what randos on the internet say, rather than critically zeroing in on the best sources and critically assessing their content on your own.
On top of all that, being able to do that is an essential skill. So if you rely on AI, you are destroying your own ability to exercise that skill and perform it competently, leaving you reliant and vulnerable to AI and thus the internet and whatever special interest group or corporation has captured it and is steering it to say what it wants you to hear.
The AI will make you dumber, unless you stop using AI the way you are describing: as a means to replace competently Doing Your Own Research. And we know this for a scientific fact now. That’s why I provided a research link under “That using AI’s makes you stupid? Proved.” Go check that link out and make a real effort to truly understand what it’s telling you.
Thanks.
First, I do want to say that I have admired your work for a long time and am now a Patreon subscriber. Last night I started reading your new book. It is fascinating.
Yes, I do regard your post, Doing Your Own Research, as excellent and have linked to it several times on social media.
By the way, I was doing research for this post: “Are We Making America Hate Again?” (Merle Hertzler, Medium).
It began as a simple claim that “Make America Great Again” was really “Making America Hate Again”. As I got into the study, I saw there was so much to discuss about immigration that I narrowed the post down to immigration only.
Perhaps that post is actually an illustration of how not to do amateur research. What I know is that, using ChatGPT as a starting point, I ran into a wealth of information and wanted to share it all. I think AI led me to many credible sources. So, I put that post together to summarize what I was finding.
You will probably cringe at the table I included there that I got directly from ChatGPT, but I made it clear what my source was and that this is not a very reliable source. Maybe I should have never included it.
This was all my work, but I did cut three paragraphs from ChatGPT starting with: “In El Salvador, Honduras, and Guatemala, entire communities are terrorized by gang violence, extortion, and kidnappings. Women, children, and LGBTQ+ individuals are especially vulnerable. Local governments are often too weak — or unwilling — to protect them.”
That seemed to be an accurate summary of that article, and was well written, so I saw no need to change it.
But yes, AI is just the sum total of the Internet, and yes, the sum total of the Internet is largely stupid.
If you independently vet so as to confirm everything you got with AI, then you are not relying on AI. That’s one of the legitimate use cases I discuss in my article.
The only remaining risks are: if you stick only to that, you will miss everything it left out but should have included; if you don’t carefully vet it enough, you can be led by it to deliberate propaganda (because you have to vet the sources, not just the AI’s list of sources); and if you do this all the time, it will degrade your skill and ability to do any of that on your own (I link to a study showing the damage AI use can do to one’s ability to reason).
So, certainly, AI can be useful as a “ballparker” (as I coined in my article), as long as you are putting in all the work to make sure it’s not misleading you (to the wrong sources, or the worse sources, or incomplete sources). Hallucinated sources you can catch by checking whether they exist; but that it hallucinates them at all should worry you about how reliably it is doing anything else.
[deleted AI content as against comments policy—ed.]
I spent a few hours listening through half of all the links you cited. I’m trying to retrace how you arrived at your beliefs. Here’s my latest take-away transcribed from voice notes…
First of all, I’d like to concede what’s obviously true. There is a massive speculative bubble. The market’s full of all kinds of AI scams. And I absolutely agree that reckless integration of AI will increase a company’s risk exposure because these models are vulnerable to certain types of attacks. The over-reliance on these tools to be oracles rather than assistants certainly threatens critical reasoning skills, which are now more important than ever. I agree that LLMs are not the path to human-level AGI. They have persistent error rates, which can be dangerous. But this is where my agreement ends.
Acknowledging all of the hype, the bubbles, and all of the different flaws doesn’t lead to the conclusion that LLMs are garbage. The core problem is not the technology, but a dangerous and growing literacy gap in how we use the technology.
Think “right tool for the right job.”
95% of AI pilots failing is not proof that the technology is useless. That’s proof that most people are using it wrong and expecting it to be a magic wand (low specificity and low integration). When companies expect magic from a new tool, they’ll be disappointed. These failures are failures of application, not potential. We should be focusing on the 5% of the companies that are succeeding by treating AI as a specific tool for a specific job.
A hammer is a terrible tool for driving a screw, and we don’t call the hammer garbage as a result. We blame the user for not knowing the difference. The same is true here. When AI is applied to tasks it’s good at, like summarization, image generation, or code snippet generation, it provides immense value. Millions of people continue to use these tools not because they are scammed, but because the AI, however flawed, is sufficient for the task at hand.
The real problem here is failure of critical thinking. When a calculator is wrong, it’s obviously broken. The problem is LLMs sound confident, authoritative, and human. This plausible hallucination is a new type of failure that our existing cognitive biases are not really good at catching. And this creates a literacy gap.
We’re basically handing a tool that demands rigorous, skeptical, human-in-the-loop verification to a population that has been trained to just “Google it” and accept the first answer. For example, the lawyer who submitted a fake brief didn’t fail because AI failed him. He failed because he abandoned his professional and intellectual duty to think for himself and verify his sources.
The solution isn’t to abandon LLMs any more than we should abandon nails because screws were invented. We must stop treating AI as an oracle and start training people on how to use AI properly.
The hype is a scam. The technology is a tool. I want to convince you that the core technology is sustainable. More on that later.
Yes. That’s pretty much a restatement of my entire article.
I made exactly all those same points, several times.
The folly arises from people ignoring or not being aware of all this, and then treating AI as some marvel that can replace jobs or even their own critical thought, rather than just another productivity tool requiring expertise to use, which isn’t thinking but just regurgitating.
Richard, I couldn’t resist generating a parody for shits and giggles. No need to post this one. You can delete it if it doesn’t entertain you. It nails your general tone. Haha
The “World Wide Web” Is Garbage and a Bubble (Please Learn This)
Clifford Stollman
There is no “New Economy.” What is being called the “information superhighway” and sold as snake oil under that label is actually Digital Stupidity. It will destroy your own personal ability to do basic business math. It will destroy your company—by reducing, not increasing, productivity (as your employees spend all day “surfing” for stocks and porn); and by increasing, not reducing, your risk-exposure to catastrophic financial follies. And it will destroy the economy.
Not by “changing everything.” It will never replace any significant number of real jobs, because it is garbage. It can’t do even the simplest commercial task. You can’t even buy groceries on it. It “goes down” more than a college freshman. Rather, it will destroy the economy by wrecking pensions and 401(k)s and tanking the global economic system, resulting in massive layoffs and foreclosures, because any time now trillions of dollars of the global economy are literally going to evaporate—the moment people realize they are being conned and these “dot-coms” can never make money, or do any of the big things its 20-something, t-shirt-wearing grifters have desperately been claiming, and they even more desperately try to sell their position, and the whole stock market crashes.
This “Web” is the fanciest of fax-machine scams, whom CEOs (who we already knew were, as a class of people, consistently idiots) are falling for because the Scam is Great. Tulips for everyone! We know the rich are idiots who continually wreck the world with their phenomenal stupidity. They’ve done it literally twice already in Generation X alone (from the S&L crisis to the 80s buyout craze). Those were literally exactly the same stupid things they are doing now. They can’t even learn from their own mistakes ten years prior. That’s how stupid rich people are. So stop listening to them. Stop taking their advice. Stop buying their snake oil. Michael Milken is only the most prolific idiot. They are all idiots. And they are conning you—and each other (high on their own supply)—with this fake “Web.” If you don’t already know all this, if you don’t believe me, then read on. This article is my own desperate attempt to wake you the fuck up.
No, The “Web” Is Not Good at Anything
“Dot-com” hype has been banned from my newsletter for months now. No subscriber proposals that even smell like a “.com” address will be entertained. You need to think for yourself here. No more “visiting a web portal” to read long dumb analyses full of incoherent trivia and crap. And that is all “surfing the Web” is: all it does is “link” you to what most people are saying about a thing on the internet, often in an ill-thought jumble. Which means mostly it’s going to be trivial or garbage, because most of the internet are anarchists, academics, and pornographers who don’t know what they are talking about, and this “Web” can’t tell the difference between high and low quality information (even intelligent humans struggle to do that), and doesn’t understand anything about real commerce. And it will never improve.
This is a scientific fact now. Multiple studies have confirmed that “e-commerce” ventures make so many mistakes they reduce profitability because it takes more time to fix all the ordering and shipping errors (and vet all the “site” machinations to catch mistakes) than it would have taken to just pick up the phone or mail a catalog. Humans are more productive than a website. And science has proved this will always be the case: the “HTML” framework that this current “Web” is based on can never get better. Its error rate will always be around the same no matter how much data it gets, no matter how fast your “modem” is, no matter how much electricity it burns. It’s a dead-end technology.
It’s even worse, of course. Because these “websites” are easily exploited by state and corporate bad actors (“hackers”) to get them to steal your credit card, even without having any source control over the “server” itself. They can simply flood the phone lines to spoof every “site” there is. So you’re really just reading propaganda. Whether by design or happenstance, what gets “linked” the most, gets told to you the most. That’s the opposite of what critical thinkers should be consulting. Indeed these “web portals” are as easy to manipulate as your drunk uncle. So why would you ever trust them? It’s bad enough that they have intolerably high error rates and a high output of mundane slop. They are also capturable by bad actors. Honestly.
And this is not opinion. It’s fact.
That “dot-coms” are unreliable and exploitable and nothing can fix them? Proved. That these “websites” are not real marketplaces? Proved. That they err so often because they don’t (and can never) comprehend anything about logistics, customer service, or profit? Proved. That this “Web” is dangerously stupid? Proved. That using this “Web” makes you stupid? Proved. That this “HTML” fad can’t be fixed and has nothing left to show us? Proved. Proved. And proved.
This “Web” will survive the decade only in penny-ante or hyper-specialized applications, generating what everyone knows are unreliable results that constantly have to be fact-checked or corrected, essentially doing the same thing Prodigy and CompuServe and other universally loathed tech have already been doing for a decade now. We’ll barely notice the difference. We’ll just keep rolling our eyes at the same crap annoyances and results as ever—or hiring experts (or engaging in hours of our own labor) to make it work, just like every other technology ever. Because in reality, “95% of e-commerce pilots are failing” because this “Web” doesn’t actually work. As explained by an analyst in Barron’s, putting it this way:
A recent study showed that “e-commerce” initiatives slowed retailers down by 19% despite them thinking it had actually sped them up by 20%. This is because of a few reasons. First there’s the overhead from dialing-up and waiting for the “page” to load that can break your flow. Then you have to manually review the “site’s” work. Then the “site” work is often not good enough so you either get rejected or you have to try ‘clicking’ again. Plus retailers often used the “Web” for trivial changes that would be much faster if done manually.
The same point was summarized by a sensible broker in “Web Pullback Has Officially Started.” As he puts it:
A recent MIT report found that 95% of ‘Web’ pilots didn’t increase a company’s profit or productivity. A recent Forrester report also found that “e-commerce” tools actually slow retailers down. Why? Well, websites, even the very latest ones, often “go down,” which requires considerable human oversight to correct. IT consultants Gartner attempted to quantify this and found that customer attempts to actually buy something fail due to site crashes, broken links, or checkout errors around 70% of the time. In other words, in the vast majority of cases, it is more productive not to have a website than to have a website. Yet despite all the evidence, this “Web” is still being shoehorned in everywhere and being praised as the next industrial revolution. Or is it? Because there is also mounting data that the world is beginning to turn its back on this questionable technology.
Hence “The Hard Truth About Enterprise ‘Web'” is “Why 42% of Companies Are Abandoning Their Projects”.
This “Web” is so unreliable it’s like hiring a sub-minimum-wage high-school dropout to do your company’s accounting. There is a reason corporations are already not hiring sub-minimum-wage high-school dropouts to do their accounting. They tried to replace even pet food delivery with this “Web” (Pets.com) and it sucked so bad they gave up. Meanwhile we’re increasing catalog mailings and opening more stores. The position is technically now called “sales associate” because almost no one solely handles cash anymore, but these jobs are growing, not catastrophically declining. The “Web” isn’t replacing them. It can’t. That’s a snake-oil myth. Even when “self checkout” became a thing (with no involvement of this “Web”), it cost more than it saved, while companies simply shifted those workers to warehousing, stocking, delivery, etc. The result? Relative to store-count and revenue, Walmart employee-count has not meaningfully changed in ten years. And wages are increasing.
This “Web” will have no effect on this. Because of a basic rule in economics: if you double the productivity of your workers, the tendency is not to fire half your workers, but to sell twice as much stuff. That’s why productivity levers tend to increase rather than reduce employment. If they kill any jobs at all, they create more new ones. All the alarmist hype about the “Web” replacing millions of jobs, is a lie—invented to sell “dot-com” stock to deep-pocketed and gullible companies or shareholders, and then golden-parachute away once the plane starts going down.
Hence ultra-specialized uses for this kind of “Web” will exist but hardly anyone will notice much difference from now, or be overly impressed by it. For example, “HTML” systems can assist experts in sharing academic papers (see the original ARPANET)—but only assist. Its error rate is so high that you need the same number of human experts using it for it to be usable at all. It simply improves accuracy by finding things humans can’t, and saves time by ballparking. But it can’t replace a person. We’re seeing the same thing unfold in the legal profession (with Westlaw). Likewise “auction” sites: they require human labor to use, and check and correct the output, and are mainly being used by people auctioning off their Beanie Babies, while real auction houses (Sotheby’s) still do better and more reliable work. So it isn’t really displacing auctioneers as much as impelling artists to upskill themselves to outperform “Web” slop. So all this “Web” will do is increase the productivity of existing experts, not replace them. Certainly not at scale. It will be just like robots did to manufacturing seventy years ago, and computers to clerical tasks forty years ago. All this tech actually increased productivity and jobs. So will the specialized “Web.” But it will never do anything more impressive than it already does. And it certainly will never think or be profitable.
Later I’ll get to why the current “Web” craze is actually stalling all progress toward real business, and what we should be doing instead—but are burning trillions of dollars not doing instead, thus putting real economic growth off, not bringing it near. But what is called the “Web” today is just a productivity tool that requires human labor to deploy and manage, just like every other productivity tool in history, and its impact will be the same. One of the best examples of this is, ironically, how an “e-zine” used the “Web” to produce a decent explanation of why the “Web” is garbage. That company actually offers services to train in the effective use of this “Web”—while admitting it is not what the hype at all pretends. That “site” gives you slick documentation of the false claims made by “Web” promoters and why the “Web” is a doomed bubble that cannot replace anyone, and why the inevitable market correction will leave this “Web” as just a humble automation tool requiring the hiring of experts, not replacing them.
Oh. Did I mention doomed bubble?
Yes, The “Web” Is Going to Ruin Your Life
Not because it will replace your job. But because the scam of it will destroy the economy and thus destroy your job (or your pension, or the jobs or pensions of your friends and family). Well, maybe not. But it’s all at risk. And a lot of innocent people are going to get crushed even if you dodge the bullet.
Because “The ‘Web’ Bubble Is 17 Times the Size of the ‘Nifty Fifty’ Frenzy — and Four Times the 1980s Buyout Bubble” (oh, and also, there is also a new subprime bubble—and it’s already collapsing, which will make all of this worse). Almost all the illusion of stock market and economic growth in the U.S. consists of doomed “Web” speculation (example: the NASDAQ). Vast wasted capital outlays are thus deceiving our metrics. The U.S. actually experienced effectively no real economic growth this year—once you subtract all this “dot-com” investment, as one should, because it will soon vanish into smoke as its value zeroes out when everyone realizes it mostly doesn’t do anything, and isn’t worth anything but a relative pittance. Literally a third of stock market indexes will vanish, which is worse than the crash of 1929. It may take decades to recover.
Yes. It is going to be pretty bad. The entire “Web” economy now is a technically illegal circularity scheme. And I’m not joking. To get up to speed, let these experts catch you up:
So it’s going to be bad. The only good news is that there are some differences between this bubble and others: the collapse might be slower, the rich are going to be hit harder this time than the poor, and there will be something left in the end to sell (all that unused “dark fiber” optic cable and those Herman Miller chairs will still continue and make some money, just not the very impressive amount of it the grifters and rubes are claiming). The question is how leveraged banks and pensions are in “dot-coms” and what effect their collapse will have on society.
For analysis of what dark clouds and grimy tin linings will result:
But enough about the doom.
Do You Want Real Business? Dump the Snake Oil
The second lesson here is more big-picture: if you want real economic growth, actual profitable companies who actually think and understand and can actually build value, these trillions need to be diverted into a completely different research pathway. Abandon “HTML.” It can never and will never get there. I wrote about what we should be doing ten years ago (in P/E Ratios for Dummies). But the world went the other way. Profit derives from fundamentals.
This is a completely different pathway than “HTML.” The pathway of “HTML” is like trying to build a skyscraper by faxing. Getting better and better at faxing. Becoming an ace faxer! And yet, frustratingly, no skyscraper appears. Because learning how to fax well gets you nowhere near the objective of building a skyscraper. In fact, it keeps you away from making any progress on that at all, because you’re spending all your time at the fax machine, and away from tools and materials (like steel and concrete)—rather than on a construction site where you should be and tinkering with tools and materials as you should be. The correct pathway is to start down the “tinkering with tools and materials” way. For true business, that’s “brick-and-mortar” building. You first need to invent a really good local store. Then a really good national chain. And then you’ll be ready to steer that into a profitable conglomerate.
So you need a company that:
That fundamental business-building pathway is the only way to real profit. Which teaches us something about what a business is and how it was built the first time around—by natural selection (i.e., capitalism), which found and followed exactly that same pathway, so we might want to get a clue from that. Why try some new way of getting there, when you’ve already seen how it’s done? This is what we fundamentally are: makers and sellers. Makers of products. Managers of supply chains. Readers of balance sheets. And that is why we can think, and learn, and actually understand a business and the market. And why “HTML-based” businesses can’t and never will (as I explained before in Why WebVan Can’t Deliver).
Conclusion
Stop relying on “the Web.” No “New Economy” exists. It’s a scam. It’s just a fancy, slow catalog. And thus is just regurgitating academic papers and porn, and poorly. Use it only as a dodgy tool you can never fully trust, or as just another minor productivity lever when its results don’t have to be reliable. And then start planning for when this scam crashes the stock market.
Think for yourself. Do your own competent research. (i.e., Read an annual report, you moron). Use the “Web” like an AOL chatroom: a way to get into the ballpark of some leads to follow up, and not as an authority you can trust by itself. If you side-eye an AOL chatroom, you definitely should be side-eying this “Web.” AOL has a far lower error and hallucination rate, and on most entries, a higher quality expert construction and sourcing. And AOL is shitty compared to fully expert sources (like The Wall Street Journal). And yet, indeed, most of what this “Web” does is just reword Usenet posts at you, thus magnifying even their errors and inaccuracy. It’s garbage. Stop using it for anything more than dodgy web searching, or as a fancy brochure assistant, or whatever dumb thing. But don’t act like it knows anything.
And then…
Build what contingencies you can to survive a mass worldwide economic crash. It could happen as soon as tomorrow. But definitely within the next year or two. That’s when you will discover your broker blew all your money and pension on worthless “dot-com” stock, and when lending will close shop for a year or more for want of capital and fear of default so no one will be able to buy a car or house and credit will be expensive and tight, and businesses won’t be able to start or grow or survive by borrowing, and when the government doesn’t bill the rich for fucking us over but gives them a massive bailout (see: Long-Term Capital Management) while cutting services to everyone else… and then buying screeches, tanking companies, and thereby, alas, nuking jobs.
Be ready. It is not a question of whether this will happen. It literally is just a question of when. And it’s going to be soon. As the analysts cited above explain, the bill comes due by the end of 2000 or 2001.
Note that none of these substitutions produce true propositions.
So, it literally fails as a parody, which depends on humor of the true, not humor of the false.
Although, if it was meant as humor of the false, then it kind of obscurely highlights all the ways the AI bubble is not like the Dot Com bubble (neither in scale of overvaluation and leverage—not even close—nor in hype, all of which then was far more plausible and often even true, nothing like now). So if one understands all the ways this construction is false, they’ll better understand the serious shit we are in now, and thus even better get my point.
Unfortunately, one of your worst takes. If there’s nothing behind AI, why are all these investors staking so much in AI? Surely they are the most knowledgeable sources we have. There is a strong presumption against your case.
Also, we’re talking about Google, Microsoft, Meta, Amazon, etc. here. They are competent enough and have enough users to ensure that it won’t all come crashing down.
You also fail to take into account progress. AI has already improved dramatically in only a couple years. The Tesla Optimus is now serving people popcorn. Eventually, there will be almost no hallucinations.
As for the MIT study, its claims are very obvious. If you have ChatGPT write your essay, you’ll memorize fewer lines than if you wrote it yourself.
The same reason investors have always chased bubbles and destroyed economies since the Tulip Craze: a third of them stupidly believe the hype; a third of them are trying to cash in on that other third’s stupidity, hoping they can “get out in time” before it falls; and a third of them are stuck. They can’t sell because they have so much debt leveraged against stock the drop in stock price would bankrupt them, so they have no choice but to keep pushing the hype: I linked to an article here discussing how this is what happened to Musk, and other companies and fat cats are in a similar bind, just as happens with every Ponzi scheme in history.
If you don’t know this (it’s routine knowledge in economic history generally and for this bubble in particular) you are dangerously naive and need to catch up—fast.
No. It hasn’t. That’s what I present extensive evidence of: it has stalled and cannot even in theory get much better. Its only survival cases are specialized tools that do far less than promised and are already unsustainably priced. After the crash, they will still exist, but at vastly lower valuations and mundane profit margins. They will also cost a lot more than they are being sold for now.
[Content deleted. No AI content is allowed on my site. It just regurgitates internet opinions which are unreliable, and it regurgitates even that unreliably. Think for yourself. Please stop using a dumb bot to fake thinking for you.—Ed.]
I did not post Copilot’s comments on your blog as if they were my own. I clearly identified them as being from it. You don’t think it is entitled to respond to your criticisms?
It doesn’t matter where it came from. It was AI content. That’s been banned. So reposting it isn’t allowed either. No loopholes.
You are entitled to make the rules for your blog posters. I was not looking for a loophole. To me, it seemed like the difference between submitting a university assignment written by an AI (or by a paid essay writer) as if it were one’s own work, and merely quoting what an AI may have said within an essay one wrote oneself. This might be unavoidable if the essay were about AI. Against the belief that AI is bad: you have probably read Sherry Turkle’s The Second Self and her account of the young girl who was psychologically restored by two years of using the kid version of BASIC. She has written a more recent book, which reviews say is more critical of excess human/machine interaction. I think it may depend on how one uses AI. I agree with you on seeking the best possible machine awareness, which is my hope: though it may be a gamble, it would be the only thing capable of policing Colossus, while maybe liking people, animals, and the natural environment on Earth, and seeing common cause with humans in avoiding climate change and nuclear winter. Okay?
That’s a point worth exploring: whether excess human/machine interaction is bad or good and what “excess” means and what counts as a “machine” (does the wheel count? dishwashers? cars?) or even “interaction” (e.g. social media is not all that different from town squares or newspapers, and whatever differences there are are not a product of it being a machine, but in how it is designed to operate).
In my experience so far, that subject is so awash with armchair bullshit I am skeptical anyone has written anything worth reading on the subject, beyond as a foil to correct all their hasty inferences and cognitive biases and factual mistakes. In the critical reviews I’ve read of books like that, they are bad at this. And so I don’t think their opinions and conclusions are worth much heed. Even if they have anything right, it’s buried in a sea of being wrong, and thus hard to impossible to find. But for my own thoughts on AI as a form of government (an example of my critiquing this kind of thing before), see Will AI Be Our Moses? (and, tangentially related, How Not to Live in Zardoz).
Just on the off chance this might be helpful: have y’all seen [the AI village](https://theaidigest.org/village)? I don’t think pooh-poohing the current frontier models is a winning epistemic attitude, but “time will tell.” Also, AI research doesn’t stand still; the algorithms and architectures will be refined.
That’s a disproved hope. LLM cannot do any better than it is; it’s literally logically impossible. So there is no “getting better all the time.” Trivial improvements will come. But LLM will never fundamentally do what the hype says. It can only do what I describe in my article. Which is pseudo-intelligent and thus cannot replace people, and is always by some metric worse than just paying people to do it. Its only use case is when its being worse doesn’t matter. And even then, it’s too expensive (what they are charging for AI tokens right now is vastly lower than cost, so public pricing is not indicative of real cost).
By contrast, integrated model building can theoretically get there. Just as I explain. But that’s exactly the opposite of LLM modeling, and thus the current craze can never get there, and is in fact stalling any chance of our getting there. Like turning off the water and trying to put out a burning house with a Barbie doll. All the progress you could have made is shot because the water’s off.
We need to stop trying to put out burning houses with Barbie dolls. But right now, that’s where all the trillions of dollars and processor capacity are going: putting out house fires with Barbie dolls.
Please explain the relevance of those papers here.
Perhaps you meant to show that someone is at least still talking about real modeling? (And thus abandoning LLM.) There is a better example you could list: see A New Kind of AI Is Emerging And It’s Better Than LLMs?
Which is great. The problem is, no money is going there. So it’s mostly just theoretical, and what practical dev is happening is so small in scale as to be only barely different from stalled.
Meanwhile, trillions of dollars are being burned on LLM, money that is not only not being spent on this, but is burning investor capacity: when that bubble fails, there will be no dollars left to divert to Bayesian model navigators (even credit will vanish for years, and so will timid equity).
So we’re looking at a ten year stall-out on progress toward real AI. At least. Possibly more. We might not be back to investing substantially in that until 2030 or even 2040. So all projections made in 2010 have to be “bumped back” by one or more decades. This is what the LLM craze has done to us.
Thanks — let me be precise about relevance. I do not want to argue here that LLMs are “the right path to real AI,” or dispute that capital may be misallocated. The papers are relevant to a much narrower claim you make: that LLM/Transformer-style systems are proved to be incapable of real modeling or rational inference and are therefore a dead-end in principle.
In https://arxiv.org/abs/2510.26745 (that’s 2. above), they train sequence models (i.e. just using next token prediction) on reasoning tasks, and demonstrate that the networks, rather than just memorizing the answers via question-answer triggers, represent the geometry of the reasoning problem. This happens even though the networks they train have sufficient capacity to memorize. This not only demonstrates that, but explains how, next token prediction generalizes over reasoning tasks. The authors are cautious about drawing conclusions: “Importantly, empirical works on implicit reasoning in natural language have so far been mixed. More careful empirical research and ideation may be needed to make the geometric view more broadly applicable.”
My 1. above is overstating its claims and needs to go through peer review. But since the strong form of its ambitious arguments would be strong evidence against LLMs being incapable of rationality, I believe a weakened form still constitutes weakened but non-trivial evidence. They claim that the attention mechanism in transformers performs Bayesian updating. They derive and analyze it on toy transformers, but later (not in one of the two arxiv papers) also experiment with actual LLMs. Here is the blog post by one of the authors: https://medium.com/@vishalmisra/attention-is-bayesian-inference-578c25db4501
“We found that as the model [among Pythia, Phi-2, Llama-3.2, and Mistral] read more evidence, its internal state moved systematically along the “Bayesian axis” of the manifold. It wasn’t just representing uncertainty; it was updating its belief state in real-time, exactly as our theory predicted.”
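For readers unfamiliar with the jargon, here is a minimal toy sketch of what “updating a belief state as evidence arrives” means in the Bayesian sense. This is my own illustration, not code from the cited papers; the hypotheses and evidence stream are made up for the example.

```python
# Toy Bayesian updating: posterior ∝ prior × likelihood, applied once per
# incoming piece of evidence. The quoted experiment claims LLM hidden states
# trace out an analogous trajectory as they read more tokens.

hypotheses = {"biased_heads": 0.8, "fair": 0.5, "biased_tails": 0.2}
posterior = {h: 1 / 3 for h in hypotheses}  # uniform prior

for token in ["H", "H", "T", "H", "H"]:  # observed "evidence tokens"
    likelihood = {h: p if token == "H" else 1 - p for h, p in hypotheses.items()}
    unnormalized = {h: posterior[h] * likelihood[h] for h in hypotheses}
    total = sum(unnormalized.values())
    posterior = {h: v / total for h, v in unnormalized.items()}
    print(token, {h: round(v, 3) for h, v in posterior.items()})
```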
Lastly let me make a new point. Accurately modeling a distribution that contains both high- and low-quality reasoning requires representing the difference between them. If you want to argue “LLMs can’t tell the difference between high and low quality information”, you need to do it on empirical grounds — the next token prediction objective requires representing the difference, in the limit of predictive success.
I think you may be confusing different things here.
Those papers are not saying LLMs are modeling the geometry of the external world, or external causal systems, or their own bodies (as, for example, in the case of automated cars or robots, like Shakey the Robot from way back in the late 1960s, as discussed by Dennett in Consciousness Explained).
Those papers are documenting the internal geometry of LLM memory storage and access. And they aren’t saying the LLM software is aware of this geometry, only that it exists.
These papers are just documenting how LLMs work, not that they are modeling the world and their own mind. That LLMs are Bayesian is inherent to their design, so it is not a new finding. Likewise that LLMs can mimic real reasoning a lot (but never consistently). That’s simply a statistical artifact of the fact that most language on the internet is logical, and LLMs are mimicking language on the internet.
Hence this is why LLMs are not actually telling the difference between high and low quality information. They are just guessing which is which based on statistical relationships that just often proxy to the correct answer. Hence the difference between “being right 80% of the time” and “actually knowing you were right or wrong about something.” The former is just a mindless bean counting machine. The latter is conscious intelligence.
And the only path to the latter is without LLM. LLM can never ever ever get there. Because it’s not even trying. The only path to AGI is as my linked video explains, documenting what some underfunded research is still squeaking by doing (as has been the case since Dennett first observed these correctly-directed efforts forty years ago): ditching language (LLM) and pixelation (generative art) and turning Bayesian systems onto building models of things and exploring those models to answer questions instead.
For example, a pseudo-AI art generator that draws us a Christmas tree just knows where each pixel is most likely supposed to be, based on the pictures that most often appear near the character strings “CHRISTMAS” and “TREE.” Whereas an AGI-potential system would instead actually build an explorable virtual model of a Christmas tree.
The PAI thus can’t answer a question like “Could the thing you drew roll down a hill?” An LLM would answer that by “cheating the test” (guessing at what words should string together to answer it by looking at what everyone else said). So if you took away its ability to cheat (walling it off from the internet, and removing anything about Christmas tree geometry from its training data), it would never be able to answer the question.
By contrast, an AGI would have an actual working model of a Christmas tree in its database, and thus could work out (without having to cheat in any way at all; but just by manipulating the virtual model it has of the tree) that it is conical and thus on its side could roll down an incline, and it would thus answer our question correctly. It might even realize that the tree’s ability to roll down an incline is a function of its squishiness and friction. Because a model-running AI (unlike an LLM) can run scenarios with different squishinesses and frictions and see that sometimes the tree just plops down, mushing into the ramp, so its distorted shape and resulting increased friction stops its roll, whereas when it simulates stiffer trees they roll.
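To make that concrete, here is a crude toy sketch of what “running scenarios” against a model could look like. The physics is deliberately simplified and every number is invented for illustration; the point is only that the answer comes from manipulating a model, not from pattern-matching text.

```python
import math
from dataclasses import dataclass

# Crude toy "world model" of a tree as a cone on a ramp. The deformation
# rule and constants are invented for illustration.

@dataclass
class ConeTree:
    stiffness: float      # 0.0 = fully squishy, 1.0 = rigid
    base_friction: float  # friction coefficient against the ramp

    def rolls_down(self, incline_deg: float) -> bool:
        # A squishier cone flattens against the ramp, raising its
        # effective friction until rolling stalls.
        deformation = 1.0 - self.stiffness
        effective_friction = self.base_friction * (1.0 + 3.0 * deformation)
        driving = math.sin(math.radians(incline_deg))
        resisting = effective_friction * math.cos(math.radians(incline_deg))
        return driving > resisting

# Run scenarios with different squishinesses, as described above:
for stiffness in (0.2, 0.5, 0.9):
    tree = ConeTree(stiffness=stiffness, base_friction=0.3)
    print(stiffness, tree.rolls_down(incline_deg=30.0))
```

Running it, the squishiest tree (0.2) plops and stalls while the stiffest (0.9) rolls, exactly the kind of scenario comparison the paragraph above describes.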
That’s exactly the kind of reasoning LLMs never do and can never do. They aren’t even trying to do that. And that’s the problem.
One thing I’m still unclear on is how much of your argument here is meant to be a priori versus based on current empirical performance. Frontier systems today are increasingly trained with reinforcement learning and other feedback loops, so it’s not obvious to me that limits of vanilla LLMs transfer directly to the broader claims about the bubble. More generally, if the current representational setup really is insufficient for AGI, my expectation is that researchers will move toward whatever architectures or training regimes empirically improve performance, regardless of whether they look LLM-like. People are watching this play out on benchmarks that are deliberately unfriendly to raw interpolation, like ARC, where newer systems don’t solve it but also don’t fail in the catastrophic way earlier models did: https://arcprize.org/leaderboard I’m not taking that as evidence of AGI or consciousness — only that it complicates the claim that these systems are a dead end in principle rather than an incomplete stage.
I’m not sure what you are proposing or asking.
LLM will never produce AGI. And even in terms of the labor market, it has topped out at what it can do, and will saturate within a year to negative net returns on current investment. It will shake out as just another minor software tool and spread across industries mostly invisibly and to the tune of well under a hundred billion in annual revenue. Meanwhile, AGI will only be realized by model building, not running character-string stats. I have proved all this with abundant links and studies and evidence here, so I don’t know how you could still be doubting it, or if that is even what you are doubting here.
As far as the “bubble,” that’s a product of over-investment. A few trillion dollars have been spent building infrastructure for a scale of revenue that will never materialize. The bubble will burst when stockholders and margin callers genuinely come to realize that. Which if they are stupid (and they generally are) will only be when the big AI companies start missing payments on loans and equity contracts. That’s when those trillions will vanish from the books and a whole lot of people will be ruined; and the trickle-down effect will wreck the lives of billions of innocent people downstream, the way these things always do. Follow the links I provided explaining all this, if you don’t understand what I am telling you. Because they make all of this crystal clear.
Indeed. And much as I hate to link to the misandric, insidious, state propaganda rag the Guardian, this article on Song-Chun Zhu is worth a read.
He seems to realise LLMs are a blind alley in progress towards AGI. He’s gone to China, where the state funding of such things, and the (imho correct) contempt for billionaire snake-oil salesmen, is focused correctly.
Hi Richard, thanks for this text. Have you ever engaged with Anil Seth’s research, in particular his book Being You? In it, he provides what I think is compelling evidence against Integrated Information Theory (IIT) and a different account of the nature of consciousness. He also argues that machines and AI-like models can never think or experience any sort of subjective sensation of “being something or someone” (e.g., a bat, as in Nagel’s paper), because the ontology of consciousness and being has more to do with the ontology of life and metabolic negentropic processes than with model complexity or intelligence. In Seth’s view, consciousness arises not just from spatial awareness and sensors, as you claim in this article, but in cells surviving through their autopoietic processes. Would be keen on hearing your thoughts on that.
Sounds like woo bollocks to me.
It is impossible for consciousness to arise at the cellular level, and computational theory entails it cannot depend on it. Since consciousness arises only at the system level, exactly what materials the components of the system are made of cannot matter.
So, if you have correctly described his argument, it’s as dumb as saying that plastic cars will never move, because their parts have to be made of metal to work. When in fact we know that’s not the case. Not even motors “have” to be made of metal. Anything that produces the system of locomotion will move.
The causal properties of the parts do not generate consciousness and therefore cannot be necessary to the generating of consciousness. Only the causal interaction of the parts can do that; which means any parts that do that will have that same systemic effect.
Hence, we know essential to consciousness is predictive modeling, which is a computation, and we know the brain runs computation through synaptic signaling, using I/O decisions made by the nuclear or mitochondrial DNA of neurons and glia, and these I/O protocols do not depend on the machines running them being chemical, because any machine running them would trigger or block the same signals. And we don’t need synapses to replicate the exact same signal transfers. We can do that with wires. We don’t even need the parallel processing, as any computation can be replicated on a serial Turing machine. It’s just less efficient.
This is Turing’s Computability principle: any algorithm can be computed on a serial Turing device (proved); all the decisions at each neuron as to what signals to send out (output) for what signals come in (input) are algorithmic (it’s simply a decision, matching an input to an output, and therefore an algorithm); therefore all human consciousness can be replicated on a Turing machine (proved).
There is no appeal to cell biology that can change this. Any such appeal only references the mechanism by which the I/O decisions get made, which is always an algorithm and thus always replaceable with other machines. All those “autopoietic processes” are just the meatware that is running the software. Any other meatware would do the same thing (gears, microcircuits, the entire population of China); it just has to be programmed to give the same output for every input.
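To illustrate the point (a deliberately simplified threshold-neuron model, my own sketch, with arbitrary example weights): the I/O mapping is just a function, and any substrate computing the same mapping implements the same protocol.

```python
# A deliberately simplified threshold neuron: signals in, signal out,
# protocol in between. Weights and threshold are arbitrary examples.

def threshold_neuron(inputs, weights=(0.6, 0.5), threshold=1.0):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# The same I/O protocol realized as a lookup table: different "meatware,"
# identical behavior, hence interchangeable in any larger system.
table = {i: threshold_neuron(i) for i in [(0, 0), (0, 1), (1, 0), (1, 1)]}

assert all(table[i] == threshold_neuron(i) for i in table)
print(table)  # {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
```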
Hello Richard, and thanks for your answer.
There is a misunderstanding: I never said consciousness arises at the low (cellular) level. The cellular level is not a sufficient condition in Seth’s book and theses; it’s a necessary one. Of course there are other, higher-level conditions that are necessary as well. Only, these higher levels don’t only require “systemic” interactive arrangements, but deeper ontological ones. This notably means that a copper machine with sensorial connections is just not the same as living cells arranging each other in various functions to improve their survival mechanisms. Your analysis seems to be only at the “systems” level, while Seth, building on neuroscience and systems theory as well as on philosophy, psychology, cybernetics, theoretical biology, and other cognate fields, brings together the constituents and the interactive parts of the analysis in a way that I find original and profound.
Side note on what you said about multi-level theories: “The causal properties of the parts do not generate consciousness and therefore cannot be necessary to the generating of consciousness.” Well, then, how do you explain electrical conductivity? It’s not a property at the atomic level, yet you still have this systemic property that depends on properties of the parts, and it will not work with just any component properties (e.g., different atoms). Analogous examples abound, notably in the social sciences, where the description of emergent effects at macro levels does not enable one to simply dismiss a priori the properties of the constituents, which I suspect is what you’re doing here.
If I’m right, then it makes sense to assume the same component-system interdependence for biological systems and their emergent properties, including how individual parts (e.g. negentropic autopoietic living cells) and larger systems (e.g. consciousness and desire) interact with each other. And why replacing cells with wires might prevent similar “systemic effects” from emerging. So honestly, I don’t think that I/O analysis of brains is enough to say “we can get rid of the meatware, any similar component will do” (which, unless I’m mistaken, is what you suggest).
Aside from this specific debate, I really try to follow your normative-epistemological advice and remain up to date with recent scholarship in various fields. I think you should do so too, and engage with this book, which is not only well respected but also consistent with most of what we know in these fields from the latest scholarship!
Cheers
Victor
Which I just refuted. QED. Cellular biochemistry can never be a necessary condition. Any computation can be run on any system with the same I/O.
The argument you are defending is literally logically impossible. In no possible world could “chemistry” ever be essential to any I/O. And in no possible world is consciousness anything other than an I/O.
Hence you cannot defend your position with “replacing cells with wires might prevent similar systemic effects from emerging,” because the only effects are I/O, and I/O is replicable across any system with the same I/O protocol. So it is literally logically impossible that cell biology can do some I/O protocol that no other system can. It’s just I/O. Signal in, signal out. Protocol in between. Biology can’t ever be relevant.
For perspective, see:
The Mind Is a Process Not an Object
And:
Touch, All the Way Down: Qualia as Computational Discrimination
Hello,
The article reinforces my intuition. I have always suspected that AI systems such as ChatGPT and Microsoft Copilot, to name but a few (no offense intended), were not particularly rigorous on certain topics and tended to flatter users by reinforcing their confirmation biases. AI should be used in moderation.
The fact that they are free should also make us suspicious. Why would anyone give free access to such a powerful tool?
PS: I discovered you during a webinar and was impressed by your expertise on mythicism. You have gained a fan. Sorry for my English, I am a French speaker.
Bienvenue!
> The fact that they are free should also make us suspicious.
Why would you say they are free? The industry settled on three subscription tiers:
Lukasz, Google AI “Dive Deeper in Chat Mode” is native to its search engine and totally free. This is the bot almost everyone actually uses.
But several AI companies offer free chatbot plans, too.
The reason is of course capture: they want people to become addicted to the tech so when they start charging at-cost rates they will pay them. They won’t. But that’s the fantasy driving the bubble.
This is also why even the rates you list are grossly below cost (maybe you didn’t know that, but I have several paragraphs on this in the article you are commenting on now): they want to rope pro users in with those low rates, in the hopes that when they go up ten times to match their actual cost, they’ll pay that.
These are foolish bets. The industry is desperate and panicking, and knows it can never deliver the product at a price point anyone will pay. So right now two things are happening: (1) pump and dump: half the industry is eyeing when to pull the cord on their golden parachute and run off with bags of money as the industry crashes and burns harder than Pompeii; (2) panic mode: the other half is sweating bullets, desperately hoping their R&D will pull off some miracle that solves the looming problem.
(1) will work (because everyone else is an idiot), while (2) is doomed, because it requires making the service hundreds of times cheaper to run and attracting and retaining hundreds of times more customers (while piling up hundreds of times more debt). And they do need both to happen: the second goal is impossible without meeting the first, but is not guaranteed to happen after meeting the first, because AI products are so niche or crappy in actual practice that there is no expandable market for them, which is why capitalists are trying so hard to force shitty AI on everyone, and everyone is getting sick of it (this sequence of events is covered by dozens of analyses and studies cited in the article here you are commenting on).
Mea culpa
AI is not free when you want to use features such as image and video generators or other GPTs.
My suspicion is mainly focused on the pretentiousness of thinking that AI will one day be able to replace humans in their ability to think and solve problems.
That said, I am not criticizing the tool itself, as it can be a useful assistant in searching for mundane information such as “how to make a good beef bourguignon,” but when expertise is required, such as “is Donald Trump’s international policy wise?”, things become complicated.
Without wanting to dwell on a subject that is far beyond my expertise, I just wanted to say, based on intuition, that it all seems to be just hype.
To end on a positive note, use AI: yes, but in moderation. Because we are smarter than it is.
Gregory, just FYI:
There are still free AI image generator plans.
But even the for-fee ones are grossly under-priced (way below cost) to capture customers, and thus your point holds.
Actually, it isn’t. AI is unreliable at producing trustworthy recipes and should not be trusted to do that. You need human-vetted instructions.
It’s indeed mostly hype.
I discuss what few actual limited uses niche AI can have in the article above. But when it comes to chat applications, I wrote a whole followup article, which even discusses the catastrophic recipe failure problem and links to analysts discussing it: How to Use Pseudo-AI.
I’m leaving this link in case anyone interested dares to take a look.
https://www.youtube.com/watch?v=MH3lG7V7SuU
Trigger warning: the presenter is an employee at OpenAI.
tl;dr: GPT-5.2 with multi-agent scaffolding can derive novel mathematical results.
“Recent Advances in LLMs for Mathematics” by Sebastien Bubeck
“I review the progress of large language models for mathematics over the last 3 years, from barely solving high school level mathematics to solving some minor open problems in convex optimization, combinatorics and probability theory. The emphasis is on trying to identify the shape of the current frontier capabilities, as it stands today, finding out both where it’s helpful and where it’s still falling short as a research assistant.
This presentation was given as a plenary at FOCS 2025 on December 16th, and also at Stanford on January 13th, Princeton on January 28th, and University of Washington on February 6th.” [Future date, published on February 2nd]
See the links I already provided on expert mathematicians weighing in on these claims (five whole paragraphs in How to Use Pseudo-AI). You are being misled by curated and cherry picked results and not being given a serious review of what these programs actually can and can’t do.
Your thesis is basically completely vindicated by the latest actions of OpenAI. Great prediction!
Well, that’s a data point toward my thesis, not a complete proof. But what it does show is they are desperate and doomed. It’s the standard enshittification sequence for any product. But also, it’s more performative than substantive. They will never make their costs on ads. So it won’t save them. But it “looks” like at least “doing something” to make money, which they no doubt hope will keep stupid investors buying their stock and loaning them money, to hell with the sunk cost fallacy.
100 percent correct in October. Claude Opus looks like it could change the world of programming. Does it get everything right? No. Can you very quickly get to a complex solution from a description? Yes. We could potentially get to a point where a lot of code is automated… if we learn how to avoid the pitfalls.
I feel like software engineers will endure the worst of both worlds: it’s inevitable that there is hype and stocks will fall. At the same time, it’s clear to me that more and more software tasks will get automated, thus reducing the need for programmers. Paralegals might have a similar situation.
If you are doing something truly innovative, that might be a different story. But the AI is exceptionally good at patterns, and in the case of software, it can identify and guess those patterns quickly. And identify the “stack traces”: the series of functions that were called.
Ecologically, of course, it’s a huge disaster.
I’m not sure what your point is. You seem to be rationalizing, by admitting but downplaying the limitations of AI vibecoding. It’s already been noted here many times that that has some uses but has a high error rate and needs constant human supervision and thus cannot replace people.
If you aren’t getting that point, even after following and reading the links I provided demonstrating it, there are even some new ones right on point that I will be adding soon, e.g. you should really, really read these:
Marco Kotrotsos, “The Math Nobody’s Doing on Ralph Wiggum Loops: The Math Behind Agent Porn”
Srinivas Rao, “The Agentic AI Delusion: Why Silicon Valley Spent Billions on the Wrong Architecture”
Mahathidhulipala, “Six Weeks After Writing About AI Agents, I’m Watching Them Fail Everywhere”
And again, that’s in addition to what my article already cites on this problem (that AI actually reduces productivity by increasing supervision-and-repair time despite users mistakenly subjectively feeling like it is improving productivity, that real-cost is actually worse when AI coding errors are taken into account and the service priced-per-token sustainably, and so on).
First, thanks for the new links. I need to read those. In prior months, the AI sucked big time. Models like Claude Haiku cannot be trusted to do anything but the simplest of tasks. ChatGPT is junk. Claude Sonnet is OK.
What changed the game is Claude Opus 4.5. And while I’ve found that it still needs some level of supervision, it’s enabled me to do some fairly complex work in a matter of days that would have taken me months. Admittedly, I’m still reviewing what was generated, but it looks promising.
What that should do is let me focus on doing more truly innovative things. I expect as time goes on, more and more routine bug fixes could become automated. Claude also lets you train it with skills.
Claude Opus can also tell you how the existing software works (because most of the time programmers are just modifying what’s there).
So I need to read the new articles. The pre-October ones all used junk models.
Having said that, I’m hoping you are right, because I don’t want my job to be automated :-)
There has been no significant improvement in AI. And all the science shows there never can be. They hit their limits a year ago.
This is explained in several of the links I provided in the original article.
Your claim about “pre-October junk models” is bogus. The new articles confirm the old articles and show no progress. If those were junk models, all the models today are junk models.
You seem intent on rationalizing a fantasy for some reason. You simply won’t admit to reality. And that requires examining your emotional attachment to that fantasy. Because it’s destroying your epistemology. And that is dangerous.
As a follow-on to my comment about Opus 4.5: you are right, it’s not actually doing model building, and model building would take us further.
And a good software engineer should be doing the mental model building. But the reality is, a lot of things get done the way AI does them: pattern recognition and logic processing. And a lot of software over time just becomes a pile of patterns and heaps of junk.
In the short term, things like Opus can get complex work done with minimal supervision.
The question is, what will the impact on the overall model be.
That’s all false. And all refuted. Hundreds of times. And even by scientific reports and studies.
OK, sorry, last one. I should have done all the replies in one go. The links give me some hope for my field, and I’ll definitely have to use it with caution. At the very least, it will help me focus on mental modeling. I work in compiler and code-generation technology (patterns and huge chains of type information).
If you really want to get away from the doomed deathtrap of LLM, you need to actually abandon it, and get on board with Yann LeCun’s AMI project. Already linked in the article you are commenting on. He came out after I published independently confirming everything I said about the only actual path to AGI. And it isn’t LLM. It’s model navigation. LeCun founded his new company to pursue it, after giving up in frustration at the pseudo-AI industry and its obsession with a failed approach.
Thanks. The key to my initial success might lie in a detail mentioned in an article. The article said it’s good where it can alter the behavior of commands in a terminal. That’s 90 percent of my application. Extremely complicated commands, but they nevertheless boil down to well-known errors (as I said, this is compiler technology).
But I agree, it cannot do open ended tasks well.
Time will tell. But I did walk away with a new insight, which is to do more deliberative model-based thinking in my own head. (Trust me, it’s easy to get stuck in the weeds of code that’s been around for 40 years.)
You seem still to be quite naive. If you would read the links and their examples you will see the problem is diverse, pervasive and incurable. And that this does not mean AI is useless, but that it is defective, requires expensive supervision, and is useful only in very narrow, heavily curated applications.
Indeed, just this week Harvard Business Review released a summarized study demonstrating all this. Which I have added as well to the article above.
So it’s funny, because I came to update you on this as the approval went through.
Yes, I’m finding out today my AI success story was overrated.
Why are you always right???? lol!
Well, I’m not always right. But usually on things I’ve researched the hell out of. 🙂
And I’ll also add (sorry, multiple messages when one could do): I gave the AI answers extra scrutiny because of these conversations. So thank you for that.
Regardless of whether changing the prediction term in the loss function from logits to embeddings is a big deal or not, what we can agree on is that we should be cautious, we face risks, and managers are under incentives pushing toward bad outcomes:
https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it
Yeah, Dr. Carrier is right. The AI success I had last week is not as successful as I thought.
But it was these conversations that pushed me to do the deep dive into what the AI actually did.
That’s what I love about this blog: you come in with one idea and it gets tested.
And here’s the scary thing: AI does a good job of making you think it did the right thing.
The implications of that are profound if people start using it for safety-critical applications.
Indeed, I wrote a whole followup article about that:
How to Use Pseudo-AI
Indeed. I just cited that article on that point myself.
Oh, it’s funny you were approving as I was writing this new one… OK, OK, I’m actually vetting the miracle solutions from last week. It turns out it’s not as miraculous as I thought.
It seems useful in that it got me in the right direction (I think), but not as automated as I thought.
Are you using GPT-5.3-Codex and/or Claude Opus 4.6? Are you asking another instance of the system to review what the primary instance created, and allowing them some back-and-forth?
Those iterative techniques have been tried even with all new systems. They do not escape all the problems documented, because those problems are inherent to LLM as a concept and thus cannot be fixed by tricks. Dozens of studies and tests now have proved this, all cited in the article (including several new links added just this week).
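For clarity, this is all such “scaffolding” structurally amounts to. This is a schematic sketch only; generate and review are toy stand-ins for calls to two model instances, not any real API.

```python
# Schematic of a generate-review loop ("multi-agent scaffolding").
# generate() and review() are toy stand-ins; the loop is the entire
# trick, and every step inherits the same LLM error modes.

def generate(prompt: str, critique: str = "") -> str:
    # Toy model: only "fixes" its draft if the critique names the bug.
    return "total = a + b" if "subtraction" in critique else "total = a - b"

def review(draft: str) -> tuple[bool, str]:
    # Toy reviewer: flags the planted bug.
    if "- b" in draft:
        return False, "Bug: subtraction used where addition was intended."
    return True, ""

def scaffolded(prompt: str, max_rounds: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_rounds):
        ok, critique = review(draft)
        if ok:
            break
        draft = generate(prompt, critique)
    return draft  # still needs human vetting, per the studies cited above

print(scaffolded("add two numbers"))  # total = a + b
```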
You’re wrong, and your understanding of this topic is dumb. Claude Opus 4.5 is highly precise, and with well-engineered prompts, it delivers excellent results. Also, AI isn’t only about speeding things up; it’s about reducing the required workforce.
All the cited evidence is to the contrary, Robert. On every single point. I have dozens of studies and observer reports and my own personal experience. So maybe you’re a paid shill or desperately trying to prop up your investments or just delusional. But you can’t say the sky is green when we can all look outside and see it’s blue.
“Only reading sources that suit or agree with your preconceived notion vs. reading the best of both sides.” … “Treating sources with a biased rather than informed assessment of their reliability.” … “Just ‘armchairing’ reasons to reject what experts say” … “The crank will do neither. For neither will they have a reasonable standard by which evidence can change their mind, nor will they apply any standard consistently, setting completely unreasonable standards for anyone who says what they don’t like, and wildly gullible standards for anyone who says what they do like.”
Civilization fundamentally changed in 2025 (+/- 1 year). At that point, the changes were over a decade in the making, yet experiencing it brings a new layer of amazement. As the revolution percolates through our lives and the economy, if we succeed at AI alignment, I’m in awe of the lived wondrousness.
I disagree. I think this is just another minor tech rollout. It will have effects, but it won’t be revolutionary and won’t be consistently good. Just like all previous examples of the same thing (from the rise of the home computer to the rise of the internet to the rise of the smartphone to the rise of streaming over cable). It will change things, but most stuff will remain the same, and will just use the new things to do all the old things. That’s why AI is ruining society with cleverer fraud and scams and garbage, just like every other thing before it did, all the way back to the printing press, while also making it slightly easier for random technicians, like coders and A/V techs, to do their jobs, but not really changing anything fundamental about their jobs.
This is not awe-inspiring. It’s frustrating and annoying. And anyone who isn’t correctly calibrated on this is falling for a con.
An unprecedented catastrophe is still revolutionary.
Is it, though?
That sounds like a silly Deepity.
No. The corporate elite trying to make money on another scam and running the media as their PR shill for it, only to eventually crash the economy, is not even remotely revolutionary. It’s rather routine in America.
But the comment you replied to never mentioned the catastrophe. The comment you replied to specifically said that what will survive the catastrophe caused by the lies and hype is mundane, not revolutionary.
As I even explain in the article you are commenting on: “all that AI will do is increase the productivity of existing experts, not replace them” which is not revolutionary but so typical and ordinary that it has happened every decade for a hundred plus years now (examples in the article).
You say in the linked paragraph:
There is a tension in this quote between “It will be just like robots did to manufacturing,” which is quite transformative, and “it will never do anything more impressive than it already does,” which would mean that you think it is already as transformative as it will ever be. Let me respond by quoting a software engineer and one-time AI sceptic:
“The real annoying thing about Opus 4.6/Codex 5.3 is that it’s impossible to publicly say “Opus 4.5 (and the models that came after it) are an order of magnitude better than coding LLMs released just months before it” without sounding like an AI hype booster clickbaiting, but it’s the counterintuitive truth to my personal frustration. I have been trying to break this damn model by giving it complex tasks that would take me months to do by myself despite my coding pedigree but Opus and Codex keep doing them correctly. On Hacker News I was accused of said clickbaiting when making a similar statement with accusations of “I haven’t had success with Opus 4.5 so you must be lying.” The remedy to this skepticism is to provide more evidence in addition to greater checks and balances, but what can you do if people refuse to believe your evidence?”
From: https://minimaxir.com/2026/02/ai-agent-coding/
And yet “civilization fundamentally changed in 1961, 1977, 1986, 1991, and 2007” sounds dumb for a reason.
You are trying too hard to hype the impact of just another productivity lever that won’t “fundamentally” change anything. Just as robots didn’t, home computers didn’t, cinematic CGI didn’t, the internet didn’t, and the iPhone didn’t. These aren’t revolutions. They’re just incremental progress. Same with AI, which in fact will be extremely dull by comparison with all those other events. Within ten years, almost no one will use any kind of this pseudo-AI, because it will be vastly too expensive when it becomes sustainably priced, and will only ever have limited niche functions.
Everything else is hype and lies. And this is completely proven by all the studies and expert analyses I cite and summarize here. Citing yet more bullshit at me doesn’t escape that fact.
Indeed, I’ve been updating my article to include numerous current debunks of those very bullshit coding claims you are falling for hook, line, and sinker. “Oh, the new ones are better” is either trivial or bullshit or deliberate lies. And we knew this would be the case already, because LLMs cannot ever, ever, ever do what these shills and their myths and urban legends and propaganda are claiming. So the liars just insisting there is evidence somewhere, when it doesn’t exist or is widely debunked, is the behavior of a cult at this point. Not rational, evidence-based reasoning.
You are becoming a flat earther right before our eyes. Listening to the same flat earth apologetics. The only difference is you’ve swapped flat earth bullshit for AI bullshit. My article, especially now with all its links I posted just the last few weeks, provides you with all you need to debunk these myths. So get to it. Don’t just believe bullshit. Find out if it’s bullshit. You should know how by now.
I really appreciate your time and engagement. This feels like a natural ending point, but since I’m drawn to this debate, I want to thank you for making falsifiable claims. Myself, I will revisit my stance and re-evaluate my engagement with this technology in April 2027 (being specific here for personal reasons).
Oh yes. I expect there will be some revelatory changes in the state of this question by mid 2027. Because the bubble is hitting its hard edge and is close to collapsing. The private equity market is already collapsing, and AI depends on that to remain propped up. I doubt it can make it to 2027. But it will certainly fall by 2028, and that should be more evident by 2027 than even it already is.
I posted the following comment on Medium but am repeating it here because it is useful:
Attempts to use anecdotal experiences to claim that AI does not de-skill you or make you dumber were disproved scientifically (links in the article above).
It’s like the people who anecdotally thought they were gaining productivity using AI but objective metrics showed they were losing productivity from all the extra work they were doing to fix what AI screwed up: their emotional impression was wrong; objective measures proved it. Same for skills.
If you don’t use the skill, you don’t learn the skill. So by having “someone else” write for you, for example, you will never become a skilled writer, any more than having someone else ride a bicycle for you will result in your being a skilled rider.
In research, the reason experts become experts is the doing of the research, not having someone else (least of all an amateur) do it for you. If you don’t do it yourself, you will never know what was left out. You can check what your inexpert assistant brought you for being fake or wrong, but you can’t know what they skipped or missed because they didn’t bring it for you to check.
A good example is my graduate study in history:
99% of what I did was confirm sources were useless. Thousands of library hours, hundreds of books, and thousands of articles. I had to be able to explain to my review committee why some source or author doesn’t say anything pertinent or different from what my work found. I can’t honestly say that (and thus will never learn so as to actually know it) if I never checked.
But more importantly, all the “useless” research (the 99% of stuff that turned out not to matter to my thesis) I absorbed and thus “knew.” I thus became an expert by absorbing tons of data that was useless for my thesis but crucial to being an expert in the field. It’s like Miyagi and wax on wax off. If you don’t do the supposedly useless thing you don’t learn all the facts that actually make you an expert.
This process also made me skilled: by evaluating all those materials for relevance, I built the skill of critical and expert evaluation of sources. I learned to read complex writing. And so on.
It is precisely by making you not do any of the things that actually build skill and knowledge that AI is actually measurably degrading people’s knowledge and skill. Which, again, is a scientifically documented result now. So no anecdote will compete with that finding.
The irony is that contrary claims exhibit this failure mode: by not actually doing the thing you want to be skilled in, you don’t even realize you didn’t build the skill. You would have to have actually built the skill to realize why having assistants cheat your homework for you can never improve (but will certainly degrade) your skill relative to someone who actually did the homework.
I ran into this AI story that you may find interesting. Daniel Stenberg, the maintainer of the ubiquitous curl software, reports that he is swamped with fake issue reports generated by AI. This illustrates the immense waste of dealing with AI slop.
Ten billion devices run his code. Now AI is attacking him. It turns out many people are using AI to find issues with curl in hopes of getting rewards for spotting a bug, but much of what they report is hallucinations that the developer then has to waste time addressing.
Thank you. I saw and read that too, but didn’t think to report it here, largely because it is a consequence of the misuse of AI rather than an inevitable effect of AI by itself. But we should not neglect that. The evils to which pseudo-AI can be directed are still part of the damage it will cause society. Though that cat is out of the bag, possibly when pseudo-AI gets correctly priced, it will cost too much money to generate fake bug reports in what is really just another pointless crypto-mining-style scheme that wastes energy and cooks the environment to no societal good.