Pseudo-AI is garbage and a bubble. And AI content is banned in comments on my blog. Yet people are still gullibly over-trusting AI chatbots, constantly sending me chatbot content and gushing over how brilliant it is and how it proves whatever they want. I have to explain over and over again that they are being fooled and need to stop being so naive. Because this is dangerous. Not only because AI chatbots are easy to manipulate, which means you are simply signing on to be conned by state and corporate manipulation and propaganda (you need to think for yourself, not let spooks and corpos think for you). And not only because relying on AI literally makes you increasingly dumber and more incompetent, because it de-trains you to think and do research critically (and you need to be constantly building those skills, not letting them atrophy by taking orders from a parrot). But also because pseudo-AI does not think or reason. It just regurgitates what’s on the internet. Often poorly.

This is the thing you need to accept. Asking a pseudo-AI chatbot to do research or reason for you is literally like asking a high school dropout on weed to do research or reason for you. “Hey, Jimbob, spend the day googling to answer this question for me.” All the errors, conflations, failures, omissions, confusions, even hallucinations you would expect from that poor dumb Jimbob are exactly what you will always get from chatbots. And I do mean always. Do not repeat corporate lies and claim “it will get better; it’s just around the corner.” It has been scientifically demonstrated many times over that it will never ever ever ever get better. This is as good as it will ever be. And it’s crap. The sooner you learn and accept this the better. Failure to learn this will be your doom, as you spiral into fatal stupidity and become a state or corporate lemming, all the while marveling at the nonexistent “genius” of this stupid talking parrot.

In my last article on this I covered the narrow use cases where AI has, and will continue to have, utility, but they are very narrow cases, and mostly business-end, or just productivity enhancement for specialized tasks. The reason chatbots suck is that they are general-function tools, and AI does worst at general tasks, better when constrained to highly specific tasks, and better still when guardrailed with a shit-ton of human engineering around it to keep it in check. And those applications are not relevant to my point today. Today I am only talking about the use of chatbots to replace independent or critical thinking—relying on pseudo-AI to “do your own research” (when you should actually be doing your own research), or worse, to think and reason for you, which it never actually does. A tutorial appears to be needed, so I can stop repeating myself. Because chatbots might stick around, but will never be reliable. So you need to understand how to use them competently, and not like a stooge.

Of course, pseudo-AI chatbots might not even stick around. Pseudo-AI is heavily subsidized by runaway capital that will soon vanish, so its current availability is literally unsustainable; and it will be, for most users, unaffordable going forward. I think a lot of people don’t understand this. But if you don’t get what I mean, or don’t believe me, catch up with this, and this, and this, and that, and that, and that, and this, and this, and this (many of which also relate to business applications of AI tools, but I summarized what corpos need to understand about pseudo-AI last time; this time, I’m talking about chatbots). But even in the worst-case scenario, where you will only have access to these chatbots for hundreds of dollars a month and are still stupid enough to pay that rate, you’ll still need this tutorial.

Basic Bot Vaccination Toolkit

From the constant shower of gullible people falling for AI bots that I get in my inboxes, I realize no one actually understands why and how chatbots are unreliable. So later I’ll give some real-world examples, so you can see exactly what I am talking about, and why it’s a problem. But to avoid TLDR, I want to give you the inoculation toolkit first. If you still don’t know why you need this kit, or are unclear on how to implement it, the examples in the next section will help.

You need to know:

  • What they leave out. Omission is the most dangerous error. This is how cranks and propagandists manipulate you: by omitting crucial information or sources or perspectives, which frames you into only believing or fact-checking or arguing with what they presented, and thus never realizing you should be looking for what they left out. Because chatbots don’t reason, they routinely leave crucial data out, because they don’t know what a good vs. a bad source is, or what the most important sources are, or even what a source is (see below). They just look for statistical relationships between words on the internet. That will never produce a complete source list, or the best one, and thus will never produce a complete or best answer to anything. You always have to check what they missed and put it back in, and reorder the resulting sources by reliability, and generate a more reliable answer, one that doesn’t disregard or leave crucial information out.
  • What they conflate. The most common chatbot error is conflation: it will confuse different concepts and subjects, and stitch together an answer to a question that is, actually, gobbledygook (see examples below). But unless you are already an expert in the subject, you won’t know that it did this. So you need a strategy for dealing with this. Most of the time you aren’t going to already be a real expert or PhD in the relevant subject. So how do you know when it has done this and when it hasn’t? Because the frequency of this is so high, in order to not be naive, to not get regularly miseducated or duped, you have to always assume it has done this until you can prove it didn’t.
  • What they hallucinate. Hopefully you have heard that chatbots actually invent facts and even sources, a lot. So you have to fact-check literally every claim they make and every source they cite at you, to ensure it isn’t fake. Combined with the other two bullets, this actually makes chatbots almost useless. Because you have to redo all the work they did, and more, before you can ever trust it. The only way to use chatbots productively at all is as a “ballparking” tool: something that randomly throws you into “the ballpark” of what “might” be a correct answer—as in, it might get you “near” but not “at” the correct answer. Often it won’t even do that (its rate of being wildly off is also high). But more often it won’t get “the” right answer, but something “near” it that you can then correct with a little more self-checking (no longer relying on the bot). A quick vet of its claims and sources can check if that ballpark is right; or from that ballpark you can refine the results on your own, saving you maybe a little time (although studies show this often costs you more time; often it would have been quicker to have skipped the bot and done it all yourself).

How chatbots generate source lists is instructive. Some of the links above explain how this all really works: the bot guesses at an answer to your question based on the statistical juxtaposition of words on the internet; then it looks for sources (hyperlinks) that have the highest statistical representation of the words it put in that answer. In other words, a pseudo-bot is not actually summarizing the sources it gives you. It is summarizing “the entire internet” and then trying to “guess” at which webpages will correspond to what it said. Once you understand that this is what it’s actually doing, you will stop trusting these bots.
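
To make that mechanism concrete, here is a toy sketch, purely hypothetical and not any vendor’s actual code, of what “generate first, attach links afterward” amounts to: the “sources” are simply whichever pages share the most words with an answer that was already written, and nothing ever reads or understands those pages.

```python
# Hypothetical toy sketch: "cite" whichever pages share the most words with
# an answer that was already generated. No page is read, checked, or understood.

def word_overlap(a: str, b: str) -> int:
    """Count how many distinct words two strings share."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def attach_sources(generated_answer, candidate_pages, k=2):
    """Rank candidate pages purely by word overlap with the answer; return the top k URLs."""
    ranked = sorted(candidate_pages.items(),
                    key=lambda item: word_overlap(generated_answer, item[1]),
                    reverse=True)
    return [url for url, _ in ranked[:k]]

# The answer comes first; the "sources" are guessed afterward to match its wording.
answer = "the largest number is the one with more digits"
pages = {
    "https://example.org/page-a": "a list of the largest numbers with the most digits",
    "https://example.org/page-b": "how decimal place value actually works",
    "https://example.org/page-c": "gardening tips for tomatoes",
}
print(attach_sources(answer, pages))  # cites page-a and page-b, by wording alone
```

Real systems are statistically far more sophisticated than this, but the direction of inference is the point: the citations are fitted to the answer, not the answer to the citations.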

Of course summarizing “the entire internet” is in no way a consistently reliable source of information (it’s just regurgitating whichever random group said the most words on the subject). And even this method of statistical juxtaposition of words is not a reliable way of getting a correct answer to anything, as it produces too many causes of error—this is in fact what a middle school education was supposed to train out of you: to not make this very mistake that all bots routinely make. But it’s what they are doing. If you have been fooled into thinking the bot “found sources” and then “summarized them” at you, you were tricked. Bots aren’t doing that. Indeed, they aren’t doing any of the reasoning they report to you when you ask them to, either. When you ask them to explain their reasoning, they are “backwards guessing” at what you want to hear them say. They are not showing you any actual reasoning they engaged in to get their answer. See this excellent video by ByteMonk for what’s going on here—which also proves the whole LLM approach can never achieve general artificial intelligence, not ever ever. These things do not think.

And even when you can steer a bot to only summarize a source you found, it will not consistently get it right. It’s bad at that. That it is good at it, say, nine out of ten times might trick you into thinking it only fails one in a thousand times, but that’s you being fooled (which is another reason to get Everything You Need to Know about Coincidences). The rate of error is much higher than that, and your trust in the bot’s summaries should correspondingly be much lower. And remember, error rates compound. If they are wrong 1 in 10 times, use them dozens of times and there will be an effectively 100% chance that you will be misled one or more times. And every bot is highly prone to all three errors: leaving out important information from the source you told it to summarize (and omissions are more destructive of accuracy than many people realize); conflating information in the source (and thus getting it wrong); and hallucinating information in the source (literally making shit up that isn’t there). That these errors happen, let’s say, every tenth time can fool you into being amazed by its success and over-trusting it, like a mentalist or psychic tricking you into thinking they can read your mind or talk to the dead. It’s the same game. You must never be lulled into this trust.
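
To put a number on that compounding point: even a modest per-use error rate makes escaping unscathed over repeated use a near-impossibility. A quick back-of-the-envelope calculation, assuming purely for illustration a flat 10% error rate and independent errors:

```python
# Back-of-the-envelope: chance of being misled at least once over n uses,
# assuming (for illustration only) independent errors at a flat rate p.
def chance_misled_at_least_once(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 10, 30, 50):
    print(n, round(chance_misled_at_least_once(0.10, n), 3))
# 1 0.1 | 10 0.651 | 30 0.958 | 50 0.995
```

By a few dozen uses, being misled at least once is effectively guaranteed, which is exactly why no single output can ever be implicitly trusted.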

Which is the real issue: bots don’t reason; they don’t think; they don’t “reason out” their answers. Even when you tell one to present its reasoning, it isn’t doing any of what it types out at you. It is just faking the appearance of having done that. Because all they ever do is guess, by essentially “cheating the test.” They lean over and look at what everyone else said, and try to come up with some mishmash of all those different answers, or whatever has been said that comes “near enough” to answering the question, at no point ever understanding what any of these words mean, or whether any of those other test-takers they’re cribbing from even know what they are doing, much less whether a mishmash of them all is a better or worse answer.

This is the opposite of knowing what the answer is. Bots are covertly “admitting” they never know what the answer is, that they are neither intelligent nor even conscious. They are instead just going to lie to you by arbitrarily claiming what some other randos on the internet said (or even statistically would have said). They will never know if that actually is the answer. And consequently, their error rate is alarmingly high. Even when a bot builds a line of reasoning because you told it to, it just fakes up what it thinks a line of reasoning would look like. It did not actually use that line of reasoning, nor does it “know” whether it’s valid or sound. If it “is” valid or sound, it’s entirely by accident, trusting statistical juxtapositions of words to more often land on something that looks right. But often it doesn’t. That’s why it hallucinates: that is the result of this “juxtaposition” technique inevitably going off the rails. That’s also why it has a high rate of omitting crucial sources or data (because it is relying on what gets said most often, not what gets said best). And why it has a high rate of conflation (because relying just on statistical juxtaposition will often mix up what pertains to what).

To control for all these mistakes (and you always have to, because as all the studies I linked above show, pseudo-AI chatbots can never improve their error rate: they have already hit the ceiling on that), you need to remember three steps to take, and always (always) apply them every time you use a chatbot for anything:

  • Only trust the bot for ballparking. Assume it is always at least slightly wrong, and use what it gave you as a springboard to do your own research. That will catch when it is hugely wrong and when it’s only slightly wrong. It will also solve the omission, conflation, and hallucination problems. So you need to build and always apply your own critical thinking skills (and yes, to learn those, see my critical thinking category, or take my online correspondence course).
  • Always back-check and fact-check. Bots are less reliable than Wikipedia. Because Wikipedia has far more controls (and more intelligent controls) in place. So if you are wary of Wikipedia, be even more wary of chatbots. Like with Wikipedia, rather than trust a fact-claim a bot makes, fact-check it. Rather than trust that a source it cites says what it claims, read the source and find out what it really says. And after all that, also go around the bot (bypass it) and do your own source and information search and see how your findings differ, at the very least to confirm they don’t.
  • Always logic-check its reasoning. Pseudo-AI actually deploys pseudo-reasoning. This can still land on a valid sequence of reasoning (because, statistically, the internet itself often does). But you can’t assume that has happened. It is only good at looking like it is reasoning well (hence the “pseudo”). The error rate is too high. Pseudo-math and pseudo-logic abound. So you need to know which it’s giving you every time. And the only way to do that is to vet the logic it presents you yourself. You can never circularly trust the bot to do that for you.

You might be thinking, “Wait, if I have to do all that, why do I need the bot at all? It sounds like it’s just a massive time-waster, generating way more work than if I’d just skipped the bot and done my own research well.” Yeah. Exactly. I myself have only found the ballparking function useful. Basically, I use chatbots like slightly more robust search engines, and not as oracles, much less as reliable thinkers. I don’t trust them, even when I use them. Neither should you.

Examples of What I Mean

Indeed, I would rather not even have Google’s AI summary feature on half the time, as Google search is actually worse with AI than it used to be without it (examples, examples, examples, examples, examples, examples). But actually expecting the AI to be smart or know things is something you should never do. Here’s why.

The whole internet exploded this year with gullible praise of a fraudulent science paper claiming it could derive the quark and other fundamental particle masses with an assumption of three-dimensional time. The problem is, all the math in the paper is literally bollocks. It all looks like real math. But none of it works as depicted, none of it gets the results presented, and half of it isn’t even intelligible. There are also telltale errors in how it cites sources, errors that resemble AI “guessing” and not human error or deception. This was almost certainly AI generated (and, evidently, its journal has no real peer review). Because a human expert would have caught these errors and tried to fix them somehow. What happened is AI constructed something that looked like a science paper, with equations that looked like math, but did no actual math. The outputs of the equations are bogus. Most of what’s in the equations is even bogus, with symbols never explained or that don’t mean anything.

That is a perfect example of both the dangers and the stupidity of AI. This paper illustrates that AI doesn’t ever actually understand math or even “do” math. It just fakes it. So if you don’t know that (or don’t check to find out), you, like the entire internet, will be fooled by absolute garbage like this. This is true across the board. Read Ethan Siegel’s Big Think article on this problem in Why “Vibe Physics” Is the Ultimate Example of AI Slop. The catchline says it all: “the conversation you’re having with an LLM about groundbreaking new ideas in theoretical physics is completely meritless.” Lex Clips has a great brief from world-renowned mathematician Terence Tao explaining the problem here.

There are AI tools that are “human-rigged” to be better at math (by hacking the code to put in some non-AI routines to “fix” the AI’s routine mistakes). But even those are not entirely reliable; they’re just better. And without them, bots are very bad at math in a way that illustrates everything I am saying about how bots don’t really think, but just parrot statistical relationships between words and symbols on the internet. And that’s why Kyle Orland at Ars Technica could report that an “Apple Study Exposes Deep Cracks in LLMs’ ‘Reasoning’ Capabilities: Irrelevant Red Herrings Lead to ‘Catastrophic’ Failure of Logical Inference,” demonstrating “the illusion of understanding” in these bots.

Some examples of this are surveyed by Hunter Kallay in Poor Calculations: Why Is AI So Bad At Math? Kallay also explains how “non-AI” interventions to “check the work” of AI fix some of this in some systems. But the fix had to be human-intelligent. A human had to build the back-checking system to correct the AI’s output; the AI could never do this on its own, because it would never really “know” when a result was right or wrong so as to tell the difference; it just cheats the test by peeking to see what most people answered or “would” have answered, assuming that will be correct when often it won’t be.
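
Kallay isn’t publishing any vendor’s actual code, and neither am I, but the general shape of such a human-built “non-AI” intervention is easy to sketch: a deterministic checker, written by a person, recomputes any arithmetic the bot asserts before its output is trusted. A minimal hypothetical sketch:

```python
import re

# Hypothetical sketch of a human-built, non-AI back-check: every "a + b = c"
# claim in the bot's output gets recomputed with real arithmetic.
EQUATION = re.compile(r"(\d+)\s*\+\s*(\d+)\s*=\s*(\d+)")

def arithmetic_survives(bot_text: str) -> bool:
    """True only if every simple addition claim in the text is actually correct."""
    return all(int(a) + int(b) == int(c) for a, b, c in EQUATION.findall(bot_text))

print(arithmetic_survives("Since 17 + 25 = 42, the total is 42."))  # True
print(arithmetic_survives("Since 17 + 25 = 43, the total is 43."))  # False: caught
```

Notice where the intelligence lives: entirely in the human-written checker. The bot contributes nothing to knowing whether its own claim is true.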

But more important for you to understand here is why this is the case. Consider Kallay’s first example: bots can answer the question “9.11 and 9.9, which number is larger?” with “9.11” because when it checks the statistical juxtaposition of words on the internet, “larger” is more frequently associated with “longer, more characters,” and thus the bot “assumed” the three-digit “9.11” was “larger” than the two-digit “9.9.” This illustrates the difference between actually understanding the question and its answer, and just looking at which words are closer to other words and how often. The bot does not know what “larger” means or even what “9.11” means. It just counts up ASCII characters on the internet and regurgitates what those counts would predict someone would say. And obviously that’s not a reliable way of answering a question (as this example of a failure demonstrates).
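
You can reproduce that exact confusion in a couple of lines of ordinary code, no AI required: judging “larger” by a surface feature of the text (more characters) gives the wrong answer that actual numeric comparison never would.

```python
a, b = "9.11", "9.9"

# The surface-level heuristic the word statistics amount to:
# "larger" keeps company with "longer, more characters."
print(len(a) > len(b))       # True  -> "9.11 is larger" (wrong)

# What actually understanding the numbers requires:
print(float(a) > float(b))   # False -> 9.9 is the larger number (right)
```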

Experts can use math-trained bots as productivity tools, but those bots can never replace or do the work of mathematicians, because you always need an expert to vet the results and know what things the tool can and can’t do. See Is GPT-5 Really a PhD Level Mathematics Researcher? and Grok 4: Over-Promise and Under-Deliver on Easy Riders for a good breakdown of this point, with lots of examples of bots not actually thinking and thus never actually understanding any math, even when they get something right. From those videos you will come away understanding more about how it is that bots don’t actually reason or think or do math. It’s all guesswork based on statistical juxtapositions, along with iterated automation. It’s not smart. It doesn’t know what it’s doing. It just “cheats the test,” and as a result can sometimes do some things right, but only with expert management, supervision, and correction. So you still need real mathematicians. Bots can never replace them. They can never do what we actually need mathematicians for. So you still have to ask mathematicians questions. You can’t skip that and foolishly trust a bot. (And by now you might realize why these bots are never giving you the Singularity.)

An even more serious example is the food recipe scandal: corporate AI grifters promoted using AI to come up with recipes, and it gave them poison sandwiches, mustard gas, and banana tea (involving just plopping a banana into a cup of tea). The bots do not have the knowledge or sentience to understand what ingredients are, or what would be palatable or even edible to mix together or include in what quantities, much less what would be poisonous or dangerous. You can’t replace human-vetted cookbooks with AI. And that’s literally the case: because, yes, an AI-generated Harry Potter cookbook glowingly recommends aconite tea as “soothing,” and even insists you follow its recommended dose precisely to avoid lethality—then recommends a dose that would kill an entire platoon. This is a perfect example of AI literally having no knowledge or comprehension of what it is doing or saying. And if so here, so everywhere: it is making these same mistakes every time you use it.

For the bigger picture here, and more exploration of the significance of these examples, watch Total Stagnation – The AI “Nothing” Products at Upper Echelon; and also Jim the AI Whisperer’s “Want to See How Insanely Stupid AI Really Is? Ask ChatGPT to Answer These Riddles in Just One Word” at Medium. These demonstrate not just that AI sucks, but why it sucks. Because when you understand the why, then and only then will you understand that these things do not and cannot think. And that helps you develop correct ways to use pseudo-AI productively, and not gullibly (for a really good example in AI coding, see “I Burned Millions of Tokens on Claude Code. Here Is Why ‘Vibe Coding’ Is a Trap.”).

My Own Examples

Examples that have happened to me include the time I asked Google for references on vulgar manuscripts of Homer, the technical terminology in Classics for manuscripts of the “vulgar” or “popular” (as opposed to curated or elite) text of Homer. A year ago, Google’s native chatbot tried confidently explaining to me that vulgar manuscripts of Homer were written in Latin by Jerome, and went on to jumble up the history of the Latin Vulgate Bible and the Archaic Greek of the Iliad and Odyssey, and none of the sources it cited had anything to do with what I actually asked. If I were not an expert, I would have had no way of knowing this was all conflated bollocks. I’d have had to check its claims and sources, which would have led to an hour of confusion until I realized its mistake. I ran the same query today and got a correct answer this time, with relevant sources. But how do I know when it’s doing that, or doing what it did a year ago? How do I know when it has learned and when it hasn’t?

Another more recent example is when someone tried asking a bot to explain the evidential reasoning behind my theory of the Gospels, and it vomited up this:

The Gospels are intentional literary constructs required for the transition from a celestial figure to a historical one. Carrier, for instance, argues that the need for a “seeming” ministry was essential to bridge the gap and that the Messianic Secret is part of that process of historicization.

This is presenting itself as a confident summary of something I argued. But it is actually a bizarre gobbledygook of unrelated things. I never talk about a “seeming” ministry, and that has nothing to do with the question of the “messianic secret,” other than those two discussions being occasionally sort of adjacent. So here we see the actual thing it’s doing (guessing at an answer based on the statistical juxtaposition of words it otherwise has no comprehension of—at all, much less in context), and how it gets a wrong answer that sounds like a correct answer. This is conflation error.

The wording is also weird, as if what it wants to say is that creating a ministry for Jesus was part of the process of his historicization (a statement so trivial as to be banal) and that Mark’s invention of the messianic secret within it had something to do with that—but the bot doesn’t know what, or how to articulate even what I said about that. This is an example of omission error. Someone who did not actually read what I say about these two distinct subjects would here get a completely wrong idea of their relation, not only learning nothing about my take on the messianic secret or Mark’s invention of a ministry, but coming away from these omitted details with the false impression that I argued the one from the other. In fact, I argue the messianic secret is simply an allegorical representation of 1 Corinthians 2 (even using a Homeric parallel), an artifact of the cosmic gospel, and not an apologetic for Mark’s story being hitherto unknown.

Because (perchance) I happen to be an expert on me, I know the bot culled the word “seemed” from my even more unrelated discussions of Docetism, which aren’t about the ministry or the messianic secret. In fact, what I argue is that scholars were wrong ever to think so, yet the bot here mistakenly implies I took the opposite position—and mistakenly attributes that conclusion to me, when in fact it is now the position of all experts in Docetism, which I merely document, and I am just agreeing with them (see Did ‘Docetism’ Really Even Exist?). These are omitted details that are crucial to know to avoid being misled or misinformed. But this illustrates a common mistake I have seen many times, where a bot will claim or imply a person argued a thing that in fact they argued against, by mistaking the statistical juxtaposition of the author’s position and the position they are rebutting: pseudo-AI cannot tell the difference. It is only guessing based on word-placement.

Likewise, I know it culled the word “ministry” from my discussions of that ministry’s absence from Paul and 1 Clement, and its demonstrable fictionalization in Mark (and in later riffers on Mark); and it culled the phrase “messianic secret” from my separate discussion of Mark’s use of the letters of Paul (in this case to reify the “secret plans of God” motif in 1 Corinthians 2). It mixed up these two unrelated discussions with an even more unrelated discussion of Docetism, and then misrepresented me as taking the classic Docetist position (when in fact I am explicitly rejecting it). These are all disconnected discussions. The bot conflated them all, and got my positions wrong or failed to correctly describe or connect them. All because it is only looking for what words are close to each other, not actually understanding anything.

And there is no way you could know this.

This is the central problem with pseudo-AI chatbots: people tend to just take what they say as correct; but in fact, it can be wildly or even subtly incorrect, and there is no way for you to know. So how can you ever trust anything it tells you? You would have to redo all the research you were trying to avoid by using this dodgy internet parrot instead. So why use the dodgy internet parrot instead? The risk of being repeatedly, and cumulatively, disinformed and misled should scare you here. It should not motivate you to be impressed and keep using ChatGPT or whatever.

And this extends even to subtle misleads. It’s not just about blatant errors. For example, here even what it got right it got wrong. While it is vaguely (and trivially) true that “the Gospels are intentional literary constructs required for the transition from a celestial figure to a historical one,” this wording makes it sound like a teleological theory, that the authors of the Gospels intended that end game, when in fact my argument is that the first authors (Mark surely) did not intend to sell historicity to insiders, but were building a parable to confuse outsiders and edify insiders. The idea that the “historical Jesus” narrative would replace what it symbolized may even have horrified an author like Mark.

So how do you parse “Carrier argues that the need for a ‘seeming’ ministry was essential to bridge the gap”? Essential as in “that obviously had to have happened, being simply the only possible causal sequence”? Or essential as in “the Gospel authors needed to invent a ministry in order to convince people Jesus existed”? Because those are not the same thing. And confusing the two is a common error made by my critics. Which reminds us that chatbots are also “just believing” whatever my critics say, and jumbling that up with what I say, so as to “guess” at what I said—because chatbots literally don’t know the difference between an author and their critics, or between straw men and steel men, or between accurate statements and mistaken ones. It’s all equally the same to the bot. It just adds up how often each thing gets said, and literally decides that the thing most often said is true. Which is precisely the thing critical thinkers should never be doing. So critical thinkers should never be relying on chatbots.

And to test all this just now, I asked Google’s native bot “did Richard Carrier argue that Wallace Marshall is a liar” and it said yes, and linked to my demonstration that David Marshall lied in a live debate. It then cited as sources my written debate with Wallace Marshall, who never lied about anything, nor have I ever suggested he did. In fact, the bot said:

Carrier stated that “Marshall’s dishonesty” was evident in these claims, which he considered “fantastically false”. The two engaged in a series of written and possibly spoken debates covering topics such as the existence of God, morality, and the historical Jesus.

It then linked to my written debate with Wallace Marshall. So it could not tell that Wallace and David Marshall were different people; it falsely reported that I called Wallace a liar (and even linked to my discussion with Wallace as “evidence” I did); it confused who I debated these subjects with (I never debated historicity with Wallace; and never debated morality with David); and got wrong who I debated in “written” (Wallace, not David) or “spoken” format (David, not Wallace).

This illustrates how bots don’t think. They literally cannot tell when two people have different names. They can’t even tell these are two different people. Because they are just going on which words most often sit next to which others, the fact that I refer to people by their last name creates a numerical “distance” between any given sentence and the pairing of a first and last name, which the bot reads as “those are less likely connected” than the rest; so it statistically concluded that every Marshall is David, because I cued it to look for a charge of lying. Which shows the role of omission as an error vector: because my query framed the bot into only caring about what I mentioned (the word “liar”), it could not “see” the difference between my writings about David and Wallace, and thus did not report the lack of that word or its cognates in any of my articles about Wallace.

So the bot is doing no thinking here. It’s just blindly parroting statistical relationships, producing answers that are too often wrong to ever implicitly trust. I could go on because examples abound (like these, and these, and these, and these, and famously this; with an explanation of why bots do this here). We now even have an example from biblical scholar James Tabor that shows error, hallucination, lying, and agreement with what you want it to say—indeed it even covertly repeats Tabor’s own writings back at him, without questioning them or presenting more widely published contrary views, proving it can’t be trusted even when it’s right. But hopefully by now you get the picture.

Conclusion

Needless to say, AI bots also give bad, sometimes dangerous, medical advice. Because they don’t know anything about medicine. And if you ever think they do, you are being fooled. Likewise in law and academics and education and government and mental health. And so on. Examples of this kind of bizarre failure abound (remember when a company’s AI spontaneously decided to delete that company’s entire database?). And examples will never end, because this can never be improved. Yes, that includes corpos’ favorite application (because it supposedly eliminates “useless jobs”): AI customer service—in practice, a disaster (no, really, it’s worse even than those robots of yore everyone already hated). And it will never be better than this. Yet corpos are idiots and never learn.

Don’t be as dumb as them. Know that pseudo-AI is always unreliable, and will always be unreliable, because it never thinks, it is never reasoning, and it knows nothing. If you ever use it, you must always use it with a full understanding of that fact. It will often omit crucial information, yet will present the appearance of having been thorough. It will often conflate things, yet trick you into thinking it didn’t, because it is good at faking the appearance of an expert discourse. It will often hallucinate, yet you will have no way of ever knowing when, unless you literally check every single thing it says against a more reliable source.

And this is why you must always use chatbots with that skeptical and corrective toolkit, every single time you use them. Never trust it, only use it for ballparking (if at all), always back-check and fact-check everything it says, and never trust its statistically-crafted appearance of reasoning; always carefully check the validity and soundness of anything it presents you that even resembles logic. Do not be impressed when it presents some summary of a scholarly consensus, or an argument for or against something, or an analysis of a thing, that “sounds” astute and well-researched. It will often be wrong, sometimes catastrophically. It is only faking all of this. It is pretending to sound astute and well-researched. It is not actually producing astute and well-researched answers to your queries. It is cheating the test, a method that is really just repeating the internet at you (which you could have simply done yourself), and which is inherently and perpetually prone to errors.

So do not show me some ChatGPT analysis and ask me what I think of it. I never trust that shit and I see no value in it. Do the work yourself. Use your own mind. Do your own research. Build your own analysis. And keep it succinct and legitimately-sourced (no lengthy wordwalls or giant essays; I will never have time for it). Then maybe you’ll have something worth my time to show me—or something worth even your own time to contemplate. And you’ll have honed your critical thinking skills in the process, rather than withered them. Which is far better for you, me, and the world.

Special Request: I don’t usually ask for this, but on this thread it’s welcome: if you encounter your own examples of AI chatbots messing up, organically in your travels, please screencap the aberrant result and post it in comments below. The comment tools should have a button bottom right for adding images, which may limit file size, but smaller files are obviously preferred anyway. Just enough to be legible is all we need. If any explanation is needed of what is going wrong there, please add that, too. Bonus points if you can identify specific instances of (1) fatal omissions, (2) fatal conflations, or (3) full-on hallucinations. Double points for spotting a blatant math or logic error, or any clear-cut case demonstrating that it doesn’t understand what it is talking about.

§

All comments go to moderation except for Patrons etc. See Comments & Moderation Policy.
