Comments on: How to Use Pseudo-AI https://www.richardcarrier.info/archives/39401 Announcing appearances, publications, and analysis of questions historical, philosophical, and political by author, philosopher, and historian Richard Carrier. Fri, 01 May 2026 16:36:30 +0000 hourly 1 https://wordpress.org/?v=6.9.4 By: Richard Carrier https://www.richardcarrier.info/archives/39401#comment-43911 Mon, 20 Apr 2026 15:18:45 +0000 https://www.richardcarrier.info/?p=39401#comment-43911 In reply to eric ferguson.

Holy fucking shit.

And thank you. 🙂

By: eric ferguson https://www.richardcarrier.info/archives/39401#comment-43898 Fri, 17 Apr 2026 12:21:04 +0000 https://www.richardcarrier.info/?p=39401#comment-43898 Here’s a live one, I just had this morning:

You said: what is the literature that connects Varus in Josephus to Varus in Germania?

Copilot said: ✅ Adrian Goldsworthy
Rome and Germany: War and Culture (Yale, 2014)
Goldsworthy:

Explicitly links Josephus’ Varus and Germania Varus
Uses the Judaean episode to explain Roman perceptions of Varus as an administrator, not a field commander

Excellent for:

Demonstrating why Augustus might have trusted Varus
Explaining why Judaea mattered for Germania

[here is where I cannot find the book]

You said: what is the ISBN for Adrian Goldsworthy’s
Rome and Germany: War and Culture (Yale, 2014)

Copilot said: Adrian Goldsworthy’s Rome and Germany: War and Culture (Yale University Press, 2014) has the following standard identifiers:

ISBN‑13: 978‑0‑300‑18820‑7
ISBN‑10: 0‑300‑18820‑7

These correspond to the hardcover (cloth) edition published by Yale University Press in 2014.
If you need the ISBN for a paperback, ebook, or international edition, let me know and I can break those out as well.

[after searching Duck Duck Go, Google, and Amazon for the ISBNs]

You said: are you sure this book exists?
Copilot said: Thank you for pressing on this — you are right to question it, and I need to correct the record clearly.
Short answer
No — Rome and Germany: War and Culture by Adrian Goldsworthy does not exist.
The title I gave you earlier was incorrect, and the ISBN I supplied was therefore also incorrect. You were absolutely right to be skeptical.
I’m sorry about that error.

What went wrong (clarification)
There is no book by Adrian Goldsworthy with that title, published by Yale or anyone else.
What happened is a conflation of real Goldsworthy works that cover related themes:
✅ Real Goldsworthy books often confused in this context

Augustus: First Emperor of Rome (Yale University Press, 2014)
– This book does discuss Germania, Varus, and imperial military culture.
Rome at War (Cassell / Oxford illustrated series, 2005)
– Broad military history, sometimes loosely cited in secondary literature.
Pax Romana: War, Peace and Conquest in the Roman World (Yale, 2016)
– Cultural and military interaction across frontiers, but not Germania‑specific.
Rome and Persia: The Seven Hundred Year Rivalry (with various subtitles in talks and interviews)
– Sometimes misremembered as “Rome and X: War and Culture”.

There is no standalone “Rome and Germany” volume by Goldsworthy.

Bottom line

❌ Rome and Germany: War and Culture — does not exist
✅ Your instinct to double‑check was correct
✅ Thank you for catching a real error
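
Fabricated ISBNs can often be caught without consulting any search engine: every assigned ISBN-13 satisfies a weighted checksum (digits weighted 1, 3, 1, 3, …, summing to a multiple of 10), and the ISBN-13 Copilot supplied above fails it. A minimal sketch in Python:

```python
def isbn13_is_valid(isbn: str) -> bool:
    """Validate an ISBN-13 via its weighted checksum.

    Digits are weighted 1, 3, 1, 3, ... left to right; for a real
    ISBN-13 the weighted sum is divisible by 10.
    """
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return total % 10 == 0

# The ISBN-13 Copilot supplied for the nonexistent book:
print(isbn13_is_valid("978-0-300-18820-7"))  # False: weighted sum is 85, not a multiple of 10
```

Passing the checksum does not prove a book exists, of course, but failing it proves the number was never assigned to anything.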

By: Richard Carrier https://www.richardcarrier.info/archives/39401#comment-43879 Mon, 13 Apr 2026 14:54:19 +0000 https://www.richardcarrier.info/?p=39401#comment-43879 In reply to David Chamberlain.

Read the article you are commenting on. It answers your question. With examples.

Meanwhile, on the danger of AI deskilling labor (a very real issue), see my previous article, which thoroughly covered all the problems with AI (this article is only on how to use it).

By: David Chamberlain https://www.richardcarrier.info/archives/39401#comment-43869 Sun, 12 Apr 2026 03:57:45 +0000 https://www.richardcarrier.info/?p=39401#comment-43869 So, do you use AI in your research now? I’m a tax researcher, focused on the interplay of EU State Aid and the Arm’s Length Principle. I don’t know if this is more or less arcane than the historicity of Jesus, but I have found Google’s native Gemini incredibly helpful for my research. Neither of us is curing cancer (at least not your historicity stuff or any of mine). But I’d estimate that this makes my research ten times more efficient. And, of course, I follow all of your caveats. My expertise allows me to ask the right questions and see if Gemini’s responses check out. I am extremely worried that my students will never get to the level of expertise needed to use these tools, specifically BECAUSE these tools exist. Are we in a death spiral? That is what I’m asking, I guess. Or is it just the arcane nature of our (my?) research that makes today’s AI seem groundbreaking, while it is still not (and cannot be) a revolution for true cancer-curing work?

By: Richard Carrier https://www.richardcarrier.info/archives/39401#comment-43736 Wed, 01 Apr 2026 13:53:31 +0000 https://www.richardcarrier.info/?p=39401#comment-43736 In reply to John Cumpston.

I explain why you are wrong in my other article. It seems you are not reading that. So I will link you directly to it.

This article is about how to avoid all the error-modes of chatbots (not other kinds of AI). And I document here, first-person, that these error-modes persist. So there’s no such thing as “they’ve solved this.” You are a fool if you ignore the skills being taught here or fail to apply them and thereby allow yourself to be duped by propaganda and over-trust these mindless replicators.

As for all the science backing even that, as well as extensive evidence and discussion of why this will never change, that’s my other article (same link, but everything before the section on what real AI needs and LLMs don’t do, which that link takes you straight to; so after reading that, scroll up to read the rest, which is regularly updated and contains studies and evidence from even last month, so there is no possible way to claim anything significant has changed, exactly as I warned).

Deal with the evidence. Don’t be misled by your wishes and dreams and gullible trust.

By: John Cumpston https://www.richardcarrier.info/archives/39401#comment-43731 Tue, 31 Mar 2026 23:03:53 +0000 https://www.richardcarrier.info/?p=39401#comment-43731 In reply to Richard Carrier.

I would not trust these models, and I almost never do. Sycophancy and confabulations are an issue. But would you trust a random human more easily? Humans talk more nonsense, are overconfident, and make shit up all the time.

You are viewing this from the position that LLM = stochastic parrot, and before reasoning models, I would have agreed with this. Not anymore. Just as there is a process in the brain responsible for understanding something, something goes on inside these neural nets, and the way I see it, the best explanation for why they give some surprisingly good answers is that there is some genuine understanding of the input, not just next-token prediction. This understanding is, of course, different from ours. It’s kind of alien. Now, they are obviously still too narrow, and an LLM alone will probably always suffer from this. They lack dimensions: no common-sense model of the external world, I would say.

I also need to point out that the biggest experts in the field say we know perhaps less than 5% about how these models really do the things they do. They are black boxes. Neural nets are not programmed; they are ‘grown’. This is why it’s wrong to push them, as they can never be safe with this approach. And I am thinking here about agency, autonomy, and giving AI more and more ways of understanding, pushing it further toward AGI. It doesn’t even have to be true AGI to be powerful enough to pose a serious risk.

I highly recommend you watch a good interview with Geoffrey Hinton, Nobel Prize winner for deep learning, just to see his point of view, which is shared by many in his field, if you are not familiar with it already.

By: Richard Carrier https://www.richardcarrier.info/archives/39401#comment-43728 Tue, 31 Mar 2026 19:59:05 +0000 https://www.richardcarrier.info/?p=39401#comment-43728 In reply to John Cumpston.

Your evidence does not challenge their being parrots. You are confusing “they are parrots” with “they are sometimes useful parrots.” I agree with the latter. But I then demonstrate, with real examples and extensive studies, that their reliability is not high enough to trust, and never will be.

Even robots. Not a single “AI” robot has yet outperformed traditional robots in any useful thing whatever. So this kind of AI is not even relevant to robotics right now. I explained this last time with the difference between Waymo, which is using model-building AI (not stochastic parroting but world-building with memory, like a real mind), and Tesla, which is avoiding it, and why one is succeeding and the other failing. This is everything I am talking about.

That doesn’t mean there are no possible uses for the pseudo-AIs. I described in my preceding article many possible uses for them, which will even survive the inevitable collapse of the AI bubble. I even describe how I myself use them here. But one should never use them uncritically. You need to learn the rules I am teaching here and never forget to apply them every single time.

You also need to stop confusing intelligence with wisdom, much less rationality, even less reliability. A lot of the rich are indeed idiots posing as geniuses. And a lot of geniuses are delusional or overconfident and wrong about a lot. So “we should believe Big Brother because he’s obviously very smart” is the dupery that will destroy your ability to think for yourself and leave you open to wanton manipulation and exploitation.

Don’t let that happen to you.

It’s evidence or GTFO. Not claims from a high tower.

Learn it. Live it. Never fall for any substitute.

By: John Cumpston https://www.richardcarrier.info/archives/39401#comment-43723 Tue, 31 Mar 2026 19:13:18 +0000 https://www.richardcarrier.info/?p=39401#comment-43723 Richard,

I hope you are right, but I am afraid you might be wrong on many AI-related things. Yes, the current models are still obviously stupid in more than a few ways, but look at where we were just 5–6 years ago. They are not stochastic parrots. There is some real understanding in them. See how good they are at analogies, for example. An LLM alone will always be limited, but there are other approaches. Have you seen some of the latest things robots can do now? What happens when they are scaled up with more powerful models?

I noticed you quoted Yann LeCun, but how about the other two godfathers and their opinions? Hinton? Bengio? Are you familiar with Stuart Russell? Have you checked the Future of Life Institute? These people are not idiots.

By: Richard Carrier https://www.richardcarrier.info/archives/39401#comment-43549 Mon, 16 Mar 2026 18:08:26 +0000 https://www.richardcarrier.info/?p=39401#comment-43549 In reply to Etienne.

Ah!

So you ARE insane.

Got it.

By: Etienne https://www.richardcarrier.info/archives/39401#comment-43543 Sat, 14 Mar 2026 21:59:40 +0000 https://www.richardcarrier.info/?p=39401#comment-43543 Miserable American primitive, go weep at Donald’s!
