Posts

Showing posts from January, 2026

Is Neuroscience Compatible with Free Will?

I've tackled this question before, but it keeps coming up -- largely because of AI models that create human-like outputs and can effectively pass the Turing Test. Despite their complexity, modern large language models (LLMs) are wholly deterministic and can therefore be understood in terms of mechanistic cause and effect: put in this exact input, run it through these steps, and you get that specific output, 100% of the time. For the last couple of years, people have talked about "generative AI" as though it were non-deterministic, but researchers at Cornell University showed in 2025 that this is false: you can in fact work backwards from an AI output to the prompt that was used to generate it. The idea at one time was that generative AI was "creative" because it assigned probability-based weights to various outputs and selected the most likely one. But the new research shows that if you tightly control inputs, you do always get the same output, ...
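The determinism point can be sketched with a toy model -- emphatically not a real LLM; the candidate tokens and the weighting function below are invented for illustration. If the weights are a deterministic function of the input context and decoding greedily picks the highest-weight token, then the same prompt always yields the same continuation:

```python
import hashlib
import random

# Toy illustration only -- NOT a real LLM. The candidate tokens and the
# weighting scheme are invented for the sake of the example.
CANDIDATES = ["cat", "dog", "bird", "fish"]

def next_token_weights(context):
    # Weights are a deterministic function of the context: the same
    # context always yields the same weights. (sha256 is stable across
    # runs, unlike Python's built-in hash() for strings.)
    seed = int(hashlib.sha256(context.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return {tok: rng.random() for tok in CANDIDATES}

def generate(prompt, steps=3):
    # Greedy decoding: always select the highest-weight token, so the
    # whole pipeline is input -> fixed steps -> one specific output.
    out = prompt
    for _ in range(steps):
        weights = next_token_weights(out)
        out += " " + max(weights, key=weights.get)
    return out

# Same exact input, same steps -> the same output, every time.
assert generate("the pet store sold a") == generate("the pet store sold a")
```

Even the "probability-based weights" step here is mechanistic: randomness only enters if you deliberately sample from the weights with an uncontrolled seed, which is a choice about how the model is run, not a property of the model itself.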

Some Things That AI Probably Shouldn't Do

I'm on record endorsing the use of AI by students to improve the quality of their writing and their thinking, but also expressing concern about the potential for autonomous AI to end civilization! So what's the deal here? Am I for AI or against it? As in many areas of life, the answer is "it depends ...". In this blog post, I will look at some things that AI probably should not be doing for us, which might help to delineate the areas in which it can be more beneficial. Let's start with ethics. Although some techno-futurists have argued that AI will eventually know what's good for us better than we do ourselves, a recent report found notable flaws in the reasoning of a "robo-ethicist" built on large language models (LLMs). The LLMs' ethics were consistently more influenced by utilitarian thinking (do what causes the least harm or the most benefit in this specific situation) than by reasoning from first principles (Kant's id...