I've tackled this question before, but it keeps coming up -- largely due to AI models that create human-like outputs and can effectively pass the Turing Test. Despite their complexity, modern large language models (LLMs) are wholly deterministic and can therefore be understood in terms of mechanistic cause and effect: put in this exact input, run it through these steps, and you get that specific output, 100% of the time. For the last couple of years, people have talked about "generative AI" as though it were non-deterministic, but researchers at Cornell University showed in 2025 that this was false: you can in fact work backwards from an AI output to the prompt that was used to generate it. The idea at one time was that generative AI was "creative" because it assigned probability-based weights to various outputs and selected the most likely one. But the new research shows that if you tightly control the inputs, you always get the same output, ...
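
The determinism point can be illustrated with a toy sketch (this is not a real LLM -- just a stand-in "model" that derives fake next-token probabilities from its context): when the input and the random seed are both fixed, the sampling loop produces byte-identical output every run.

```python
import random

def sample_next_token(probs, rng):
    """Sample a token index from a probability distribution using rng."""
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

def generate(prompt_tokens, seed, steps=5):
    """Toy stand-in for an LLM: fake next-token probabilities are derived
    deterministically from the current context. The point is that fixing
    the input and the seed fixes the entire output sequence."""
    rng = random.Random(seed)
    out = list(prompt_tokens)
    for _ in range(steps):
        h = sum(out) % 7                       # fake "logits" from context
        raw = [h + i + 1 for i in range(4)]
        total = sum(raw)
        probs = [p / total for p in raw]       # normalize to a distribution
        out.append(sample_next_token(probs, rng))
    return out

a = generate([1, 2, 3], seed=42)
b = generate([1, 2, 3], seed=42)
print(a == b)  # True: same input + same seed -> identical output
```

Real inference stacks add complications (batching, floating-point nondeterminism on GPUs), but the principle is the same: the "randomness" in sampling comes from a pseudorandom generator, and controlling its seed along with the input makes the whole pipeline reproducible.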