If you have been exploring new large-language-model forms of artificial intelligence like ChatGPT over the past year, you are probably now convinced that these generators of humanlike writing are not actually conscious. They are instead like a very fancy version of the autocomplete suggestions that pop up when you are typing an email. They make good predictions, and so they can approximate something a human might have written (or drawn, or composed). But when you look under the hood, "there's no there there": the algorithm has no awareness of what it's saying.

This state of affairs raises the question of what it would take for an artificial intelligence to become genuinely self-aware. The major obstacle to achieving conscious AI is that we don't understand what makes us humans self-aware, that is, what "consciousness" is in the first place. For decades we had a workaround for this problem in the form of the Turing Test: ...