

Showing posts from January, 2026

New Evidence Supporting TMT as an Explanation for Type 1 Diabetes Self-Management Success

I have written previously about how the Intuitive Mind affects type 1 diabetes (T1D) self-management, for example based on people's situational awareness of changes in their own blood sugar as they occur. In one prior study, our team found that Intuitive-level variables such as motivation and social perception (based on a daily survey) were related to successful daily blood sugar control (based on time-in-range [TIR], a commonly used metric from continuous glucose monitoring [CGM]) among adolescents with T1D. I also wrote about my own experience trying a CGM for 2 weeks, which did seem to result in increased situational awareness. In another study, we found that adolescents' proactive use of a hybrid closed-loop system (pictured above), which incorporates an insulin pump and a dosing algorithm together with a CGM, led to better TIR results than when people waited for the technology to tell them what to do. Specifically, adolescents who looked at their CGM readout l...

Is Neuroscience Compatible with Free Will?

I've tackled this question on occasion before, but it keeps coming up -- largely due to AI models that create human-like outputs and can effectively pass the Turing Test. Despite their complexity, modern large language models (LLMs) are wholly deterministic and can therefore be understood in terms of mechanistic cause and effect: put in this exact input, run it through these steps, and you get that specific output, 100% of the time. For the last couple of years, people have been talking about "generative AI" as though it were non-deterministic, but researchers at Cornell University in 2025 proved that this was false. You can in fact work backwards from an AI output to the prompt that was used to generate it. The idea at one time was that generative AI was "creative" because it assigned probability-based weights to various outputs and selected the most likely one. But the new research shows that if you tightly control inputs, you do always get the same output, ...

Some Things That AI Probably Shouldn't Do

I'm on record endorsing the use of AI by students to improve the quality of their writing and their thinking, but also expressing concern about the potential for autonomous AI to end civilization! So what's the deal here? Am I for AI or against it? As in many areas of life, the answer is "it depends...". In this blog post, I will look at some things that AI probably should not be doing for us, which might help to delineate the areas in which it can be more beneficial. Let's start with ethics. Although some techno-futurists have argued that AI will eventually be better at knowing what's good for us than we are ourselves, a recent report showed that a "robo-ethicist" built on large language models (LLMs) displayed notable flaws in its reasoning. LLMs' ethics were consistently more influenced by utilitarian thinking (do what causes the least harm or the most benefit in this specific situation) than by reasoning from first principles (Kant's id...