This blog has always included a technology component, including using sensor devices to measure the workings of the Intuitive mind; using technology-based surveys to predict exercise or diabetes self-management; delivering tailored messages to support exercise or reduce fatigue; using technology to support a social reinforcement approach to behavior change; changing electronic health record systems to adapt to clinicians' needs as they become more tired during the course of a shift; and using biofeedback to support mindfulness. These technologies are also gaining momentum among researchers and technology developers nationally. In fall 2022 I had the chance to attend a Smart Health conference on technology and health that was jointly sponsored by the National Institutes of Health (which funds health care research) and the National Science Foundation (which funds basic science), which highlighted recent and ongoing work at the intersection of health care and technology.
"Smart health" is an umbrella term that includes three components:
- eHealth (e = electronic), which involves hospital- or clinic-based technology upgrades, "meaningful use" of electronic health record systems to improve patient care, clinician-facing interventions like electronic workflows or checklists, patient portals to share clinical information, and big data projects that sometimes aggregate patient data across large health systems
- mHealth (m = mobile), which involves sensors, apps, and tailored messages. mHealth interventions are mainly patient-facing, although there is increasing interest in developing interoperability between patients' personal sensor data (smartwatches, Fitbits, etc.) and their more formal healthcare records.
- uHealth (u = ubiquitous) is the newest face of Smart Health, involving smart homes, sensors to monitor health outcomes like mobility, and robotics or other home-based interventions to help patients with activities of daily living. Many uHealth interventions are targeted to older adults and are designed to help them remain in independent living arrangements.
Several presentations at the conference highlighted efforts to infer a patient's emotional state from their physiological measurements. The best-researched of those measurements is heart rate variability (HRV), which is responsive to psychological stress as well as physical fitness. Several projects are continuing to investigate a question I'm personally interested in: whether sensors can give us a window into someone's state of consciousness or behavior outside the level of conscious awareness. Some interesting examples at the conference were the use of heart rate and movement sensors during nurses' shifts at a hospital to proactively detect burnout, the use of HRV data to predict suicide risk among patients with mental health conditions, and the use of sensors to infer whether a patient is experiencing cravings for opioids and therefore has higher risk for relapse after medication-assisted treatment. I also saw a recent study about skin-patch sensors that allow for real-time physiological monitoring, combining the real-time data collection capabilities of a smartwatch with the analytic capabilities of a blood test. I have seen a number of research presentations about this technology over the years, but it seems like it's finally getting close to market. One particular research challenge connected to sensor data is the difficulty of establishing "ground truth," because each source of information may include its own biases and shortcomings, with no single gold-standard measure. Some solutions are to incorporate multiple data streams into predictive models, to gather even more data to augment the information currently available, and to use machine learning to parse information that's too complex for humans to easily understand.
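To make the multi-stream idea concrete, here is a minimal sketch of how several noisy sensor streams might be fused into one composite score when no single gold-standard measure exists. All of the numbers, variable names, and the `fused_risk_score` function are invented for illustration; real projects would use far more sophisticated models than a weighted average of z-scores.

```python
import numpy as np

def zscore(x):
    # Standardize a stream so streams on different scales are comparable.
    return (x - x.mean()) / x.std()

def fused_risk_score(streams, weights=None):
    """Combine several standardized sensor streams into one composite score.

    A crude stand-in for the multi-stream predictive models described
    above: with no gold-standard measure, each stream is treated as a
    noisy view of the same underlying state.
    """
    z = np.vstack([zscore(s) for s in streams])
    if weights is None:
        weights = np.ones(len(streams)) / len(streams)
    return weights @ z  # weighted average across streams, per time point

# Hypothetical hourly readings over one nurse's shift (invented numbers):
hrv = np.array([55.0, 50.0, 42.0, 38.0, 35.0])    # HRV falls under stress
movement = np.array([120.0, 180.0, 260.0, 310.0, 400.0])  # steps per hour
self_report = np.array([2.0, 3.0, 5.0, 6.0, 7.0])  # 0-10 stress rating

# HRV moves opposite to stress, so flip its sign before fusing.
score = fused_risk_score([-hrv, movement, self_report])
print(score.round(2))
```

A real system would also have to handle the streams' different sampling rates and missing data, which is part of the pre-processing burden mentioned later in this post.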
Researchers across the country are also using technology in new ways to diagnose illnesses, as we saw in the ability of HRV measures to detect COVID-19 infection. Researchers at the conference presented preliminary results from a study designed to detect autism in kids based on where their gaze lingered during a social interaction; an optical scan of blood vessels in the retina to predict Alzheimer's disease progression; auditory sensors to analyze breathing and prevent sudden infant death syndrome (SIDS) in newborns; and a kinematic analysis of patients' movement as a method for assessing their fall risk. New types of sensors are proliferating, such as a salivary sensor to predict a patient's overall health from their dental health, a "smart shoe" that measures things like a patient's gait and ankle support, and a "smart blanket" that tracks an older patient's health when draped across their lap. One team was even using data from a patient's driving patterns (collected by the car's on-board computer) in an effort to find early warning signs of mild cognitive decline. Any new diagnostic procedure using technology is potentially vulnerable to bias, and the concern grows as the data are used to make decisions with real-world consequences, like taking away a patient's driver's license. Artificial intelligence methods provide both protections and novel risks, and ethical considerations are an important growth area for technology-focused health care research.
Technology is also a delivery method for health behavior change interventions, as in the case of apps that promote exercise. A number of projects were using mobile technology to deliver personalized messages to patients, particularly in the form of feedback or trends from sensor-based monitoring. Another promising use of messaging was to provide updates about older adults' health to their medical caregivers or family members. These uses of daily sensor data require a good bit of pre-processing, and ideally a simple visual presentation (e.g., infographics) to help end-users understand the complex information. New methods for longitudinal data analysis are also emerging, like LASSO regression or testing "knockoff" variables to determine whether fake data points might be just as predictive as real ones. (Both of these are designed to protect against type 1 statistical errors, which are much more common in very large datasets.) For clinicians, the presentation of data can also bring in relevant clinical rules or practice guidelines to enable effective treatment decisions. All of these aspects are active areas of study, with researchers looking for the best way to get the right information to the right person at the right time, and to present it in the most helpful way. As a psychologist in a group of mostly technical and clinical folks, what I didn't see was a lot of attention to patients' psychological characteristics such as daily ups and downs in mood and motivation. That's an area where I have done a good bit of research, and where I think that current efforts to help patients use data for self-management could be enhanced.
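The "knockoff" idea can be illustrated with a toy example: compare each feature's association with the outcome against a shuffled copy of itself, and distrust any feature that doesn't clearly beat its own knockoff. This is a simplified permutation version for illustration, not the formal knockoff filter from the statistics literature, and the variable names and simulated data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated longitudinal dataset: daily step counts truly predict
# next-day fatigue, while the "sleep_app_score" column is pure noise.
n = 500
steps = rng.normal(size=n)
sleep_app_score = rng.normal(size=n)
fatigue = 2.0 * steps + rng.normal(scale=0.5, size=n)

def knockoff_check(x, y, rng):
    """Compare a feature's association with y against a shuffled
    'knockoff' copy of itself. If the real feature does not clearly
    beat its knockoff, treat the association as a likely false positive.
    """
    knock = rng.permutation(x)           # shuffling destroys any real signal
    real = abs(np.corrcoef(x, y)[0, 1])
    fake = abs(np.corrcoef(knock, y)[0, 1])
    return real, fake, real > 2 * fake   # crude margin for claiming a signal

for name, x in [("steps", steps), ("sleep_app_score", sleep_app_score)]:
    real, fake, keep = knockoff_check(x, fatigue, rng)
    print(f"{name}: |r|={real:.2f}, knockoff |r|={fake:.2f}, keep={keep}")
```

In a very large dataset the noise feature will occasionally beat its knockoff by chance, which is why the formal methods control the expected rate of such false discoveries rather than eliminating them outright.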
The Smart Health conference also included a fair amount of content about using robots in health care. Voice recognition tools like Siri or Alexa have already made it easier for older adults to manage certain tasks, like checking messages, communicating with health care professionals, or managing light and temperature in their home. Other currently active uses of robots involve the delivery of dangerous treatments like radiation to patients with cancer. Some of the most interesting projects in robotics actually involve the most boring daily activities, such as folding laundry, preparing a sandwich, or making a bed. These activities all require a complex mix of broad movements and fine motor control, and have proven to be particularly difficult for robots to master. And in a marriage of sensors and robotics, one team presented their work using muscle state sensors to allow patients with movement problems to control an exoskeleton with their own muscle signals. Because most of these applications are designed for use by older adults, there are important access considerations that start with whether someone's home has adequate Wi-Fi, as well as physical considerations like fine motor skills for typing, vision to see information on a screen, or mobility to interact with a user interface. Older adults might need more tech support, with some evidence that peer support is particularly helpful. And the potential to share data with family members and caregivers is helpful on one hand but also creates new privacy issues for older adults on the other. These barriers compound existing health disparities, which are greater for people who are older and who are members of minority groups. And finally, the cost of many new medical tools is a major issue, with new materials needed to balance consumers' need for both greater durability and lower cost.
Finally, the conference had a strong focus on artificial intelligence, often as a technology that supports and underpins some of the other approaches being studied. In the realm of sensor data, researchers are using AI and machine learning methods to integrate data across multiple input streams in order to draw valid clinical conclusions and share those with clinicians through the electronic health record. AI is also being used to detect rare adverse events and look for warning signs that might not be picked up in a standard clinical process. Again, the risk of bias is an important consideration for AI. Another contemporary area of research focuses on human interactions with AI, whether that relates to selecting a gender for AI agents or creating an AI chatbot that avoids the "uncanny valley" where something is almost human but not quite enough. There are some indications that a less human-seeming AI -- a robot that acts like a robot -- is actually preferable to many end users.
One last issue that cuts across all of these technological areas is the regulatory environment, and the conference included a special presentation by regulators at the FDA. You might be surprised to learn that the FDA does not currently regulate most devices or decision-making tools in the health care space; instead, these interventions tend to play by the rules of technology start-ups and innovators, and might not even be considered "health care" from the perspective of the HIPAA privacy law. FDA is particularly interested right now in diagnostic decision-making algorithms, for example the possibility of bias in AI tools to interpret radiology scans. It seems more likely that they will regulate predictive computational models (machine learning tools) than individual sensors or robots. But the more risky a device failure would be, or the more consequential the decisions made from the data, the more likely it seems that FDA will choose to regulate any of these new technologies. It's quite clear that they do have the legal authority to step in wherever they feel it's necessary. At the moment they are simply choosing to focus their regulatory efforts in specific circumscribed areas. It's not quite the Wild West out there, but there is definitely a wide-open field for new uses of technology in health care.