One of the most exciting things about this historical moment for studying our two minds is the widespread availability of cheap, accessible, consumer-grade sensor devices. Sensors provide data on physiological processes that (a) occur outside people's conscious awareness, (b) happen all the time in the context of everyday life, and (c) are probably related to mental states or behaviors. These are the ideal characteristics for a measure of the Intuitive System. In the past, many devices were available only in artificial laboratory settings, for example in a formal biofeedback intervention or in physiological research. Now many sensors can be worn all the time and send data back to researchers via wireless upload. Time-stamped data can then be matched with information from other sources like environmental sensors, surveys, or participants' demographic and clinical characteristics (age, disease history, medications, race or ethnicity, gender, etc.). We recently updated our measures page with notes on various sensor devices that we have personally tested for potential use in our research programs.
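As a minimal sketch of this kind of time-stamp matching (in Python, with invented times and values; real sensor exports would be loaded from files or an API), each survey response can be paired with the most recent preceding sensor reading:

```python
from bisect import bisect_right
from datetime import datetime

# Hypothetical time-stamped readings from a wearable heart-rate sensor
sensor_times = [datetime(2018, 4, 1, 9, 0), datetime(2018, 4, 1, 9, 5),
                datetime(2018, 4, 1, 9, 10)]
heart_rates = [72, 88, 75]

def latest_reading_before(t):
    """Return the most recent sensor reading at or before time t."""
    i = bisect_right(sensor_times, t) - 1
    return heart_rates[i] if i >= 0 else None

# Match each momentary survey response to the preceding sensor reading
survey_times = [datetime(2018, 4, 1, 9, 4), datetime(2018, 4, 1, 9, 11)]
matched = [latest_reading_before(t) for t in survey_times]
print(matched)  # → [72, 75]
```

The same nearest-earlier-match logic extends to environmental sensor logs or any other time-stamped source, as long as all clocks are synchronized.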
Dr. Blaine Reeder presented a talk on choosing sensor measures as part of our symposium this April at the Western Institute of Nursing Research (WIN) conference in San Diego, CA. He suggested selecting sensors for research based on a device's (a) functionality, including whether it has all advertised features; (b) usability for everyday wear by participants; (c) usefulness for research, based on factors like cost, availability, data access, and vendor support; and (d) user privacy. Dr. Reeder suggested that the first three considerations usually lead to the selection of a device, at which point privacy concerns tend to arise as a potential deal-killer. For the criteria of functionality, usability, and research usefulness, both hardware and software considerations come into play.
Functionality is whether a device works as advertised; it can be undermined by problems like failure to record data, loss of data due to inadequate storage, loss of power during routine use, or inability to capture data from some types of participants. We have encountered many issues of this type. For instance, the HeartMath heart rate sensor captures data only while a user is actively attempting to focus or meditate, making it less useful for ambulatory monitoring. The Spire Stone captures data on breathing but stores only about 6 hours' worth of information, so if the appropriate app is not open on the user's phone, any data older than 6 hours will be lost. The Apple Watch captures a variety of data but holds enough power for only a day or less, requiring the user to charge it frequently. And the Muse headband captures brainwave data, but only while the user is sitting relatively still, only while the appropriate phone app is running, and only for users with a minimum head circumference, which has the effect of excluding children. Any of these problems can make a device unusable for some studies, although it may still be appropriate for others.
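The storage problem above comes down to simple arithmetic: once an on-device buffer fills, older readings are overwritten, so anything beyond the buffer window at sync time is gone. A toy sketch (the 6-hour figure matches the Spire Stone example; the sync intervals are hypothetical):

```python
# Hours of buffered data a device can hold before overwriting old readings
BUFFER_HOURS = 6

def hours_lost(hours_since_last_sync: float) -> float:
    """Hours of readings overwritten before the phone app next syncs."""
    return max(0.0, hours_since_last_sync - BUFFER_HOURS)

print(hours_lost(4))   # → 0.0 (synced within the buffer window)
print(hours_lost(10))  # → 4.0 (4 hours of readings lost)
```

When planning a protocol around such a device, the practical implication is that participants must sync at least as often as the buffer window, or the study must tolerate gaps.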
Usability includes end-user considerations like whether participants find the device easy to interact with, whether the hardware is sturdy enough to endure normal wear and tear, and whether the software uses too much data or takes up too much space on participants' phones. Although our research team had good experiences with FitBit's hardware, we found that the software took a long time to download and was too large for one participant's older-model phone. The default wristband was also too small for men with thick wrists, which caused us to lose some data. Pillsy pill bottle sensors were easy to use from a software perspective, but several users found that the plastic bottles broke when not handled carefully. And some users were not patient enough to let the Muse brainwave headband run through its required calibration cycle, so they never reached the stage of recording data. Users' frustration, impatience, or excessive use of force can be a major barrier to successful data collection if these problems happen often enough, and frustrated users are more likely to drop out of your study entirely.
Usefulness for research, in contrast, concerns the level of burden on the investigator. Dr. Reeder gave the example of "code rot," in which an application becomes less useful over time because it depends on third-party code that hasn't been maintained. An example from our experience was the Pebble Time watch, which was promising as an activity sensor but became less useful after its manufacturer was purchased by FitBit and the product was no longer supported. We had better luck with FitBit's own sensor device, although the vendor's API stopped working after a few months and we needed to find a third-party option to access raw data instead. Another type of problem is lack of data access, as in the example of the Apple Watch. The product has good data-gathering capabilities, but for most users the information can only be viewed on the user's smartphone in aggregate form; to access raw data, one needs to be an application developer using Apple's HealthKit software. The use of aggregate metrics reveals a third problem, which is that these metrics often rely on proprietary algorithms -- for instance, Apple's resting heart rate calculation or FitBit's data on sleep stages. Because the underlying formulas used to calculate these results are not public and are generally not tested against gold-standard measures, researchers can't be completely sure what the data mean. Finally, cost and the ability to obtain sensor devices are important concerns, especially for studies that require a large number of units so that each participant can have one. Devices manufactured overseas, like MEMS pill-bottle sensors for adherence, take longer to obtain and may involve shipping charges or delays due to U.S. Customs (for more about our experiences using MEMS, see this article). When devices are expensive, researchers need to keep close track of them and ensure that they are returned, while cheap devices may be given to participants to keep.
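To see why undisclosed formulas matter, consider a toy sketch (in Python; both definitions are invented for illustration and are not any vendor's actual algorithm) in which two plausible definitions of "resting heart rate" yield different values from the same raw readings:

```python
import statistics

# Hypothetical minute-level heart-rate readings (beats per minute)
readings = [62, 60, 95, 61, 59, 90, 63, 58]

# Two invented "resting heart rate" definitions -- neither is a real
# vendor algorithm; the point is that they disagree on the same data.
overall_mean = statistics.mean(readings)            # mean of all readings
low_window_mean = statistics.mean(sorted(readings)[:4])  # mean of 4 lowest

print(overall_mean)    # → 68.5
print(low_window_mean) # → 59.5
```

A nine-beat difference from the same raw stream shows why, without the vendor's formula, two devices' "resting heart rate" numbers cannot safely be compared or validated against gold-standard measures.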
Finally, privacy concerns may be more or less important depending on the type of information collected. "Privacy" depends on user perceptions and is distinct from "security," which can be more objectively quantified based on the features a system provides to keep data secure. Although clinicians and patients share an understanding of the sensitivity of some types of information, like mental health diagnoses or sexually transmitted infections, other data like weight or smoking status may be seen as very sensitive by patients but simply descriptive by clinicians. Symptom information may or may not be seen as sensitive. In a recent study by Dr. Reeder and Dr. Kathy Jankowski, patients reported few concerns about sharing their activity data online, but they were more concerned about geolocation data, worrying for instance that a burglar could use it to tell whether or not they were at home. The more sensitive the data, the more concerned patients are likely to be about privacy, and these concerns may not diminish simply because data security features are available. If privacy concerns are high enough, some patients may be unwilling to participate in a sensor study regardless of what safeguards are in place. For other types of information, security features may not be a consideration because patients' privacy worries about that type of data are low. The level of privacy concern will also depend on whether patients are identified by their real names or otherwise linked to their data; data collected under an anonymous username may be seen as more acceptable.
As with any research methodology, sensor-based data collection involves trade-offs. Some sensors may be easier for the participant to use, while others may provide higher-quality data to the researcher. Some sensors may provide excellent functionality but carry unacceptable privacy risks. Researchers should be careful to assess new technologies from the patient's perspective -- for instance, Drs. Reeder and Jankowski's recent study found that older women preferred sensor devices to monitor their own activity levels, but suggested environment-based smart home sensors for other older adults! The selection of a specific device depends on the needs of the study, the characteristics of the user including their level of technical sophistication, and the type of data to be collected. There is no perfect solution, but each new generation of technology brings more options to consider.