The idea of tailored messaging sounds great: Just deliver the right message, to the right person, at the right time, and in the right context (Kiesler, 1966). But implementation can be challenging. What is the "right message," for example, and how does it vary from person to person, or for a single person from one point in time to another? There are various approaches to creating tailored messages, and my research team has tried several of them in previous research.
A first pass at message tailoring might involve personalization. For instance, if you pick a message about exercise that was written specifically for 50-year-old White men with a family history of heart disease, I'm likely to pay attention because it speaks to my own situation and my personal risk factors. A generic message like "exercise is good for your heart" is less persuasive because it's not specifically about me, even though it is equally true. One common type of message tailoring, therefore, is based on a recipient's demographic characteristics. A 2007 meta-analysis by Seth Noar and colleagues found that demographic tailoring made messages more effective in changing people's behavior.
Although demographic tailoring works, it might be nothing more than a placebo effect. Some of the same benefit can be achieved simply by telling people "this message was chosen for people like you," and then delivering exactly the same message to everyone after that preface. This is sometimes called a pseudotailoring effect because it creates the appearance of customization but does not actually customize the content. In other words, people pay more attention to messages that they believe have been selected especially for them, but you don't actually have to give different messages to different groups in order to reap the benefits of that predisposition. Pseudotailoring is an important methodological confound that must be considered in studies of tailored-messaging interventions, especially when tailored messages are compared to a control group in which people receive messages that were clearly not selected specifically for them. In that situation, it's hard to differentiate the effects of actual message customization from the pseudotailoring effect of people simply believing that their message was customized.
Noar et al.'s meta-analysis also looked at the benefits of tailoring messages on theory-related variables -- in other words, on people's psychological characteristics instead of their demographics. Research can suggest theoretical variables that affect people's health behaviors, either from standard correlational studies or (better, in my view) from longitudinal within-person data. Identifying relevant predictors is only the first step, however. The funnel diagram at the top of this post shows that these "hunches" from predictive research must be translated into actual messages. Knowing that people take less of their medication when they are less motivated for treatment doesn't tell you exactly what to say to the less-motivated individuals in your study. To create actual messages (the middle stage of the funnel), investigators use a mix of creativity, intuition, and (rarely) concrete research showing that one type of message works better than another. A notable example of such research is the stages-of-change model, which has been used successfully to match messages to people at specific levels of readiness to modify their behavior. In an early study, I used another concept -- people's theory of problem causation -- to select specific intervention messages for in-person delivery. The approach didn't pan out exactly as planned, but it was a good example of theory-based tailoring.
Even though Noar et al. found that theory-based message tailoring was effective, their results also suggest that no one theory is better than any other. This is a typical finding in psychotherapy research, where very diverse treatment approaches tend to produce the same level of benefit. Some common theoretical constructs used to design tailored messages include (a) a person's level of readiness or motivation to change, (b) their beliefs about a health problem like diabetes, (c) their beliefs about a behavior, like exercise or taking medication, that is intended to address the problem, (d) their current stress level, and (e) their confidence in their ability to take action. My daily-process studies of people with HIV suggest that (f) mood and (g) perceived social support might also be useful variables to consider. Unfortunately, it's probably not possible to tailor messages on every potentially important variable: Noar et al. found that roughly 4 to 6 message characteristics were the most the average person could respond to, and beyond that, further tailoring did no good. I wrote here about a case study in which we gave a patient tailored recommendations to manage fatigue based on data about his sleep, physical activity, and stress management; recommendations based on those three theory-relevant variables led to positive results. Noar et al. also found that the effects of theory-based tailoring were additive to those of demographic tailoring, so an optimal tailored-messaging intervention should probably use some of each.
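As a concrete illustration of that kind of rule-based tailoring (not the actual logic from the case study), the mapping from a day's measurements to a recommendation can be as simple as a few thresholded rules. The variable names, thresholds, and message text below are hypothetical placeholders.

```python
# Hypothetical rule-based tailoring on three theory-relevant variables
# (sleep, physical activity, and stress). Thresholds and message text
# are illustrative placeholders, not the logic used in the case study.

def fatigue_recommendation(sleep_hours, active_minutes, stress_0_to_10):
    """Return one tailored fatigue-management recommendation for a day of data."""
    if sleep_hours < 6:
        return "Your sleep was short last night. Try winding down 30 minutes earlier tonight."
    if stress_0_to_10 >= 7:
        return "Stress is running high. A brief relaxation exercise may help your energy."
    if active_minutes < 20:
        return "A short walk today may reduce fatigue more than resting will."
    return "You're on track -- keep up your current sleep, activity, and stress routine."

print(fatigue_recommendation(sleep_hours=5.5, active_minutes=40, stress_0_to_10=3))
```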
One other way in which messages can be tailored is through variations in the words used to express message content (often called a "message framing" effect). For instance, a message could emphasize the risks of being inactive or the health benefits of exercising, even though the underlying information -- "exercise is good for you" -- is the same. I used a message-framing approach to tailoring in a text-messaging intervention to improve HIV medication adherence. That study had only 10 possible messages, each addressing a different reason people might not take their medication, but each of the 10 messages could be framed in 32 different ways, with specific wording elements selected based on people's responses to a daily survey. The idea was to change how we said things, rather than what we said. Here's a sample message from the article:
(1) IMPORTANT MESSAGE: A vacation from meds is no vacation for you! (2) It’s not easy to take medication every day. (3) Taking all of your HIV medication (4) is essential to prevent getting sicker. (5) People are counting on you to take the best care of your health that you can. (also 2) Think about what makes it hard for you to take medication right now, and tell your health care providers.
Element (1) was tailored based on stress, the two segments labeled (2) for self-efficacy, (3) for the participant's current coping strategies, (4) for mood, and (5) for current level of social support. The real trick to this messaging intervention was to maintain good sentence structure and grammar while producing a complex combination of elements, any one of which could be swapped out based on different survey results without interfering with any of the others.
Now here's the same message with each of those features tailored in exactly the opposite way -- low rather than high stress, high instead of low self-efficacy, etc. The goal was for this to be exactly the same information -- HIV medications work best when taken consistently -- but communicated in a very different way.
(1) HIV medications work best when taken at least 95% of the time. (2) You can take control of your health. (3) Skipping doses or taking a break from HIV medication (4) can make your HIV worse. (5) Taking care of your health is something very important that you can do for yourself. (also 2) Keep taking your medication! Don’t stop or skip without talking to your health care providers first.
Try trading out any element of the second message with any similarly numbered element from the first: it will still be readable. That was the real innovation in this particular study, and it allowed us to match messages to participants' needs on several different theory-related variables at the same time. Unfortunately, in this study there was no benefit of either message frame, and no difference based on whether we gave someone a message that was exactly matched to their current mental state or exactly mismatched to it! Instead, the simple novelty of receiving tailored messages seemed to help. We did produce improvements in adherence, but it seems to have been a pseudotailoring effect (or simply a Hawthorne effect) rather than a genuine tailoring effect.
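To make the mechanics concrete, here is a minimal sketch of how that kind of element-swapping assembly might work. The element text is abridged from the two examples above, but the slot names, state labels, and survey-to-state mapping are simplifications for illustration, not the actual intervention code (the post specifies the tailoring direction only for stress and self-efficacy).

```python
# Sketch of element-based message framing: each numbered slot has two
# framing variants chosen by a survey-derived state, and the slots are
# concatenated in a fixed order so that any one slot can be swapped
# independently of the others. Element text is abridged from the examples
# above; state labels other than stress and self-efficacy are guesses.

ELEMENTS = {
    "stress": {                      # element (1)
        "high": "IMPORTANT MESSAGE: A vacation from meds is no vacation for you!",
        "low": "HIV medications work best when taken at least 95% of the time.",
    },
    "self_efficacy_open": {          # first element (2)
        "low": "It's not easy to take medication every day.",
        "high": "You can take control of your health.",
    },
    "coping": {                      # element (3)
        "low": "Taking all of your HIV medication",
        "high": "Skipping doses or taking a break from HIV medication",
    },
    "mood": {                        # element (4)
        "low": "is essential to prevent getting sicker.",
        "high": "can make your HIV worse.",
    },
    "social_support": {              # element (5)
        "low": "People are counting on you to take the best care of your health that you can.",
        "high": "Taking care of your health is something very important that you can do for yourself.",
    },
    "self_efficacy_close": {         # second element (2)
        "low": ("Think about what makes it hard for you to take medication "
                "right now, and tell your health care providers."),
        "high": ("Keep taking your medication! Don't stop or skip without "
                 "talking to your health care providers first."),
    },
}

SLOT_ORDER = ["stress", "self_efficacy_open", "coping", "mood",
              "social_support", "self_efficacy_close"]

def assemble_message(survey_states):
    """Build one message by picking each slot's variant from today's survey states."""
    return " ".join(ELEMENTS[slot][survey_states[slot]] for slot in SLOT_ORDER)

# Example: reproduces the first sample message (high stress, low self-efficacy, etc.).
print(assemble_message({
    "stress": "high", "self_efficacy_open": "low", "coping": "low",
    "mood": "low", "social_support": "low", "self_efficacy_close": "low",
}))
```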
Finally, technology allows messages to be personalized on factors other than their content. Another meta-analysis, by Katharine Head, Seth Noar, and colleagues (2013), showed that Internet-based interventions were more effective when messages were delivered frequently at first and then spaced out over time. There was also a benefit to giving participants the ability to control the frequency of message delivery themselves. Personalization is a well-known way to increase people's engagement with any activity or source of information, so simply giving people more virtual buttons and knobs to adjust their own experience can make a tailored-messaging intervention more effective. Rich content is another commonly suggested strategy for increasing engagement, and Internet-based interventions can easily include non-text components like pictures, sound, or video. In their meta-analysis of tailored messages, however, the researchers didn't find any strong advantage of these components over text alone.
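One simple way to implement that kind of front-loaded schedule with a user-adjustable cap is to let the gap between messages grow over time. The starting gap, growth rate, and cap values below are arbitrary illustrations, not parameters reported in the meta-analysis.

```python
from datetime import datetime, timedelta

def message_schedule(start, n_messages, first_gap_days=1.0, growth=1.3,
                     user_max_per_week=None):
    """Front-loaded delivery schedule: messages start out frequent and are
    spaced farther apart over time. If the participant caps how many messages
    they want per week, the gap never shrinks below that. All values are
    illustrative."""
    times, gap, t = [], first_gap_days, start
    for _ in range(n_messages):
        times.append(t)
        if user_max_per_week:
            gap = max(gap, 7.0 / user_max_per_week)  # respect the participant's cap
        t = t + timedelta(days=gap)
        gap *= growth  # each successive gap is a bit longer than the last
    return times

# Example: six messages, with the participant capping delivery at 3 per week.
for send_time in message_schedule(datetime(2024, 1, 1), 6, user_max_per_week=3):
    print(send_time.date())
```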
In our current study of exercise for people with HIV, we are collecting data on the effectiveness of individual messages for specific people at specific times. As I described in my last post about that study, we initially created messages based on a combination of hunches, theory, and prior data (the left area of the figure at the top of this page), and then refined them with feedback from experts and patient groups (the middle area). Although an algorithm generates the messages automatically, it's a fairly simple one: it tailors on a single theory-relevant variable, the participant's primary barrier to exercise on that particular day, as determined from their responses to a daily survey. The system then uses a random component to generate messages with a range of other theoretical characteristics. We included the random feature specifically because of our lack of success with detailed tailoring of message wording in our prior study, and to capitalize on the apparent benefits of novelty or pseudotailoring. We also included some extra characteristics, like website links or personalized data, to go along with specific messages, so we can examine those features' contributions to message efficacy in addition to the message content. The hope is that specific messages will turn out to work better for specific types of people at specific points in time, and that the algorithm can be refined to incorporate the most effective characteristics (the right-hand side of the funnel) in future iterations of our tailored-messaging software.
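In spirit, the selection logic looks something like the sketch below -- pick a message targeting the day's primary barrier, then randomize the remaining characteristics -- although the barrier names, message text, and extra-feature flags shown here are hypothetical placeholders rather than our actual message library.

```python
import random

# Hypothetical message library keyed by the participant's primary barrier to
# exercise that day. Each barrier has several variants that differ on other
# characteristics (framing, a website link, personalized data, etc.).
# Barrier names and message text are placeholders, not our actual library.
MESSAGE_LIBRARY = {
    "fatigue": [
        {"text": "Even a 10-minute walk can boost your energy today.",
         "frame": "gain", "link": False},
        {"text": "Skipping activity when you're tired can make fatigue worse.",
         "frame": "loss", "link": False},
        {"text": "Here's your step count for the week -- a short walk adds to it.",
         "frame": "gain", "link": True},
    ],
    "lack_of_time": [
        {"text": "Three 10-minute bouts of activity count as much as one 30-minute session.",
         "frame": "gain", "link": False},
        {"text": "Days without any activity add up quickly.",
         "frame": "loss", "link": False},
    ],
}

def select_message(primary_barrier, rng=random):
    """Tailor on one theory-relevant variable (today's primary barrier from the
    daily survey), then choose randomly among variants that differ on other
    theoretical characteristics."""
    variants = MESSAGE_LIBRARY.get(primary_barrier, sum(MESSAGE_LIBRARY.values(), []))
    return rng.choice(variants)

print(select_message("fatigue", random.Random(42)))
```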
Tailored-messaging interventions can be quite complex, as the text examples above show, and even a simple selection algorithm like the one in our current exercise study requires real technical work. Our own uneven history of success with tailored messaging illustrates the difficulty of designing and testing this type of intervention. Pattern-matching effects have been particularly hard to isolate using traditional randomized controlled trial methods, because so many different variables may go into a single tailored message -- there's no obvious "intervention" versus "control" comparison for statistical testing. However, complex matching problems are increasingly solvable with modern machine-learning methods, as long as we have some outcome measure to tell us which messages landed effectively and which ones fell flat. We're collecting that type of data in our current study, and it's what will ultimately allow for a sophisticated algorithm that selects the right message for the right person at the right time.
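As one illustration of what that could look like, a simple Thompson-sampling bandit can keep a running estimate of each message's success rate (for example, whether the participant exercised after receiving it) and gradually favor the messages that land. This is a generic sketch, not our study's analysis plan; a real system would also condition on person- and time-specific context.

```python
import random
from collections import defaultdict

class MessageBandit:
    """Thompson sampling over candidate messages using binary outcome feedback
    (e.g., did the participant exercise after receiving the message?). A generic
    sketch; a real system would also condition on person- and time-specific
    context rather than pooling everyone together."""

    def __init__(self, message_ids):
        self.message_ids = list(message_ids)
        # Beta(1, 1) priors: one pseudo-success and one pseudo-failure each.
        self.successes = defaultdict(lambda: 1)
        self.failures = defaultdict(lambda: 1)

    def choose(self, rng=random):
        # Sample a plausible success rate for each message; send the best draw.
        draws = {m: rng.betavariate(self.successes[m], self.failures[m])
                 for m in self.message_ids}
        return max(draws, key=draws.get)

    def update(self, message_id, worked):
        # Feed back whether the message "landed" by whatever outcome measure we chose.
        if worked:
            self.successes[message_id] += 1
        else:
            self.failures[message_id] += 1

bandit = MessageBandit(["fatigue_gain_frame", "fatigue_loss_frame", "time_gain_frame"])
message_id = bandit.choose()
bandit.update(message_id, worked=True)
```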