Lately, I’ve been interested in applying dynamical systems models to the study of health behavior change interventions. This will be a short series of posts working through a basic understanding of the approach, along with some thoughts on what might be done with it. This post starts with some basic motivation: what kinds of questions am I trying to answer?
Suppose we are designing some kind of intervention that’s supposed to help people get more physical activity, improve their diet, or help with some other health behavior. The intervention could be lots of different things: regular visits with a counselor or a peer support group, self-management methods like a food diary, a smartphone-based mHealth application, a web-based virtual counselor, some combination of them, or something else entirely.
Changing health behavior takes time, if the changes are to last. Most well-designed interventions call for a relatively long period of participation: weeks at the least, possibly years. During this time, a participant might have many interactions with the intervention, which, depending on its design, might be counseling visits, visits to a website, or daily use of a smartphone application.
For many of these kinds of interventions it is possible to collect data for each participant at each interaction. This data will typically include some ongoing assessment of the target behavior. In a physical activity intervention, for example, participants might report how many minutes of planned activity they have performed, or activity might be reported automatically by a pedometer or other device. It may also include information about the interaction itself: most basically, whether a participant actually did interact (attended their session, used the application, etc.). This is all especially true for automated interventions, where a rich stream of data can often be collected automatically, as a side effect of the system keeping its own logs.
In academic settings, where an intervention might be studied as a research project or in a clinical trial, it’s been typical (historically, at least) to focus on endpoints. If participants have a 6-month physical activity intervention, researchers might assess their activity and fitness at the start of the study, at 6 months (after the intervention), and perhaps at a one-year follow-up as well. With an appropriate control group who did not participate in the intervention (but might have had some substitute), this can answer questions like whether the intervention changes behavior for the average participant, and whether those changes remain at some later time or disappear.
These can be useful questions, but I don’t feel they are the most interesting questions to ask. These questions come from a confirmatory viewpoint, and would make sense to me if I’d designed a behavioral health intervention and was planning to package it up and deploy it — sell it commercially, or have it prescribed by healthcare providers — as unchanged as possible, and I wanted to prove how effective it is. But this is very rare, in my experience. Interventions are complicated: the simplest automated intervention has dozens of subtle design decisions, both in choice of theoretical approach and in more “surface” issues of interface design, and any in-person intervention involves an immense number of decisions by a trained counselor (or whoever does the intervention), which are hard or impossible to completely capture and standardize. Interventions are constantly being redesigned and improved, either through new versions of an automated intervention, or through human counselors gaining training and experience.
So, what questions are more interesting, and how can rich streams of data from an intervention be used to try and answer them?
When during an intervention does a participant’s behavior change? Is there a gradual change over time? Is there an abrupt change in response to some part of the intervention, or some external event? Is behavior change stable? Does a participant’s behavior revert back to their previous behavior over time?
Do different participants respond differently to interventions? If, as in the last question, we see that behavior changes in response to some event, does every participant respond the same way? If behavior reverts over time, does everyone revert to the same place, or at the same rate?
Can we describe a model of behavior change in an intervention? That is, are there things that we can measure that predict (and, we might argue, “cause”) behavior change? For example, if an intervention asks participants about their self-esteem, does this predict later changes in behavior? And, following the last two questions, how fast and long-lasting are those changes, and do all participants respond in the same way?
These are all exploratory questions, not confirmatory. They’re not directly useful if what you want to do is know how well an intervention works, but may be very useful things to ask if your goal is to design new and better interventions.
Recently, some published work has begun to appear that uses dynamical systems as a mathematical (statistical) model to describe the process of a behavioral intervention (for example Hekler et al., 2013). In this series of posts, I’ll try to give my own brief (and very incomplete) introduction to these approaches, look at why they may be useful for these kinds of research questions, and give some idea of what I think the next steps in the field might be.
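To give a flavor of what a dynamical systems model of behavior might look like, here is a minimal toy sketch. It assumes a first-order linear difference equation, a common starting point in system identification, where behavior partly persists from one time step to the next and partly responds to an intervention "input." All parameter values and variable names here are made up for illustration; they are not drawn from Hekler et al. or any real dataset.

```python
import numpy as np

# Toy first-order linear model of a behavioral response:
#   y[t+1] = a * y[t] + b * u[t]
# y: behavior above baseline (e.g., extra minutes of activity per week)
# u: intervention input (e.g., 1 while counseling sessions are active)
# a: retention of previous behavior (0 < a < 1 implies reversion when u stops)
# b: gain from the intervention input
# All parameter values are invented for illustration only.

def simulate(a, b, u, y0=0.0):
    """Simulate the difference equation for a given input sequence u."""
    y = [y0]
    for t in range(len(u)):
        y.append(a * y[-1] + b * u[t])
    return np.array(y)

# 12 weeks of intervention, then 12 weeks of follow-up with no input.
u = np.array([1.0] * 12 + [0.0] * 12)
y = simulate(a=0.8, b=10.0, u=u)

# Behavior rises toward a steady state of b / (1 - a) = 50 while the
# intervention is active, then decays back toward baseline once it ends.
print(y.round(1))
```

Even this simple model can express several of the questions above: the parameter a captures how quickly behavior reverts after the intervention ends, b captures how strongly a participant responds, and fitting different values per participant would let us ask whether people differ on either.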
Hekler, E. B., Buman, M. P., Poothakandiyil, N., Rivera, D. E., Dzierzewski, J. M., Morgan, A. A., … Giacobbi, P. R. (2013). Exploring behavioral markers of long-term physical activity maintenance: A case study of system identification modeling within a behavioral intervention. Health Education & Behavior, 40(1 Suppl), 51S–62S. doi:10.1177/1090198113496787