Abstract
According to the dual systems perspective, risk taking peaks during adolescence because activation of an early-maturing socioemotional-incentive processing system amplifies adolescents’ affinity for exciting, pleasurable, and novel activities at a time when a still immature cognitive control system is not yet strong enough to consistently restrain potentially hazardous impulses. We review evidence from both the psychological and neuroimaging literatures that has emerged since 2008, when this perspective was originally articulated. Although there are occasional exceptions to the general trends, studies show that, as predicted, psychological and neural manifestations of reward sensitivity increase between childhood and adolescence, peak sometime during the late teen years, and decline thereafter, whereas psychological and neural reflections of better cognitive control increase gradually and linearly throughout adolescence and into the early 20s. While some forms of real-world risky behavior peak at a later age than predicted, this likely reflects differential opportunities for risk-taking in late adolescence and young adulthood, rather than neurobiological differences that make this age group more reckless. Although it is admittedly an oversimplification, as a heuristic device, the dual systems model provides a far more accurate account of adolescent risk taking than prior models that have attributed adolescent recklessness to cognitive deficiencies.
Introduction
Social scientists and casual observers of human development have long noted that the transitional period between childhood and adulthood is a time of heightened risk-taking. Indeed, despite the relative absence of illness and disease during this period, rates of morbidity and mortality increase substantially in adolescence, largely due to risk taking. The question of why adolescents seem predisposed toward recklessness is age-old; however, work in the field of developmental psychology, and more recently, developmental neuroscience, has provided new insights into the phenomenon.
For many years, psychologists attempted to explain adolescent recklessness as a consequence of cognitive deficiencies in young people's thinking, including irrationality, poor information processing, and ignorance about risk. As we have noted in previous publications (e.g., Steinberg, 2008), these accounts have been largely undermined by the available evidence. Generally speaking, by age 15 or so, adolescents perform as well as adults on tasks measuring logical reasoning, information processing, and risk perception.
1. The emergence of dual systems models
About a decade ago, the budding field of developmental cognitive neuroscience began to provide insight into how patterns of brain development might explain aspects of adolescent decision-making (see, e.g. Dahl, 2004). In 2008, our lab at Temple University (Steinberg, 2008, Steinberg et al., 2008) and Casey's lab at Cornell (Casey et al., 2008) simultaneously proposed similar variations of a “dual systems” account of adolescent decision-making. This perspective attributes adolescents’ vulnerability to risky, often reckless, behavior in part to the divergent developmental courses of two brain systems: one (localized in the striatum, as well as the medial and orbital prefrontal cortices) that increases motivation to pursue rewards and one (encompassing the lateral prefrontal, lateral parietal, and anterior cingulate cortices) that restrains imprudent impulses (see e.g., Casey et al., 2008, Duckworth and Steinberg, 2015, Evans and Stanovich, 2013, Luna and Wright, 2016, Metcalfe and Mischel, 1999, Steinberg, 2008). Specifically, it proposes that risk-taking behaviors peak during adolescence because activation of an early-maturing incentive-processing system (the “socioemotional system”) amplifies adolescents’ affinity for exciting, novel, and risky activities, while a countervailing, but slower to mature, “cognitive control” system is not yet far enough along in its development to consistently restrain potentially hazardous impulses.
Several variations on this dual systems model have been proposed. The version that guides our work (Steinberg, 2008) is very similar to that proposed by Casey et al. (2008). Both conceive of a slowly developing cognitive control system, which continues to mature through late adolescence. However, whereas we propose that the socioemotional system follows an inverted U-shaped developmental course, such that responsiveness to reward increases in early adolescence and declines in early adulthood, Casey et al. have portrayed the socioemotional system as increasing in arousability until mid-adolescence, at which point it reaches a plateau, remaining at this level into adulthood. Furthermore, our version of the dual systems model posits that the decline in socioemotional arousability occurs independently of the development of the control system, whereas Casey et al.’s model proposes that the strengthening of the cognitive control system causes the socioemotional system to become less arousable. More recently, Luna and Wright (2016) have proposed another variation on the dual systems model (the “driven dual systems” model), which, like our model, hypothesizes an inverted U-shaped trajectory of socioemotional arousability, but, unlike our model, hypothesizes a trajectory of cognitive control that plateaus in mid-adolescence rather than continuing to increase into the 20s, as suggested by us and by Casey et al. In a similar vein, Luciana and Collins (2012) endorse a model that emphasizes the role of a hyperactive socioemotional system (“subcortical limbic-striatal systems” in their terminology) undermining the regulatory ability of the cognitive control system (the “prefrontal executive system”), resulting in greater risk-taking during adolescence. Like Luna and Wright, Luciana and Collins argue that the development of cognitive control is complete by mid-adolescence, as evidenced by adolescents’ adult-like performance on non-affective measures of cognitive capacity. Fig. 1 illustrates the similarities and differences between these versions of the dual systems model.
Fig. 1. Alternative theoretical models of the development of the socioemotional (reward processing) and cognitive control systems from about age 10 to age 25.
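Although these models are verbal theories rather than formal ones, their contrasting predictions about developmental shape can be made concrete with simple illustrative curves. The following Python sketch uses entirely hypothetical functional forms and parameter values (a Gaussian for the inverted-U trajectory, a logistic for the plateauing one, clamped linear ramps for cognitive control), chosen only to reproduce the qualitative shapes in Fig. 1, not to fit any data.

```python
import math

# Illustrative (not empirically fitted) curves for the trajectories in Fig. 1.
# All functional forms and parameters are hypothetical choices for exposition.

def socioemotional_inverted_u(age, peak=16.0, width=4.0):
    """Steinberg / Luna & Wright: arousability rises, peaks in mid-to-late
    adolescence, then declines (inverted U)."""
    return math.exp(-((age - peak) ** 2) / (2 * width ** 2))

def socioemotional_plateau(age, midpoint=13.0, steepness=1.0):
    """Casey et al.: arousability rises until mid-adolescence, then plateaus
    and remains at that level into adulthood."""
    return 1.0 / (1.0 + math.exp(-steepness * (age - midpoint)))

def control_linear(age, start=10.0, end=25.0):
    """Steinberg / Casey et al.: cognitive control improves gradually and
    roughly linearly into the mid-20s."""
    return min(max((age - start) / (end - start), 0.0), 1.0)

def control_early_plateau(age, start=10.0, mature=16.0):
    """Luna & Wright / Luciana & Collins: control is adult-like by
    mid-adolescence."""
    return min(max((age - start) / (mature - start), 0.0), 1.0)

for age in range(10, 26):
    print(age,
          round(socioemotional_inverted_u(age), 2),
          round(control_linear(age), 2))
```

Under these assumed curves, the "maturational imbalance" implied by each model is simply the gap between its socioemotional curve and its control curve at a given age, which is largest in mid-adolescence for every variant.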
Another perspective, Ernst's (2014) triadic model, expands on the dual systems concept by hypothesizing that a third brain system—one responsible for emotional intensity and avoidance, anchored in the amygdala—is also important for understanding the developmental differences in “motivated behavior.” With respect to the type of reward-seeking risky behavior that the dual systems models seek to explain, Ernst (2014) speculates that this emotion/avoidance system may serve to boost impulsive decisions in adolescence by amplifying the perceived cost of delay. She also proposes that this system may become hypoactive—dampening avoidance impulses—in the face of a potential reward that activates the socioemotional system. While this model is intuitively appealing, there is not much evidence to date indicating that the emotion/avoidance system and its developmental trajectory help to explain heightened levels of risk taking in adolescence. Also, the role of the amygdala in decision-making is not yet clear (see e.g., Somerville et al., 2014). Therefore, our review does not address this third hypothesized system.
2. The current article
In this article, we review evidence from both the behavioral and neuroimaging literatures that has emerged since the dual systems model was originally articulated in 2008. In particular, we consider the degree to which extant research findings support, extend, modify, and challenge the theory. We focus our discussion on three main propositions of the model: (1) that reward sensitivity peaks in adolescence; (2) that cognitive control increases linearly during this period; and (3) that heightened risk-taking during adolescence is the product of heightened reward-seeking and relatively weaker cognitive control.
We begin by addressing a recent criticism of the basic premise that middle adolescence is an especially intensified period of risky behavior. We then examine evidence regarding the trajectory of sensation seeking across development, the reward processing circuitry that might underlie developmental changes in sensation-seeking behavior, and the extent to which heightened sensation seeking and reward sensitivity are related to pubertal development. Next, we survey evidence on the developmental trajectory of the ability to control impulsive behavior through self-regulatory processes, and on the maturation of the brain's cognitive control network, which is proposed to undergird this ability. Finally, we consider evidence concerning the interaction of the two proposed systems during risky decision making, identify several unresolved issues, and offer some recommendations for how they might be addressed in future research.
In examining how recent evidence informs the dual systems model, we are cognizant of critiques of this viewpoint, including contentions that the model inadequately accounts for studies that do not find adolescents to be particularly sensitive to reward (Pfeifer and Allen, 2012; but see Strang et al., 2013 for a response to this critique), that cognitive control does not unequivocally improve during adolescence (Crone and Dahl, 2012), and that adolescence may not actually be a peak period of vulnerability to risk-taking (e.g., Defoe et al., 2014, Willoughby et al., 2013; but see Ernst, 2014 for a response to Willoughby et al.). We briefly address these critiques here.
We do not disagree with a fourth critique of the dual systems model—that it is insufficiently nuanced (Pfeifer and Allen, 2012)—because this is almost certainly correct. However, we believe that even an admittedly simplified model can serve as a useful heuristic and, more important, can help to motivate research needed to flesh out the details of an initially simplistic account (for a full discussion see Strang et al., 2013). Moreover, given the influence this perspective continues to have on legal policy and practice, public health, and popular discourse about adolescence (Steinberg, 2014), it is important to ask whether this simplified account is helpful or misguided.
It may be useful at this juncture to clarify our terminology. To begin, the term “adolescence” warrants discussion. Largely as a matter of convenience, scholars generally agree that adolescence begins when pubertal development becomes evident, around age 10 (somewhat later among males). The end of adolescence—the attainment of adult status—is not easily pegged to any single biological or social event, however. In research, adulthood is often defined as beginning at either age 18 or 21, the two ages most often tied to legal majority in the developed world. However, given that 18- to 21-year-olds in industrialized societies are rarely regarded outside the legal system as fully mature adults, and typically have not attained many of the traditional markers of adult status (e.g., financial independence, completion of formal education, stable romantic relationships, full-time employment, parenthood), we prefer to refer to this age range as “late adolescence.” For purposes of this paper, our focus is mainly on the second decade of life, from about ages 10 to 21, which we subdivide into early adolescence (10–13), middle adolescence (14–17), and late adolescence (18–21).
Another source of confusion in discussions of the dual systems perspective concerns levels of analysis, since the perspective refers to overt behaviors (such as risk taking), the psychological states hypothesized to motivate them (such as sensation seeking), and the neural processes believed to undergird these states (such as reward sensitivity). In an earlier paper (Smith et al., 2013), we suggested that “reward sensitivity” and “cognitive control” be used to refer to the neurobiological constructs that are measured in studies of brain structure or function (see Fig. 2). These neurobiological phenomena have psychological manifestations (in our terminology, “sensation seeking” and “self-regulation”) that are measured by assessing psychological states or traits through the subjective reports of individuals or their evaluators.
Fig. 2. Constructs implicated in the dual systems model of adolescent risk-taking arranged by level of analysis.
For heuristic purposes, we use “sensation seeking” as an overarching label for a number of interrelated constructs that refer to the inclination to pursue “varied, novel, complex, and intense sensations and experiences and the willingness to take physical, social, legal, and financial risks for the sake of such experiences” (Zuckerman, 1994, p. 26). Recruitment of brain regions and systems implicated in reward-processing (e.g., ventral striatum, orbitofrontal cortex) has been linked to measures of sensation seeking in humans and other animals (Abler et al., 2006, Leyton et al., 2002, Lind et al., 2005). In a similar vein, we use the label “self-regulation” to refer to a group of interrelated but distinguishable constructs that refer to the capacity to deliberately modulate one's thoughts, feelings, or actions in the pursuit of planned goals; among these constructs are impulse control, response inhibition, emotion regulation, and attentional control. Aspects of self-regulation have been linked to the functioning of brain regions and systems that subserve cognitive control (e.g., lateral prefrontal, lateral parietal, and anterior cingulate cortices) (Luna et al., 2010, Mennigen et al., 2014).
Variations in sensation seeking and self-regulation, in turn, are associated with variations in behaviors, including risk taking, which can be measured through objective reports or observations. In our model, risk taking is a subset of many aspects of decision making that share some, but not all, characteristics in common. Furthermore, as Fig. 2 indicates, all decision making takes place within a broader context that encourages and enables some acts but discourages and prohibits others. As we discuss, the fact that adolescents’ risk taking is influenced by the broader context in which it occurs makes it difficult to move seamlessly between laboratory studies and the real world.
3. Are adolescents particularly prone to risk taking?
Allusions to adolescence as a time of rash behavior and poor decision making predate the articulation of the dual systems model by centuries. And yet, empirical evidence of a mid-adolescent peak in risk taking (at least in humans) is not unequivocal. As pointed out in a recent review of epidemiological data, the peak age for risk taking varies across different behaviors, and very often it is late adolescents, not middle adolescents, who exhibit the highest levels of recklessness (see Willoughby et al., 2013). For example, one of the most dangerous forms of substance use—binge drinking—is most common during the early 20s (Chassin et al., 2002, Willoughby et al., 2013).
Although some argue that these data pose a problem for the dual systems model, we disagree. The model does not posit that middle adolescents necessarily demonstrate the highest levels of all forms of risk taking in the real world. Rather, it asserts that risk-taking propensity is highest in mid-adolescence, but that the expression of this propensity is expected to vary depending on the context (as noted in Fig. 2). Our position is that late adolescents are less biologically predisposed to risk taking than middle adolescents (consistent with the dual systems model), but that they exhibit higher levels of many forms of real-world risk-taking due to greater opportunity. Compared to younger individuals, people in their early 20s typically experience less supervision from adults, have more financial resources, and are afforded greater legal access to many forms of risk taking (e.g., driving, alcohol, and gambling). Thus, we contend that maturational factors predispose middle adolescents to greater risk taking, but that social and legal factors constrain their opportunities to realize this predisposition. Simply put, it is far easier for the average 21-year-old to take risks with alcohol, cars, and gambling than it is for the average 15-year-old. If 15-year-olds were permitted to drive, purchase alcohol, and enter casinos legally, our prediction is that they would likely crash, binge drink, and gamble more than people in their early 20s.
3.1. Risk taking in the laboratory
In an effort to investigate age differences in risk-taking propensity, unconfounded by age differences in opportunity, researchers have tested adolescent and adult participants using artificial tasks—typically gambling games and driving simulations—that give them the option to take risks in the safety of a laboratory setting. While such tasks often lack ecological validity, they have the advantage of controlling for contextual differences between adolescents and other age groups, as well as for age differences in behavioral preferences. These studies yield inconsistent results, with some finding greater risk taking in adolescence than in adulthood (e.g., Burnett et al., 2010, Mitchell et al., 2008, Van Leijenhorst et al., 2008, Van Leijenhorst et al., 2010a), others finding no age effects (e.g., Bjork et al., 2007, Eshel et al., 2007, de Water et al., 2014), and still others finding that adolescents engage in less risk taking than children (Paulsen et al., 2011). These inconsistent findings suggest that, if there is an increased risk-taking propensity in adolescence, it may manifest only under certain conditions (see Defoe et al., 2014 for a recent meta-analysis).
Recently, researchers have used laboratory tasks and manipulations that better approximate certain aspects of real-life risky decision-making. These studies have helped to delineate the conditions under which adolescents may be more predisposed than other age groups to take risks. For example, noting that during most real-world risk taking the actual chances of a positive or negative outcome are unknown, researchers recently tested whether age differences in risk taking depend on whether the probabilities of a successful outcome are known or unknown (Tymula et al., 2012, Tymula et al., 2013). Tymula and colleagues (2012) had adolescents and adults complete a risk-taking task with two different conditions: a “known risk” condition and an “ambiguous risk” condition. In the “known risk” condition, participants chose between a sure bet (100% chance of receiving $5) and a “risky” bet with known reward probabilities (e.g., a 50% chance of winning $50, versus $0 if they lost). In the “ambiguous risk” condition, participants again chose between a sure and risky option, but this time the likelihood of winning or losing on the risky option was unknown. Compared to adults, adolescents made fewer risky decisions when the probabilities of loss were known (i.e., adolescents were less risk tolerant). However, when the probabilities were unknown, adolescents made significantly more risky decisions than adults. Thus, under conditions that are more representative of real-life risk-taking (where risk probabilities are typically unknown), adolescents evince a greater risk-taking propensity than adults.
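The structure of these choices can be expressed as a simple expected-utility comparison. The Python sketch below is purely illustrative: the power-utility form, the linear ambiguity-shading rule, and every parameter value (alpha, beta) are assumptions made for exposition, not estimates from Tymula et al.; only the $5 sure option and the $50-or-$0 gamble come from the task described above.

```python
# A minimal sketch (hypothetical parameters) of the choice structure in
# Tymula et al. (2012): a sure $5 versus a gamble paying $50 or $0.
# Power utility u(x) = x**alpha; alpha < 1 means the agent is risk averse.

def utility(x, alpha):
    return x ** alpha

def prefers_gamble(p_win, payoff, sure, alpha):
    """True if the expected utility of the gamble exceeds the sure option."""
    return p_win * utility(payoff, alpha) > utility(sure, alpha)

# Known risk: the winning probability is stated (here, 50%).
known = prefers_gamble(0.50, 50, 5, alpha=0.5)

# Ambiguous risk: the probability is unstated. One common model has the agent
# act on a subjectively shaded probability p_eff = p - beta * A / 2, where A
# is the ambiguity level and beta an ambiguity-aversion parameter (both
# hypothetical here).
def effective_p(p=0.5, ambiguity=1.0, beta=0.3):
    return p - beta * ambiguity / 2

tolerant = prefers_gamble(effective_p(beta=0.1), 50, 5, alpha=0.5)  # takes it
averse = prefers_gamble(effective_p(beta=0.6), 50, 5, alpha=0.5)    # declines
print(known, tolerant, averse)
```

In this framing, the adolescent pattern reported by Tymula and colleagues corresponds to a relatively low alpha (less risk tolerant under known probabilities) combined with a relatively low beta (more tolerant of ambiguity), though the specific values here are invented for illustration.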
Another way in which real-life risk taking differs from risk taking in the laboratory is with respect to emotional arousal. Contexts in which risk taking occurs outside the lab are often thrilling or frightening; in the lab, both the nature of the risk taking (the stakes and considerations involved) and the surrounding environment are typically less exciting. Scholars have argued that differences in arousal give rise to fundamentally different ways of processing information (e.g., Luna and Wright, 2016, Metcalfe and Mischel, 1999). The dual systems model holds that, to the extent that decision-making occurs under conditions that arouse the socioemotional system (e.g., conditions that are relatively more thrilling), differences between adolescent and adult decision-making and, hence, risk taking will be more pronounced. This pattern was observed in one study that experimentally manipulated the degree to which a card game risk-taking task was affectively arousing (Figner et al., 2009). Consistent with the dual systems account, adolescents evinced greater risk taking and poorer use of risk-relevant information than adults, but only in the more arousing version of the task.
A third difference between most laboratory risk-taking tasks and real-life risk taking is that, in the laboratory, adolescents are asked to make decisions when they are alone, whereas the majority of risky behaviors during adolescence occur in groups (Albert et al., 2013). To mimic this context in the lab, researchers have employed experimental manipulations in which adolescents complete risk-taking tasks in the presence of peers (real or illusory). Some studies have asked participants to bring same-aged peers to the lab (Chein et al., 2011, Gardner and Steinberg, 2005, Kretsch and Harden, 2014), while others have deceived participants into believing that they are being observed remotely by a peer (Smith et al., 2014a). Not only does the “presence” of peers increase the ecological validity of the risk-taking task (because adolescent risk taking often occurs in groups), but it also appears to elevate emotional arousal, which further increases the comparability to real-world risk-taking contexts.
Studies that have manipulated the social context have found that peer presence induces greater risk taking among adolescents than among adults (Chein et al., 2011, Gardner and Steinberg, 2005, Smith et al., 2014a). These findings, which are largely consistent with other studies of peer effects on adolescent driving [e.g., Segalowitz et al., 2012; see Lambert et al. (2014) for a review], suggest that adolescents are particularly vulnerable to the effects of peer presence on risk-taking behavior. Moreover, neuroimaging data suggest that the effect of peer presence on risk taking is due to increased affective arousal, as evidenced by greater activation of brain regions within the socioemotional system (Chein et al., 2011). A recent extension of this line of work in our lab, using a rodent model, found that adolescent mice, but not adult mice, consume more alcohol in the presence of same-aged conspecifics than when alone (Logue et al., 2014).
Overall, then, there is evidence for increased risk taking in adolescence compared to adulthood, though developmental differences may only be evident under certain conditions, such as emotional arousal, ambiguous risk, and the presence of others. The tendency for adolescents to engage in more risky behaviors in highly-arousing contexts together with increased engagement of their socioemotional system during peer observation point to the importance of reward processing in decision making during this period of life, a topic to which we now turn.
4. The development of sensation seeking and reward sensitivity
Increased adolescent risk taking in contexts that are emotionally arousing is consistent with one of the central tenets of the dual systems model—that activation and reactivity of the socioemotional system reaches its peak during mid- to late adolescence. A growing literature interrogates this aspect of the model by examining the psychological and neurological evidence for heightened responsiveness of the socioemotional system during adolescence, including in situations that do not involve risky decision-making. This is important because the dual systems model proposes that the socioemotional system is more responsive generally in adolescence than at other ages, not only in the context of risk taking.
Moreover, the model hypothesizes that the developmental course of the socioemotional system is, unlike the development of the cognitive control system, closely tied to pubertal development (for review see Smith et al., 2013). Around age 12 (for boys) or 11 (for girls), pubertal hormones inundate the brain, triggering a series of changes in neural structure and function (Euling et al., 2008, Schulz et al., 2009), especially in dopamine-rich limbic regions associated with reward processing (Blakemore et al., 2010, Sinclair et al., 2014). It is thought that these hormone-related changes sensitize the adolescent brain to reward (Forbes and Dahl, 2010, Peper and Dahl, 2013), as appears to be the case in animal studies (Alexander et al., 1994, Clark et al., 1996, Miele et al., 1988). More specifically, the reward system is particularly sensitive to the sudden surge of hormones at the start of puberty, heightening sensitivity to affective stimuli. Although pubertal hormones do not decline into adulthood, we posit that a decrease in reward sensitivity ensues during later adolescence and into young adulthood as the reward system becomes desensitized to the effects of these hormones (Smith et al., 2013). While admittedly limited, recent evidence integrating measures of puberty into psychological, behavioral, and neuroscience studies supports this claim as well.
4.1. Sensation seeking
One psychological manifestation of socioemotional reactivity is sensation seeking. As anticipated by the dual systems model, measures of sensation seeking are often found to be predictive of self-reported risk taking (e.g., Kong et al., 2013, MacPherson et al., 2010). True sensation-seeking behavior is difficult to elicit in laboratory environments (at least, among human subjects); consequently, the vast majority of studies examining age-related changes in sensation seeking rely on self-report. As would be expected within the dual systems account, longitudinal and cross-sectional studies generally find evidence of a peak in self-reported sensation seeking around mid-adolescence and a decrease into adulthood (Harden and Tucker-Drob, 2011, Peach and Gaultney, 2013, Quinn and Harden, 2013, Romer and Hennessy, 2007, Shulman et al., 2014a, Shulman et al., 2014b, Steinberg and Chein, 2015, Steinberg et al., 2008). This overall pattern is further corroborated by a number of longitudinal studies following individuals from childhood into adolescence, which find that sensation seeking increases across this time period (Collado et al., 2014, Lynne-Landsman et al., 2011, MacPherson et al., 2010). For example, using the Brief Sensation Seeking Scale (Hoyle et al., 2002), Collado and colleagues (2014) found a linear increase in sensation seeking in individuals aged 9–13. Fewer longitudinal studies of sensation seeking have followed individuals from adolescence into adulthood. However, two recent studies using a large, longitudinal data set (the National Longitudinal Study of Youth 1979 Child and Young Adult Survey) have helped to address this gap and clarify the developmental pattern of sensation seeking across adolescence. Harden and Tucker-Drob (2011) found that self-reported sensation seeking increased from age 10 to mid-adolescence, and then decreased thereafter into early adulthood. Analyzing the same data set, Shulman et al. (2014a, 2014b) found that females demonstrated an earlier peak in sensation seeking (age 16–17) than males (age 18–19), and a steeper decline thereafter. Overall, these studies suggest that, as the dual systems model would predict, sensation seeking follows an inverted-U pattern over time, consistent with the proposed pattern of change in the socioemotional system.
The hypothesis that pubertal development drives developmental change in the socioemotional system in adolescence is derived in part from older studies linking higher levels of sensation seeking to more advanced pubertal status (Martin et al., 2002, Resnick et al., 1993). Newer studies have replicated this result (Castellanos-Ryan et al., 2013, Gunn and Smith, 2010, Quevedo et al., 2009, Urošević et al., 2014) and have found evidence that the correlation between self-reported pubertal development and sensation seeking may be stronger for boys than for girls (Castellanos-Ryan et al., 2013, Steinberg et al., 2008). Also, as would be expected based on the link between puberty and sensation seeking, recent studies have found that more advanced pubertal status in adolescents is associated with greater involvement in behaviors that are closely related to sensation seeking, such as substance use (Castellanos-Ryan et al., 2013, de Water et al., 2013, Gunn and Smith, 2010), law-breaking (Collado et al., 2014, Kretschmer et al., 2014), and risk taking in laboratory contexts (Collado et al., 2014, Kretsch and Harden, 2014, Steinberg et al., 2008; but see van Duijvenvoorde et al., 2014 who did not find a correlation between pubertal status and performance on a gambling task).
4.2. Behavioral manifestations of reward sensitivity
Compared to self-report studies of sensation seeking, there are markedly fewer behavioral studies examining the development of reward sensitivity, and these have heterogeneous methodologies and findings, which makes it difficult to draw firm conclusions about age differences. One large-scale study utilized the Iowa Gambling Task (IGT; Cauffman et al., 2010) to explore age-related changes in reward sensitivity. In the standard version of the IGT, participants are presented with four decks of cards, two that will win them money over repeated play (advantageous decks) and two that will lose them money over repeated play (disadvantageous decks); participants are permitted to choose freely from the four decks (e.g., Smith et al., 2011b). However, Cauffman et al. (2010) modified the task such that the computer pseudorandomly selected a deck on each trial and the participant was asked to decide whether to play or pass. This modification allowed the researchers to disentangle affinity for the advantageous decks—a measure of reward sensitivity—from avoidance of disadvantageous decks. The results indicated that mid-adolescents aged 14–17 and older adolescents aged 18–21 learned to play from advantageous decks faster than either younger adolescents (ages 10–13) or adults (ages 22–25), a finding that was recently replicated in an international sample of more than 5000 individuals (Steinberg and Chein, 2015). This outcome suggests that ages 14–21 are a period of heightened sensitivity to reward. Using the same data set, Steinberg (2010) also found that self-reported sensation seeking, but not impulsivity, was associated with overall rate of plays from rewarding decks at the end of the task.
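The logic of the play/pass modification can be illustrated with a toy version of the task. The payoff values and fixed offer sequence in the Python sketch below are hypothetical (the real IGT uses variable gains with occasional large losses); the point is only how computer-selected decks separate approach to advantageous decks from avoidance of disadvantageous ones.

```python
# Illustrative sketch (hypothetical payoffs, not the actual IGT schedule) of
# the play/pass variant in Cauffman et al. (2010): the computer selects the
# deck on each trial and the player decides only whether to play or pass.

PAYOFF = {"A": -25, "B": -25,   # disadvantageous decks: lose money over time
          "C": +25, "D": +25}   # advantageous decks: win money over time

OFFERS = ["A", "C", "B", "D"] * 10   # a fixed 40-trial offer sequence

def run_block(policy, offers=OFFERS):
    """policy(deck) -> True to play, False to pass.
    Returns (net winnings, proportion of advantageous offers played)."""
    net = adv_plays = adv_offers = 0
    for deck in offers:
        advantageous = deck in ("C", "D")
        adv_offers += advantageous
        if policy(deck):
            net += PAYOFF[deck]
            adv_plays += advantageous
    return net, adv_plays / adv_offers

# A player who has learned the deck structure plays only the good decks;
# the second return value indexes approach to reward (the measure of
# reward sensitivity described above).
print(run_block(lambda deck: deck in ("C", "D")))   # (500, 1.0)
```

In the actual study, the speed with which the play rate on advantageous decks rises across blocks, rather than this end-state policy, is what differentiated the age groups.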
Another way researchers have examined developmental differences in reward sensitivity is by substituting neutral stimuli (e.g., letters) with rewarding ones (e.g., happy faces) in traditional behavioral tasks (e.g., measures of impulse control), and then observing the extent to which the presence of rewarding stimuli impacts performance. Two such studies have examined age differences (comparing children, adolescents, and adults) in performance on an “Emotional Go/No-Go” task. In all Go/No-Go tasks, participants are presented with a rapid sequence of target and non-target stimuli, both of which are typically emotionally neutral. Participants are instructed to press a button when a target stimulus is presented (a “go” trial) and to withhold the button press (do nothing) when a non-target stimulus is presented (a “no-go” trial). Non-target events occur relatively infrequently, making it challenging for participants to restrain the impulse to press the button on no-go trials. [As with most measures of self-regulation, performance improves linearly with age on traditional Go/No-Go tasks (see Casey et al., 2002 for a review).]
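The scoring logic of the task just described can be sketched in a few lines. In this hypothetical Python example, the impulse-control measure of interest is the false-alarm rate (erroneous button presses on no-go trials); the trial counts are invented, chosen only so that non-targets are infrequent, as in the tasks described above.

```python
# Hypothetical scoring sketch for a Go/No-Go task: the key impulse-control
# measure is the false-alarm rate (button presses on no-go trials).

def score_go_no_go(trials):
    """trials: list of (kind, pressed) pairs, kind in {'go', 'no-go'}.
    Returns (hit rate on go trials, false-alarm rate on no-go trials)."""
    go = [pressed for kind, pressed in trials if kind == "go"]
    nogo = [pressed for kind, pressed in trials if kind == "no-go"]
    hit_rate = sum(go) / len(go)
    false_alarm_rate = sum(nogo) / len(nogo)
    return hit_rate, false_alarm_rate

# Non-targets are infrequent (here 25% of trials), which builds a prepotent
# "press" response that must be inhibited on no-go trials.
example = [("go", True)] * 70 + [("go", False)] * 5 + \
          [("no-go", True)] * 8 + [("no-go", False)] * 17
hits, fas = score_go_no_go(example)
print(hits, fas)   # high hit rate, false-alarm rate of 0.32
```

In the Emotional Go/No-Go variant, the same false-alarm rate is computed separately for each face type (e.g., happy vs. calm no-go stimuli), and the developmental question is whether the difference between those two rates is largest in adolescence.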
In the Emotional Go/No-Go task employed by Somerville et al. (2011), the stimuli were photographs of either happy or calm faces. They found that for adolescents (ages 13–17) more than for children (ages 6–12) or adults (ages 18–29), withholding a button press was more difficult when the no-go stimulus was a happy face—a rewarding stimulus—than when it was a calm face. In fact, only adolescents showed impaired impulse control in the happy, relative to the calm, no-go condition. The researchers proposed that adolescents’ greater emotional response to the (rewarding) happy face made it harder for them to restrain the impulse to “approach” it (i.e., press the “go” button). If so, these results support the proposition that adolescents are particularly sensitive to reward. However, another study (Tottenham et al., 2011) using a similar, but not identical, Emotional Go/No-Go task did not find this pattern (i.e., they found no emotion-by-condition-by-age-group interaction for erroneous button presses). Though there were methodological differences between the two studies, the failure to find the effect in one of them underscores the need for further research in this vein. It also highlights the benefits of incorporating neuroimaging methods: engagement of the socioemotional system may not always be robust enough (especially in laboratory settings) to consistently bias behavior, but neuroimaging enables researchers to detect age differences in the engagement of this system even in the absence of behavioral consequences.
4.3. Neuroimaging of reward sensitivity
In recent years, many neuroimaging studies have asked whether adolescents are particularly sensitive to reward. Beyond identifying differences between adolescents and other age groups, these studies help address questions about the neural mechanisms underlying adolescents’ heightened reward-seeking. For example, among those who agree that adolescents are more inclined than adults to seek out rewards, there has been disagreement over whether this results from the fact that rewards are experienced as exceedingly pleasurable during adolescence (and are therefore more enticing) or because they are experienced as less so (and are therefore less satisfying). Indeed, one early notion, now largely discredited, posited that adolescents suffer from a “reward deficiency syndrome” which impels them to seek out exciting experiences because mundane ones are not sufficiently rewarding, much like addicts who seek out drugs because quotidian experiences no longer excite them (for a discussion, see Spear, 2002).
To date, most of the developmental neuroscience literature has focused on developmental differences in the striatum, and more specifically in the ventral portion of the striatum, which is considered one of the main regions involved in the calculation of reward (Knutson et al., 2001, Luciana and Collins, 2012). In our dual systems account, increases in risk taking and other reward-seeking behaviors are thought to be a consequence of increased engagement of the striatum during decision-making, thus biasing adolescents toward more rewarding choices. Heightened sensitivity to rewarding outcomes of prior decisions may contribute to adolescent risk-taking as well. There is evidence, for example, that the volume of the nucleus accumbens (part of the ventral striatum and presumed to be the central structure in the reward system) increases during the first part of adolescence and then shrinks thereafter (Luciana and Collins, 2012).
As discussed by Steinberg (2008), the neuroscience literature includes both studies that support and challenge the dual systems account of heightened striatal engagement during adolescence (e.g., Bjork et al., 2004, Galvan et al., 2006). Since that 2008 publication, several reviews have discussed methodological differences across these studies that may have contributed to the inconsistent findings (see Galvan, 2010, Richards et al., 2013). Indeed, the developmental neuroimaging literature on reward processing has grown substantially over the last several years, and we believe there are patterns to be noted, and conclusions to be drawn, that help explain what appear to be contradictory findings.
In its current state, the literature provides considerable evidence that when developmental differences in striatal activation are present during reward processing (both during the anticipation and the receipt of reward) adolescents engage the striatum to a greater extent than both children and adults (Barkley-Levenson and Galvan, 2014, Christakou et al., 2011, Galvan and McGlennen, 2013, Galvan et al., 2006, Geier et al., 2010, Hoogendam et al., 2013, Jarcho et al., 2012, Padmanabhan et al., 2011, Silverman et al., 2015, Van Leijenhorst et al., 2010b). For example, a recent longitudinal study found that, across mid-adolescence (roughly ages 15–18), ventral striatal activation in response to “risk taking” on the balloon analogue task (which also reflects reward-seeking) declines intra-individually over time, and that striatal activation during the task is correlated with self-reported risk taking outside the laboratory (Qu et al., 2015). However, a handful of studies find the opposite pattern—dampened striatal response during adolescence relative to adulthood (Bjork et al., 2004, Bjork et al., 2010, Hoogendam et al., 2013, Lamm et al., 2014)—and others fail to demonstrate any age differences (Krain et al., 2006, Teslovich et al., 2014, Van Leijenhorst et al., 2006).
In trying to explain this inconsistency, it is important to note that disparate findings emerge only for contrasts that focus on the anticipation of a reward. Studies focusing on striatal engagement during the receipt of a reward consistently find that adolescents engage the striatum to a greater extent than adults (Galvan and McGlennen, 2013, Hoogendam et al., 2013, Van Leijenhorst et al., 2010b), suggesting that adolescents are—as the dual systems model claims—more sensitive to rewarding outcomes.
The fact that striatal engagement is relatively higher among adolescents than among children or adults during receipt of rewards but not necessarily during reward anticipation potentially challenges our conception of adolescent risk taking as being driven by the prospect of a reward. However, nuances in task design, in the modeling of the anticipatory event in imaging analyses, and in the relationship between striatal engagement and behavioral reward sensitivity may account for these seemingly inconsistent results. First, the time points at which events are modeled, and the specific trial periods that are included within the model, can dramatically affect the observed neural response (e.g., Geier et al., 2010). One factor that seems to differentiate studies that do and do not report increased adolescent engagement of the striatum during reward anticipation is the degree to which anticipatory cues reliably predict the delivery of the reward. Studies using a task design in which the reward cue signals not only the opportunity for reward, but also an increased likelihood of earning that reward, tend to find increased adolescent activity in the striatum during anticipation (e.g., Barkley-Levenson and Galvan, 2014, Van Leijenhorst et al., 2010b). Meanwhile, studies using tasks in which the anticipatory cue signals the possibility of earning a reward, but is equivocal with respect to the likelihood of actually obtaining it (as in typical implementations of the Monetary Incentive Delay task), do not yield a consistent pattern of developmental differences (e.g., Bjork et al., 2007, Teslovich et al., 2014).
Second, developmental findings regarding striatal responses during reward anticipation are more consistent in studies where there is also concomitant behavioral evidence that adolescents are relatively more sensitive to the rewards being presented (e.g., faster reaction times on rewarded trials, more reward-related errors, etc.). The handful of studies that include such a behavioral measure, or that control for behavioral differences in reward sensitivity across development (Barkley-Levenson and Galvan, 2014, Christakou et al., 2011, Geier et al., 2010, Padmanabhan et al., 2011, Somerville et al., 2011), all report both greater recruitment of the striatum during anticipation of reward and higher behavioral reward sensitivity among adolescents compared to adults. Unfortunately, the majority of reward tasks used in the developmental literature lack a useful behavioral index of reward sensitivity, an issue that may also account for variability in striatal findings across development.
Our lab recently explored how socioemotional arousal influences adolescents’ neural responses to reward by testing whether the presence of peers increased striatal activation during a reward-processing task in which no risk was involved (Smith et al., 2015). In this study, we examined the effects of peer observation on adolescents’ and adults’ neural response to reward using a modified version of the High/Low Card Guessing Task (Delgado et al., 2003, May et al., 2004). During the receipt of reward, adolescents who completed the task in the presence of their peers recruited the striatum to a greater degree than when they completed the task alone. Furthermore, only when peers were present did adolescents evince greater striatal activation than adults. These findings provide corroborating evidence that, during adolescence, social context is an important modulator of reward processing, even when this processing is uncoupled from risk taking. Consistent with this claim, we have shown that, in the presence of peers, adolescents evince a stronger preference for immediate (as opposed to delayed) rewards on a Delay Discounting task that does not involve risk taking (O’Brien et al., 2011, Weigard et al., 2014).
Recent neuroimaging studies also support the idea that, in addition to having profound effects on brain structure [a topic not covered in the present article; see Blakemore et al. (2010) and Smith et al. (2013) for reviews], pubertal development plays a role in developmental change in the sensitivity of the striatum to reward (Braams et al., 2015, Forbes et al., 2010, Op de Macks et al., 2011). For example, a landmark, longitudinal neuroimaging study of children, adolescents and young adults (N = 299, ages 8–27) found, as previous studies have, that activation of the nucleus accumbens in response to monetary reward (relative to loss) was higher in mid-adolescence than at other ages (Braams et al., 2015). Moreover, activation of this region was related both to greater self-reported pubertal stage and higher levels of salivary testosterone (Braams et al., 2015). This finding provides strong support for the claim that the heightened responsiveness of the socioemotional system during adolescence is, at least in part, a result of pubertal development.
Thus far, we have discussed reward sensitivity specifically with respect to striatal activation. However, there also have been advances in how we understand developmental changes in the functioning of other regions hypothesized to participate in reward processing, including the dorsal portion of the striatum (Benningfield et al., 2014, Hoogendam et al., 2013, Lamm et al., 2014), mPFC (Christakou et al., 2011), OFC (Galvan et al., 2006, Galvan and McGlennen, 2013, Hoogendam et al., 2013, Van Leijenhorst et al., 2010b), and the anterior insular cortex (AIC) (Galvan and McGlennen, 2013, Van Leijenhorst et al., 2010b).
In a recent paper, we posited that continuing maturation of connectivity between the striatum and the AIC, which appears to act as a connective hub influencing the engagement of both the control and reward processing networks, may account for inconsistent recruitment of the striatum in adolescent reward processing (Smith et al., 2014b). Because reward processing entails the coordinated action of a network of regions, developmental studies examining the reward system as a whole, rather than focusing on activation of specific regions considered in isolation, will likely yield greater insight into changes in reward processing during adolescence (Smith et al., 2014b).
One study has already demonstrated the potential value of such a network-based approach. Cho and colleagues (2012) used resting state data in conjunction with a reward processing task to examine functional connectivity among the striatum, thalamus, and AIC in adolescents and adults. They found that during anticipation of reward (i.e., during cue presentation) adolescents and adults did not differ in the functional connectivity between these regions. Further, they observed that activity in the AIC and thalamus preceded ventral striatal activation in both adolescents and adults. These results suggest that bottom-up processing of rewards (as demonstrated by communication among these three regions) is adequately developed by adolescence. Therefore, it may be that developmental differences between adolescents and adults in reward sensitivity are due not to immature connectivity, but rather to differences in top-down influences on the subjective valuation of reward. More studies considering the socioemotional system as a coordinated network are needed to inform our understanding of how the development of this system relates to age differences in reward processing.
In summary, despite occasional inconsistencies in the literature, self-reported sensation seeking, behavioral measures of reward sensitivity, and neuroimaging studies of reward processing support the contention that reward sensitivity reaches its apex during adolescence (e.g., Barkley-Levenson and Galvan, 2014, Christakou et al., 2011, Collado et al., 2014, Galvan and McGlennen, 2013, MacPherson et al., 2010, Shulman et al., 2014a, Shulman et al., 2014b, Somerville et al., 2011, Van Leijenhorst et al., 2010b). The bulk of developmental research on this topic provides evidence for a mid-adolescent peak in reward sensitivity, and although the neuroimaging literature does not allow for a precise estimation of age of peak striatal response, the weight of the evidence indicates that adolescents engage the striatum (and other components of the reward network) to a greater extent than adults, particularly during receipt of reward and when differences in reward sensitivity are reflected in decision-making behavior. Also consistent with the dual systems account, studies that have incorporated measures of puberty typically find that sensation seeking and reward sensitivity are higher among those (particularly boys) who are more pubertally advanced.
5. The development of self-regulation and cognitive control
5.1. Self-reported impulsivity
A second major claim of the dual systems model is that cognitive control increases linearly across adolescence and does not reach full maturity until several years after the peak period of reward sensitivity. In the developmental literature, impulse control (or its inverse, impulsivity) is the psychological variable most often used to assess self-regulation (or its absence). Impulsiveness—acting in an unplanned and reactive, or less thought out, fashion—is often considered a quintessential adolescent characteristic that predisposes adolescents to engage in reckless behaviors (Romer, 2010). To date, studies examining age differences in self-reported impulsivity—both cross-sectional (Leshem and Glicksohn, 2007, Steinberg et al., 2008) and longitudinal (Harden and Tucker-Drob, 2011)—find that impulsivity decreases with age across the second decade of life.
Importantly, the protracted maturation of impulse control is believed to continue into young adulthood: even 18–19-year-olds report higher impulsivity (i.e., less impulse control) than individuals in their early 20s (Vaidya et al., 2010). Although adults sometimes engage in impulsive acts, by the early-to-mid 20s the frequency of impulsive behavior appears to stabilize at levels much lower than those exhibited by adolescents (Steinberg et al., 2008, Quinn and Harden, 2013). For example, using a three-item impulsivity scale, Quinn and Harden (2013) found a linear decrease in self-reported impulsivity between the ages of 15 and 21, but no further age differences among individuals between 21 and 25.
5.2. Behavioral measures of self-regulation
Self-regulation is commonly assessed in behavioral tasks that require response inhibition, a form of cognitive control that involves overcoming automatic or inappropriate responses in favor of goal-relevant information processing and actions (Casey et al., 2002). The most widely used measures of response inhibition (e.g., Go/NoGo, antisaccade, and Stroop paradigms) are typically configured to assess “reactive inhibition,” which refers to the outright restraint of motor and perceptual impulses in response to an external stimulus (e.g., canceling a prepotent response upon seeing a signal, or maintaining attention in the presence of distractions). A wealth of behavioral evidence on reactive inhibitory control demonstrates that self-regulation improves from childhood to adulthood (Bezdjian et al., 2014, Bunge et al., 2002, Casey et al., 1997, Casey et al., 2002, Durston et al., 2002, Marsh et al., 2006, Paulsen et al., 2015, Rubia et al., 2006, Rubia et al., 2013, Smith et al., 2011a, Tamm et al., 2002, Velanova et al., 2009, Veroude et al., 2013).
Within this literature, adolescents and adults consistently demonstrate better inhibitory control than children; however, differences between adolescents and adults are not consistently found unless the behavioral paradigm is particularly challenging. For example, studies that use the traditional Stroop color-word task find no differences in cognitive control between adolescents and adults (e.g., Andrews-Hanna et al., 2011), whereas studies that use an emotional version of the Stroop to assess the effect of emotional interference on cognitive control report improvements in self-regulation from adolescence to adulthood (e.g., Veroude et al., 2013). Thus, while adolescents’ ability to inhibit impulses appears to be comparable to adults’ in relatively simple tasks, the self-regulatory skills necessary to respond appropriately to more cognitively demanding situations continue to improve from adolescence to adulthood. This developmental pattern is also observed in measures of proactive (as opposed to reactive) inhibitory control, which involves advance planning and monitoring in anticipation of the need to stop a response or to refrain from a future action (e.g., slowing down responses to go-stimuli in anticipation of an upcoming no-go signal, thereby allowing more time to appropriately cancel a response when the no-go signal appears; Vink et al., 2014). These findings suggest that while basic response inhibition mechanisms may be mature by adolescence, the self-regulatory mechanisms underlying challenging reactive response inhibition tasks and proactive response inhibition (e.g., planning) may still be developing into the early 20s.
The proposition that the prolonged development of self-regulation is more evident under challenging conditions has been demonstrated using the Tower of London task. In this task, which requires strategic planning, participants must rearrange objects on pegs (either real or depicted on a computer monitor) to produce a specific pattern in the fewest possible moves (De Luca et al., 2003, Steinberg et al., 2008). Researchers manipulate the difficulty of trials by increasing the number of moves required to complete the rearrangement successfully. The amount of time a participant spends deliberating before making his or her first move (latency to first move) is used as a measure of impulse control (because making an initial move too rashly can extend the number of moves needed to solve the problem). A study from our lab found no differences between children, adolescents, and adults in latency to first move or in the number of moves taken to complete the trial on easy trials (i.e., those that can be solved in 3 moves) (Albert and Steinberg, 2011). However, on difficult trials (i.e., those that required 5 or more moves to be solved), performance improved with age from childhood to adulthood, and this trend coincided with greater deliberation time prior to the initial move. These findings suggest that when difficult tasks are used, such as those that require strategic planning, improvement in self-regulation continues throughout adolescence and into the early 20s, consistent with the dual systems model.
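As a concrete illustration, the two Tower of London measures described above, latency to first move and number of moves taken, might be computed from trial data as follows. The data format is hypothetical, and the difficulty bins simply mirror the easy (3-move) versus difficult (5-or-more-move) distinction drawn in the text.

```python
# Minimal sketch of Tower of London trial scoring (hypothetical data
# format): move_times are seconds from trial onset at which each move
# was made; min_moves is the trial's minimum-solution length.

def score_trial(min_moves, move_times):
    """Return (difficulty bin, latency to first move, excess moves).

    Latency to first move indexes deliberation before acting (impulse
    control); excess moves index how far performance fell short of the
    optimal solution.
    """
    difficulty = "easy" if min_moves <= 3 else "hard"
    latency = move_times[0]               # seconds until the first move
    excess = len(move_times) - min_moves  # moves beyond the optimum
    return difficulty, latency, excess

# A hard trial (5-move optimum) solved in 7 moves after 2.4 s of planning:
print(score_trial(5, [2.4, 3.1, 4.0, 4.8, 5.5, 6.3, 7.0]))  # -> ('hard', 2.4, 2)
```

On this scoring, the developmental pattern reported by Albert and Steinberg (2011) would appear as age-related increases in latency, and decreases in excess moves, on hard trials only.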
Importantly, the ongoing maturation of self-regulation into early adulthood is also in line with the idea that self-regulation develops independently of puberty (Smith et al., 2013). In the most comprehensive test of the relationship between self-regulation and pubertal status, Steinberg and colleagues (2008) found that self-reported and behavioral self-regulation were correlated with age but not pubertal status. Instead, pubertal status was more closely tied to sensation seeking. While this is the only study we know of that simultaneously examines the relationships among age, pubertal status, self-regulation, and sensation seeking, thus far the findings support the notion that the development of the socioemotional system is dependent on pubertal status, whereas self-regulation seems to develop independently of it.
Other findings suggest that adolescents’ ability to exert adult-like self-control also may vary depending on whether rewards are offered for better performance (Luna et al., 2001). In several studies that have rewarded participants for better performance on an antisaccade task, researchers have found that incentives boost adolescents’ performance to adult levels (Geier et al., 2010, Jazbec et al., 2006, Padmanabhan et al., 2011). For example, using an antisaccade task where some trials were rewarded and some were not, Geier and colleagues (2010) found that adolescents performed better on rewarded trials, compared to non-rewarded trials, though their overall task performance did not differ from that of adults. At first blush, it may appear that these results are incompatible with the dual systems model, since its basic claim is that heightened awareness of the availability of rewards should undermine cognitive control in adolescents, not strengthen it. It is important to note, though, that this proposition of the dual systems model is posited specifically with respect to situations in which reward-seeking impulses conflict with self-regulatory efforts, as do most instances of risk taking. In contexts where increased sensitivity to the opportunity for reward serves to motivate faster and more attentive responding, without disturbing relevant cognitive processes, adolescents’ relatively heightened sensitivity to reward may be helpful rather than harmful. For that matter, even in certain risk-taking scenarios—in particular, those in which increased risk taking results in more optimal performance, such as in certain gambling tasks for which risky choices have a higher expected value—greater reward sensitivity can offer an advantage.
Overall, the self-report and behavioral literatures on self-regulation suggest that this capacity improves with age across childhood, adolescence, and into adulthood. Furthermore, it may be that adolescents’ ability to self-regulate is more dependent than adults’ on contextual factors, such as task difficulty, the prospect of a reward for better self-control, and the manner in which rewards are presented. Although adolescents may exhibit adult-like self-regulation under ideal circumstances by around age 15, this capacity is still tenuous, and maturation of self-regulation may be best indexed by the consistency with which individuals demonstrate self-control across different contextual circumstances.
As we have noted previously (Strang et al., 2013), it is imprudent to conclude that heightened reward sensitivity is inherently disadvantageous, or that impulsivity is always problematic. In situations in which greater attentiveness to reward or more impetuous behavior is desirable, adolescents may enjoy a distinct advantage over adults. Indeed, one of the tenets of the dual systems model is that adolescence evolved as a period during which individuals are more likely to engage in sensation seeking and less likely to restrain urges to pursue immediate rewards because this combination may confer a reproductive advantage during a period of heightened fertility (Steinberg, 2014).
5.3. Neuroimaging of cognitive control
In recent years, developmental neuroimaging has helped elucidate the neural mechanisms underlying age-related improvements in cognitive control. Continuing maturation of response inhibition is often examined in terms of development of the prefrontal cortex, and particularly the lateral prefrontal cortex (lPFC). In line with the dual systems framework, we postulate that developmental improvements in cognitive control are supported by the concurrent maturation of these underlying neural regions and by enhancements in top-down connectivity between frontal cognitive control regions and other cortical and subcortical areas associated with motor processing, affective processing, and the execution of selected actions.
Irrespective of age, individuals who perform better on response inhibition tasks (i.e., Go/No-Go, Flanker, Stroop, Stop Signal, antisaccade) exhibit greater activation of the lPFC compared to those who perform poorly (Durston et al., 2006, Rubia et al., 2006, Rubia et al., 2013, Velanova et al., 2009). Across adolescence, performance on response inhibition tasks improves with age—a pattern that appears to be explained by continuing maturation of the lPFC, with most studies finding either a linear increase in lPFC recruitment with age (Adleman et al., 2002, Bunge et al., 2002, Durston et al., 2006, Marsh et al., 2006, Paulsen et al., 2015, Spielberg et al., 2015, Tamm et al., 2002, Velanova et al., 2009, Vink et al., 2014) or significantly increased engagement of the lPFC from adolescence to adulthood (Rubia et al., 2000, Rubia et al., 2006, Rubia et al., 2013, Veroude et al., 2013). Furthermore, several studies have demonstrated a direct relationship between age-related increases in lPFC engagement and successful cognitive control (Adleman et al., 2002, Andrews-Hanna et al., 2011, Bunge et al., 2002, Casey et al., 1997, Durston et al., 2006, Rubia et al., 2006, Rubia et al., 2007, Rubia et al., 2013, Velanova et al., 2009).
Whereas the behavioral and neuroimaging literatures generally indicate a relationship between increases in cognitive control and engagement of the lPFC from adolescence into adulthood, the relationship between age, behavior, and neural engagement from childhood to adolescence is not as consistent (Alahyane et al., 2014, Booth et al., 2003, Braet et al., 2009, Casey et al., 1997, Durston et al., 2002). In fact, some studies find that children utilize more frontal regions than adults—in terms of overall volume and/or magnitude of activity—in order to successfully withhold a prepotent action. These findings have led researchers to posit that increases in self-regulation from childhood to adolescence and into adulthood may be due to a developmental progression from diffuse to focal activation (Durston et al., 2002). In this account, during childhood and early adolescence, the brain is inefficient and needs to “work harder,” recruiting neurons across a larger frontal area in order to successfully inhibit a response (though see Poldrack, 2014 for a critique of the explanatory value of the term “efficiency” in this context). As the brain undergoes continued reorganization across adolescence, necessary neural connections are strengthened and unnecessary ones are pruned, creating a more efficient brain and leading to more focal recruitment of regions within the lPFC during successful inhibition.
Cognitive control encompasses the integration of several (often simultaneous) processes that support planning behavior in accord with one's intentions (Miller, 2000). The effective integration of these processes relies not only on the functional recruitment of the implicated brain regions, but also on the strength of connectivity among them (Hwang et al., 2010, van Belle et al., 2014). For example, a study by Hwang and colleagues (2010) examined developmental changes in the connectivity underlying inhibitory control and found that connectivity between the prefrontal cortex and other cortical areas increased from childhood into adolescence, with some connections continuing to strengthen from adolescence to adulthood. The increases they observed in the number and strength of frontal connections to both cortical and subcortical regions during the transition from adolescence into adulthood suggest that developmental improvements in cognitive control may be supported by age-related enhancements in the top-down regulation of task-engaged regions. Results such as these underscore the potential benefit of fMRI studies that move beyond simplistic models of regional activation toward more elaborate models of the connectivity among regions across development, including the strength and efficiency of those connections, which likely support age-related gains in the acquisition and execution of complex cognitive control skills (see, e.g., Satterthwaite et al., 2013). This is particularly true because, as noted earlier, there is reason to believe that continuing changes in connectivity account for the observation that some aspects of cognitive control continue to strengthen into early adulthood, rather than plateauing in adolescence.
6. Is risk taking during adolescence related to heightened reward sensitivity and immature cognitive control?
As reviewed above, research largely supports the dual systems model's characterization of adolescence as a time of heightened socioemotional reactivity (relative to earlier and later periods) and still maturing cognitive control. Moreover, there is considerable evidence consistent with the proposition that the developmental trajectories of reward sensitivity and cognitive control (and, by extension, sensation seeking and self-regulation) differ, with the former following an inverted U-shaped pattern and the latter evincing protracted, linear improvement that extends into the third decade of life.
How well does the literature support the claim that developmental change in these two systems explains heightened risk taking during adolescence? The model posits that it is the confluence of the developmental patterns of the socioemotional and cognitive control systems—relatively high responsiveness to reward combined with relatively weak self-regulation—that renders adolescents particularly vulnerable to risk taking. If the two systems contribute to risk taking in an additive manner, we should find independent correlations between the functional state of each system and risk-taking propensity. Indeed, there is evidence for this pattern in the literature.
In order to serve as a test of the dual systems model in predicting risk taking, a behavioral study must include measures of both socioemotional reactivity and cognitive control. Unfortunately, constructs reflecting the functional status of the socioemotional and cognitive control systems, like sensation seeking and impulsivity, tend to be highly correlated (e.g., Shulman and Cauffman, 2014, Steinberg et al., 2008), despite being theoretically and empirically separable (Duckworth and Kern, 2011, Duckworth and Steinberg, 2015). Thus, for studies that examine the relationship between only one of these constructs and risk taking, the correlation will be contaminated by the contribution of the unmeasured construct. Some of this overlap between sensation seeking and self-regulation may be artifactual—a result of the difficulty of developing measures that cleanly assess one construct and not the other. (For example, items like “I often get myself into trouble” could reflect either sensation seeking or impulsivity.) But some of the observed association between these constructs may be attributable to an ongoing dynamic interplay between the socioemotional and cognitive control systems; for example, when the socioemotional system generates an impulse to pursue an intrinsically rewarding experience and the cognitive control system counters with a signal meant to restrain the impulse. Although a large number of studies have examined risky behavior in relation to measures of either sensation seeking or impulse control, very few have examined the concurrent contributions to risk taking of psychological manifestations of socioemotional activation and cognitive control. Even fewer have examined this question in a sample spanning childhood, adolescence, and adulthood, which would be necessary to fully test whether variation in the functional status of these two systems explains age-related patterns in risk taking.
In the few studies that have simultaneously assessed constructs reflective of the socioemotional and cognitive control systems (e.g., sensation seeking and impulse control) along with measures of risk taking, the anticipated correlations are found. Both higher levels of sensation seeking and lower levels of impulse control explain variation in risk taking, over and above the effects of one another (Cyders et al., 2009, Donohew et al., 2000, Quinn and Harden, 2013). For example, one study of college students found that sensation seeking uniquely predicted increases in the frequency of alcohol use, over and above several measures of impulsivity (Cyders et al., 2009). Another found that both sensation seeking and “impulsive decision making” were independently associated with greater odds of ninth-graders engaging in sex, non-coital sexual behavior, alcohol use, and marijuana use (Donohew et al., 2000). Moreover, these associations were comparable in magnitude, except that impulsive decision-making was more strongly associated with having sex and sensation seeking was more strongly associated with marijuana use. Similarly, unpublished data from our lab—based on a sample of 283 10–30-year olds and using a scale that surveyed involvement in a wide range of risk-taking behaviors (see Shulman and Cauffman, 2014)—suggests that impulse control and sensation seeking contribute equally (betas = −.21 and .21, respectively) to self-reported engagement in risky behaviors (controlling for age, sex, and each other). An obvious shortcoming of these studies is that they rely exclusively on self-report. However, if common method variance alone were driving the findings, one would not expect to see independent relations between risk taking and either sensation seeking or impulsivity once the other predictor was controlled. 
Another limitation of these studies is that it is not yet clear how well self-report measures reflect the functional status of the socioemotional and cognitive control systems.
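The incremental-validity logic invoked above—each construct predicting risk taking “over and above” the other—is the logic of multiple regression with correlated predictors. The following sketch illustrates it with simulated data (the variable names and effect sizes are hypothetical, chosen only to mimic the qualitative pattern described, not taken from any of the cited studies):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulate two negatively correlated predictors, analogous to sensation
# seeking and impulse control, plus an outcome influenced by both.
shared = rng.normal(size=n)
sensation_seeking = shared + rng.normal(size=n)
impulse_control = -shared + rng.normal(size=n)
risk_taking = 0.4 * sensation_seeking - 0.4 * impulse_control + rng.normal(size=n)

def zscore(x):
    """Standardize a variable so regression weights are standardized betas."""
    return (x - x.mean()) / x.std()

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), zscore(sensation_seeking), zscore(impulse_control)])
betas, *_ = np.linalg.lstsq(X, zscore(risk_taking), rcond=None)

# Each beta reflects a predictor's unique contribution, holding the other
# constant: positive for sensation seeking, negative for impulse control,
# even though the two predictors are themselves correlated.
print(betas[1], betas[2])
```

Despite the overlap between the predictors, both betas remain reliably nonzero, which is the pattern the studies above report for sensation seeking and impulse control.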
Neuroimaging studies have the advantage over behavioral studies of being able to measure activation within distinguishable regions thought to correspond to the socioemotional and cognitive control systems (although heightened activity in these regions during a laboratory task does not constitute a direct measure of the functional status of these systems). A few studies have examined engagement of regions associated with the socioemotional and/or cognitive control systems during adolescent decision making (e.g., Cascio et al., 2015, Kahn et al., 2015, Paulsen et al., 2011, van Duijvenvoorde et al., 2015b). However, only Chein et al. (2011) found increased engagement of structures within the socioemotional system and decreased activation of structures within the cognitive control system simultaneously within a risk-taking task (a driving simulation). One additional study (Paulsen et al., 2011) found age effects in both the PFC and striatum during risk taking. During risky (i.e., varying expected values) compared to sure (i.e., guaranteed reward) decisions, several PFC regions showed increased activation with age, consistent with Chein et al. (2011). On the other hand, striatal activation was inconsistent, making these results difficult to interpret.
Another recent study examined the extent to which age differences in impatience during a temporal discounting task—in which participants choose between a smaller immediate reward and a larger delayed reward—are explained by variations in self-reported “present hedonism” (i.e., reward sensitivity) and “future orientation” (i.e., impulse control) and engagement of neural regions and networks during decision making (van den Bos et al., 2015). Though the decision making task did not involve risk, the study is nonetheless relevant to the dual systems model because it was designed to probe the degree to which adolescents’ tendency to discount the future is due to greater reward sensitivity or weaker self-control. The results indicated that choices to delay gratification in the decision task were associated with greater self-reported future orientation and increased engagement of frontoparietal control circuitry, but not with variation in self-reported hedonism. Also, improvements in frontostriatal connectivity mediated the link between age and willingness to wait for a larger reward. The authors interpreted these results as showing that weak cognitive control, rather than heightened reward sensitivity, explains adolescents’ tendency to discount the future. However, limitations of the methodology (e.g., limited range on the present hedonism scale, lumping immediate rewards together with rewards to be received in two weeks for analysis of the discounting data, and the unemotional context of the laboratory) may have biased the study against finding linkages between reward sensitivity and impatience (see Steinberg and Chein, 2015).
Although other studies have not demonstrated simultaneously heightened socioemotional activation and dampened cognitive control within the same task, a few recent ones have observed heightened striatal activation when adolescents receive a reward following a decision (Braams et al., 2014, Braams et al., 2015). In one further relevant study (Cascio et al., 2015), researchers recruited recently licensed drivers (∼age 16) to complete a response inhibition task and, one week later, a driving simulation in the presence of a peer confederate. The peer either encouraged risky driving or safe driving. In the latter condition (encouragement of safe driving), greater engagement of cognitive control circuitry (i.e., IFG and BG) during the response inhibition task (indicative of better cognitive control) predicted safer driving behavior in the simulated driving task. Participants who exhibited higher cognitive control also showed no increase in risky driving in the condition in which the peer encouraged risk taking, suggesting that individuals who evince greater engagement of cognitive control circuitry may be more resistant to socioemotional arousal. These findings indicate that poor cognitive control, as expected, also plays a role in risk-taking behavior. However, because the study did not compare age groups, it cannot address whether maturation of cognitive control helps to account for developmental patterns in risk taking.
In another recent study, van Duijvenvoorde and colleagues (2015b) had children, adolescents, and adults complete a risk-taking task (Columbia Card Task). While overall risk-taking tendency did not differ by age, adolescents showed greater activation of control circuitry (including the dmPFC) as the riskiness of the decision increased. This effect was not seen in children or adults. The authors suggest that heightened recruitment of control circuitry was necessary because of the heightened emotional response to risk at this age. Although there is good reason to believe that the functional status of both the socioemotional and cognitive control systems during adolescence contributes to heightened risk taking during this stage of development, the dual systems model still awaits a comprehensive study that confirms (or disconfirms) the purported joint effects of the developmental trajectories of the socioemotional and cognitive control systems on risk-taking behavior.
7. Unresolved questions and future directions
There are many unresolved issues in the literature that await further research attention. Here we highlight just a few of them. First, because of differences in opportunity to engage in risky behavior outside the laboratory environment, the effects of maturation of the socioemotional system and cognitive control system on real-world risk taking are likely to be modest and difficult to detect. Undoubtedly, contextual constraints on the behavior of adolescents relative to adults overwhelm any putative effect of these systems' maturation on actual risk taking. To take an obvious example, even if 15-year-olds are higher in sensation seeking and lower in self-regulation than people in their 20s, these differences will not be reflected in age differences in reckless driving in a country where 15-year-olds are not permitted to drive. Thus, tests of the dual systems model will require the continued development of laboratory tasks that are ecologically valid but that afford individuals of different ages equal opportunity to take risks. Future efforts to test the dual systems model's claims also would benefit from collaboration between behavioral researchers and neuroscientists to develop measures that more precisely reflect the functioning of the neural systems underlying the socioemotional and cognitive control systems.
Second, still unresolved is the question of why the socioemotional system declines in responsiveness between adolescence and adulthood. Luciana and Collins (2012) have speculated that experience with rewards and learning lead to lower background levels of dopamine, a proposition that has not yet been tested. Casey et al.’s (2008) model implies that decreases in risk taking after the maturation of the socioemotional system, which in their view is complete by mid-adolescence, are ultimately attributable to the continued strengthening of the cognitive control system. Given the evidence of reduced reward responsiveness in the key node of the socioemotional system in adulthood (relative to adolescence), it would seem that their version of the model suggests that strengthening of the cognitive control system prospectively dampens the reactivity of the socioemotional system. One study from our lab has tested this hypothesis: Shulman et al. (2014b) examined the effects of self-reported impulse control (a reflection of the cognitive control system) and sensation seeking (a reflection of the socioemotional system) on one another over time in a large sample of youth, ages 10–25, who were assessed biennially as part of the NLSY79 Children and Young Adults Study. The analysis failed to find evidence that increases in impulse control prospectively predict decreases in sensation seeking; in general, these two traits developed independently. Recently, a neuroimaging study using intrinsic connectivity found that increases in dlPFC-subcortical (thalamus and striatum when not controlling for age2) connectivity across adolescence were associated with increases in cognitive control but not with decreases in reward sensitivity (van Duijvenvoorde et al., 2015a). Instead, decreases in reward sensitivity were related to age-related decreases in connectivity within the socioemotional system (vmPFC, OFC, and striatum). 
Together these findings lend support to the notion that these traits develop independently. However, further investigation of this question is warranted; in particular, longitudinal studies employing more closely spaced measures and more sensitive assessments of the functional states of the socioemotional and cognitive control systems are needed.
A third issue concerns the operationalization of mature cognitive control. It is clear that in some respects, key nodes of the cognitive control system have reached adult levels of structural and functional maturation by mid-adolescence, a point made recently by Luna et al. (2014). Yet there are other signs that aspects of cognitive control are immature, unreliable, or easily disrupted during mid-adolescence relative to adulthood, which challenges Luna and Wright's (2016) notion that cognitive control is mature by mid-adolescence. Part of this inconsistency stems, we believe, from heterogeneous operationalizations of “mature” cognitive control. It is now eminently clear that whether activation of this system is weaker or stronger is not a useful way of conceptualizing the maturity of the cognitive control system, because on some tasks adolescents show greater or wider activation than adults, whereas on others the reverse is true. As we have argued (Strang et al., 2013), a more sensible index of maturation of cognitive control would focus on structural and functional connectivity within the cognitive control network and between this system and other brain regions (DeWitt et al., 2014, Jacobus et al., 2013). To pursue this idea further, research is needed that correlates psychological and behavioral measures of self-regulation with indices of structural and functional connectivity that involve the cognitive control system. Recent connectivity analyses have demonstrated that increases in control behaviors across adolescence are associated with an increase in connectivity between striatal and prefrontal regions (van den Bos et al., 2015, van Duijvenvoorde et al., 2015a, Vink et al., 2014). However, some studies have also found that same-aged individuals with better-developed connections between cortical and subcortical areas engage in relatively more risk taking (Berns et al., 2009, DeWitt et al., 2014).
This apparent inconsistency between studies of development and studies of individual differences among same-aged adolescents warrants further study.
Finally, it will almost certainly be necessary to consider the development of, and interactions among, brain systems that are not included in the dual systems model in order to account for the full array of evidence on the development of risky behavior across the period from preadolescence into adulthood. Such expansion of the model is consistent with the triadic model (Ernst, 2014) and with our recent efforts to consider how development in other pathways, including those linking the AIC to the cognitive control and reward systems (Smith et al., 2014b), may impact risk-taking behavior. According to our recent work, the transition between adolescence and adulthood may involve a shift in the ways in which the VS, PFC, and AIC are functionally connected, with the relatively stronger connections between the insula and striatum characteristic of adolescence giving way to stronger connections between the PFC and insula in adulthood.
8. Concluding comment
The dual systems model attributes elevated levels of risk taking in adolescence to the heightened arousal of the socioemotional system before the cognitive control system fully attains functional maturity. Moreover, the decrease in risky behavior between adolescence and adulthood is attributed to the continued strengthening of the cognitive control system and the attenuation of arousal within the socioemotional system. Whether and in what respects the contributions of these changes in the cognitive control and socioemotional systems are independent, interactive, or reciprocal—or a combination of all three—are important questions for future research.
It is important to note, however, that the ways in which these systems work together in motivating increases in risky behavior between childhood and adolescence are not necessarily the same as the ways in which they combine to create a decline in risky behavior between adolescence and adulthood. It is entirely possible, for example, that the increase in recklessness seen in early adolescence is due mainly to increases in reward sensitivity whereas the decrease in recklessness seen in young adulthood is driven mainly by improvements in cognitive control. It is also possible (albeit unlikely) that the initial increase and later decline in risk taking seen during the transition between childhood and adulthood is entirely explained by the rise and fall in socioemotional reactivity and not related to changes in cognitive control. While advances in neuroscience have permitted researchers to distinguish between these systems in studies of brain structure and function, these systems likely engage in ongoing interactions with one another, and it is therefore unwise to think about them as if they are independent entities.
As we have asserted throughout this review, the weight of the evidence amassed to date is consistent with the dual systems perspective. Although there are occasional exceptions to the general trends, self-report, behavioral, and neuroimaging studies generally support the model, finding that psychological and neural manifestations of reward sensitivity increase between childhood and adolescence, peak sometime during the late teen years, and decline thereafter, whereas psychological and neural reflections of better cognitive control increase gradually and linearly throughout adolescence and into the early 20s, and that the combination of amplified reward sensitivity and still-developing cognitive control makes middle and late adolescence a time of heightened predisposition to risky and reckless behavior. Whether this inclination translates into real-world risk-taking, however, is contingent on the context in which adolescent development occurs.
In our view, the published research that has appeared since the introduction of this viewpoint has strengthened, rather than called into question, the model's utility. Of course, there have been studies yielding results inconsistent with one or more aspects of the dual systems model. This is to be expected given the large number of relevant studies and wide variety of methodologies employed. Importantly, studies that have failed to support the dual systems model have not provided consistent evidence for an alternative developmental model. They do, however, serve as a reminder that there may be conditions under which the general finding of heightened reward sensitivity in adolescence or age-related increases in cognitive control may not apply. This highlights the fact that, as we have pointed out, the dual systems perspective is at times overly simplistic. As a heuristic device, however, the model provides a far better account of adolescent risk taking than prior models that have attributed this period of transient recklessness to adolescents’ cognitive deficiencies. It also continues to be generative, and has informed ongoing research in multiple fields, research that will almost certainly support continued refinement of the model (refinements already partially reflected in the multiple variations of the perspective advanced by different research groups).
Importantly, the dual systems model does not suggest that adolescents are universally risky or incompetent decision makers. On the contrary, the model recognizes that basic reasoning capacity is almost fully mature by mid-adolescence. Indeed, under conditions that minimize arousal of the socioemotional system and allow for deliberative, calculated decision making, adolescents tend to make decisions and judgments that are quite similar to those of adults (e.g., Chein et al., 2011, Figner et al., 2009, Van Leijenhorst et al., 2008). Instead, what the dual systems model suggests is that when decision making occurs under conditions that excite, or activate, the socioemotional system (e.g., when decisions are made in the presence of friends, under emotionally arousing circumstances, or when there is a potential to obtain an immediate reward) adolescents are more prone than other age groups to pursue exciting, novel, and risky courses of action. Far from being a biologically deterministic model, the dual systems perspective explicitly emphasizes the context in which decision making takes place.