July 8, 2021

High Dropout Rate in Six-Year Cohort Study of Medication Treatment for ADHD

Few studies have examined the safety and tolerability of ADHD medications (stimulants and atomoxetine) beyond six months, and none beyond a few years. Two Swedish researchers at Uppsala University Hospital set out to explore longer-term outcomes. They conducted a six-year prospective study of 112 adults diagnosed with ADHD who were being treated with ADHD medications (primarily methylphenidate (MPH), but also dexamphetamine and atomoxetine).


They found that at the end of that period, roughly half were still on medication and half had discontinued treatment. There were no significant differences between the two groups in age, sex, ADHD severity, or comorbidity. The average ADHD score for the entire cohort declined significantly, from a mean of 37 to a mean of 26, with less than one in a thousand odds of that being due to chance. There was also no sign of drug tolerance or of a need to increase dosage over time.
All 55 adults who discontinued treatment had taken MPH for at least part of the time. Eleven had also been treated with dexamphetamine (DEX) and 15 with atomoxetine (ATX). The average time on treatment was just under two years. Almost a third quit MPH because they perceived no beneficial effect. Since they were on average taking higher doses at discontinuation than at initiation, that is unlikely to have been due to suboptimal dosage. Almost another third discontinued because of various adverse mental effects, including hyperactivity, elation, depressive moods, aggression, insomnia, fatigue, and lethargy. Another one in eleven quit when they lost contact with the prescribing physician. In the case of ATX, almost half quit because of what they perceived as adverse mental effects.


Among the 57 adults who remained on medication, four out of five reported a strong beneficial effect. Only two reported minimal or no effect. Compared with the group that discontinued, the group that remained on medication was far more likely to agree with the statements, "My quality of life has improved," and "My level of functioning has improved." Yet, as the authors caution, it is possible "that the subjects' subjective ratings contained a placebo-related mechanism in those who are compliant with the medication and pursue treatment over time." The authors reported that there were no significant differences in ADHD scores or ADHD severity between the group that quit and the group that remained on medication, even though, on average, the group that quit had been off medication for four years at follow-up.


We cannot explain why the patients who quit treatment showed similar levels of ADHD symptoms to those who continued treatment. It is possible that some patients' symptoms remit over time and that they do not require sustained treatment. But we must keep in mind that there was a wide range of outcomes in both groups. Future work needs to find predictors of who will do well after treatment withdrawal and who will not.


Any decision on whether to maintain a course of medication should always weigh expected gains against adverse side effects. In the absence of hard evidence of continuing efficacy beyond two years, adverse events gain in relative importance. With that in mind, it is worth noting that many of those who remained on MPH in this study reported side effects. More than a quarter complained of decreased appetite, one in four of dry mouth, one in five of anxiety and increased heart rate, one in six of decreased sexual desire, one in nine of depressed mood, and one in eleven of insomnia.


This study breaks important ground in looking at the long-term effects of medication. It reaffirms findings elsewhere of the efficacy of ADHD medications. But contrary to the authors' conclusion, the data they present suggest that, beyond a certain point, keeping ADHD patients on medication indefinitely may be no more efficacious than discontinuation, especially when balanced against adverse side effects.
But this is just one study with a relatively small sample size. Additional studies with larger samples are needed to pursue these questions with greater statistical reliability.

Dan Edvinsson and Lisa Ekselius, "Long-Term Tolerability and Safety of Pharmacological Treatment of Adult Attention-Deficit/Hyperactivity Disorder," Journal of Clinical Psychopharmacology, vol. 38, no. 4 (2018).


Two New Meta-analyses Point to Benefits of Transcranial Direct Current Stimulation

Background: 

ADHD treatment includes medication, behavioral therapy, dietary changes, and special education. Stimulants are usually the first choice but may cause side effects like appetite loss and stomach discomfort, leading some to stop using them. Cognitive behavioral therapy (CBT) is effective but not always sufficient on its own. Research is increasingly exploring non-drug options, such as transcranial direct current stimulation (tDCS), which may boost medication effectiveness and improve results. 

What is tDCS?

tDCS delivers a weak electric current (1.0–2.0 mA) via scalp electrodes to modulate brain activity, with current flowing from anode to cathode. Anodal stimulation increases neuronal activity, while cathodal stimulation generally inhibits it, though effects vary by region and neural circuitry. The impact of tDCS depends on factors such as current intensity, duration, and electrode shape. It targets cortical areas, often stimulating the dorsolateral prefrontal cortex for ADHD due to its role in cognitive control. Stimulation of the inferior frontal gyrus has also been shown to improve response inhibition, making it another target for ADHD therapy. 

There is an ongoing debate about how effective tDCS is for individuals with ADHD. One study found that applying tDCS to the left dorsolateral prefrontal cortex can help reduce impulsivity symptoms in ADHD, whereas another study reported that several sessions of anodal tDCS did not lead to improvements in ADHD symptoms or cognitive abilities.

New Research:

Two recent meta-analyses sought to resolve these conflicting findings. Both included only randomized controlled trials (RCTs) that used either sham stimulation or a waitlist as controls.

Each team included seven studies in their respective meta-analyses, three of which appeared in both. 

Both Wang et al. (three RCTs totaling 97 participants) and Wen et al. (three RCTs combining 121 participants) reported very large effect size reductions in inattention symptoms from tDCS versus controls. There was only one RCT overlap between them. Wang et al. had moderate to high variation (heterogeneity) in individual study outcomes, whereas Wen et al. had virtually none. There was no indication of publication bias.
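For readers unfamiliar with how meta-analyses combine trials, the sketch below shows a standard random-effects pooling calculation (DerSimonian-Laird) together with the I² statistic used to quantify the heterogeneity mentioned above. The effect sizes and variances are invented for illustration; they are not the data from Wang et al. or Wen et al.

```python
import math

# Hypothetical per-study standardized mean differences (SMDs) and variances;
# illustrative numbers only, NOT data from either meta-analysis above.
effects = [-1.2, -0.9, -1.5]    # negative = fewer inattention symptoms with tDCS
variances = [0.10, 0.08, 0.12]

# Fixed-effect weights and pooled estimate
w = [1.0 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Cochran's Q and the I^2 heterogeneity statistic
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird estimate of between-study variance (tau^2)
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled effect, and 95% confidence interval
w_re = [1.0 / (v + tau2) for v in variances]
pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))
low, high = pooled - 1.96 * se, pooled + 1.96 * se

print(f"Pooled SMD = {pooled:.2f}, 95% CI ({low:.2f}, {high:.2f}), I^2 = {i_squared:.0f}%")
```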

Whereas Wen et al.’s same three RCTs found no significant reduction in hyperactivity/impulsivity symptoms, Wang et al. combined five RCTs with 221 total participants and reported a medium effect size reduction in impulsivity symptoms. This time, there was an overlap of two RCTs between the studies. Wen et al. had no heterogeneity, while Wang et al. had moderate heterogeneity. Neither showed signs of publication bias.  

Turning to performance-based tasks, Wang et al. reported a medium effect size improvement in attentional performance from tDCS over controls (three RCTs totaling 136 participants), but no improvement in inhibitory control (five RCTs combining 234 persons). 

Wang et al. found no significant difference in adverse events (four RCTs combining 161 participants) between tDCS and controls, with no heterogeneity. Wen et al. found no significant difference in dropout rates (4 RCTs totaling 143 individuals), again with no heterogeneity.  

Wang et al. concluded, “tDCS may improve impulsive symptoms and inattentive symptoms among ADHD patients without increasing adverse effects, which is critical for clinical practice, especially when considering noninvasive brain stimulation, where patient safety is a key concern.” 

Wen et al. further concluded, “Our study supported the use of tDCS for improving the self-reported symptoms of inattention and objective attentional performance in adults diagnosed with ADHD. However, the limited number of available trials hindered a robust investigation into the parameters required for establishing a standard protocol, such as the optimal location of electrode placement and treatment frequency in this setting. Further large-scale double-blind sham-controlled clinical trials that include assessments of self-reported symptoms and performance-based tasks both immediately after interventions and during follow-up periods, as well as comparisons of the efficacy of tDCS targeting different brain locations, are warranted to address these issues.” 

The Take-Away: 

Previous studies have shown mixed results on the benefits of this therapy for ADHD. These new findings suggest that tDCS may hold real promise for adults with ADHD. Both meta-analyses found reductions in inattention symptoms, the evidence on hyperactivity/impulsivity symptoms was mixed, and the technique was well tolerated. However, with only a handful of trials to draw from, it would be a mistake to recommend tDCS as a standard treatment. Larger, well-designed studies are the next essential step to clarify where, how, and how often tDCS works best.

Meta-analysis Reports Executive Function Gains from Exercise Interventions for ADHD

Background:

The development of ADHD is strongly associated with functional impairments in the prefrontal cortex, particularly the dorsolateral prefrontal cortex, which plays a key role in maintaining attention and controlling impulses. Moreover, imbalances in neurotransmitters like dopamine and norepinephrine are widely regarded as major neurobiological factors contributing to ADHD. 

Executive functions are a group of higher-order cognitive skills that guide thoughts and actions toward goals. “Executive function” refers to three main components: inhibitory control, working memory, and cognitive flexibility. Inhibitory control helps curb impulsive actions to stay on track. Working memory allows temporary storage and manipulation of information for complex tasks. Cognitive flexibility enables switching attention and strategies in varied or demanding situations. 

Research shows that about 89% of children with ADHD have specific executive function impairments. These difficulties in attention, self-control, and working memory often result in academic and social issues. Without timely intervention, these issues can lead to emotional disorders like depression, anxiety, and irritability, further affecting both physical health and social development. 

Currently, primary treatments for executive function deficits in school-aged children with ADHD include medication and behavioral or psychological therapies, such as Cognitive Behavioral Therapy (CBT). While stimulant medications do improve executive function, not all patients are able to tolerate these medications. Behavioral interventions like neurofeedback provide customized care but show variable effectiveness and require specialized resources, making them hard to sustain. Safer, more practical, and long-lasting treatment options are urgently needed. 

Exercise interventions are increasingly recognized as a safe, effective way to improve executive function in children with ADHD. However, systematic studies on school-aged children remain limited.  

Moreover, there are two main scoring methods for assessing executive function: positive scoring (higher values mean better performance, such as accuracy) and reverse scoring (lower values mean better performance, such as reaction time). These different methods can affect how results are interpreted and compared across studies. This meta-analysis explored how different measurement and scoring methods might influence results, addressing important gaps in the research. 
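To make the distinction concrete, the sketch below (in Python, with invented group means rather than data from any included trial) computes a standardized mean difference for a positively scored measure (accuracy) and a reverse-scored measure (reaction time). The same underlying improvement produces effect sizes of opposite sign, so reverse-scored outcomes must be direction-aligned before effects are pooled or compared.

```python
import math

def cohens_d(mean_tx, mean_ctl, sd_tx, sd_ctl, n_tx, n_ctl):
    """Standardized mean difference (treatment minus control) using a pooled SD."""
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctl - 1) * sd_ctl**2)
                          / (n_tx + n_ctl - 2))
    return (mean_tx - mean_ctl) / pooled_sd

# Hypothetical post-test scores for an exercise group vs. a control group.
# Accuracy is positively scored: higher = better executive function.
d_accuracy = cohens_d(82, 75, 10, 10, 30, 30)

# Reaction time is reverse scored: lower = better executive function.
d_reaction_time = cohens_d(520, 580, 90, 90, 30, 30)

# Both results reflect better performance in the exercise group, yet the raw
# effect sizes point in opposite directions; flipping the sign of the
# reverse-scored outcome puts both on a common "higher = better" scale.
d_reaction_time_aligned = -d_reaction_time

print(f"accuracy d = {d_accuracy:.2f}")                      # positive
print(f"reaction time d = {d_reaction_time:.2f}")            # negative
print(f"reaction time d, direction-aligned = {d_reaction_time_aligned:.2f}")
```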

The Study:

Only randomized controlled trials (RCTs) involving school-aged children (6–13 years old) diagnosed with ADHD by DSM-IV, DSM-5, ICD-10, ICD-11, or the SNAP-IV scale were included. Studies were excluded if the experimental group received non-exercise interventions or exercise combined with other interventions. 

Cognitive Flexibility 

Using positive scoring, exercise interventions were associated with a narrowly non-significant small effect size improvement relative to controls (eight RCTs, 268 children). Using reverse scoring, however, they were associated with a medium effect size improvement (eleven RCTs, 452 children). Variation (heterogeneity) in individual RCT outcomes was moderate, with no sign of publication bias in either instance.

Inhibitory Control 

Using positive scoring, exercise interventions were associated with a medium effect size improvement relative to controls (ten RCTs, 421 children). Using reverse scoring, there was an association with a medium effect size improvement (eight RCTs, 265 children). Heterogeneity was moderate with no sign of publication bias in either case. 

Working Memory 

Using positive scoring, exercise interventions were associated with a medium effect size improvement relative to controls (six RCTs, 321 children). Using reverse scoring, exercise was likewise associated with a medium effect size improvement (five RCTs, 143 children). Heterogeneity was low, with no indication of publication bias in either instance.

Conclusion:

The team concluded, “Exercise interventions can effectively improve inhibitory control and working memory in school-aged children with ADHD, regardless of whether positive or reverse scoring methods are applied. However, the effects of exercise on cognitive flexibility appear to be limited, with significant improvements observed only under reverse scoring. Moreover, the effects of exercise interventions on inhibitory control, working memory, and cognitive flexibility vary across different measurement paradigms and scoring methods, indicating the importance of considering these methodological differences when interpreting results.” 

Although this work is intriguing, it does not show that exercise significantly improves the symptoms of ADHD in children. This means that exercise, although beneficial for many reasons, should not be viewed as a replacement for evidence-based treatments for the disorder.

December 3, 2025

Here’s What the Wall Street Journal Got Wrong about the Medication Treatment of ADHD Patients: A Lesson in Science Media Literacy

A recent Wall Street Journal article raised alarms by concluding that many children who start medication for ADHD will later end up on several psychiatric drugs. It’s an emotional topic that will make many parents, teachers, and even doctors worry: “Are we putting kids on a conveyor belt of medications?”

The article seeks to shine a light on the use of more than one psychiatric medication for children with ADHD. My biggest worry about the article is that it presents itself as a scientific study because its authors analyzed a database. It is not a scientific study. It is a journalistic investigation that does not meet the standards of a scientific report.

The WSJ brings attention to several issues that parents and prescribers should think about. It documents that some kids with ADHD are on more than one psychiatric medication, and some are receiving drugs like antipsychotics, which have serious side effects. Is that appropriate? Access to good therapy, careful evaluation, and follow-up care can be lacking, especially for low-income families. Can that be improved? On that level, the article is doing something valuable: it's shining a spotlight on potential problems.

It is, of course, fine for a journalist to raise questions, but it is not OK for them to pretend that they’ve done a scientific investigation that proves anything. Journalism pretending to be science is both bad science and bad journalism.

Journalism vs. Science: Why Peer Review Matters

Journalists can get big datasets, hire data journalists, and present numbers that look scientific. But consider the differences between journalism and science. Journalistic articles are usually checked by editors and fact-checkers, whose main questions are:

Is this fact basically correct?

Are we being fair?

Are we avoiding legal problems?

But editors are not qualified to evaluate scientific data analysis methods.  Scientific reports are evaluated by experts who are not part of the project.  They ask tough questions like: 

Exactly how did you define ADHD? 

How did you handle missing data? 

Did you address confounding? 

Did you confuse correlation with causation?

If the authors of the study cannot address these and other technical issues, the paper is rejected.

The WSJ article has the veneer of science but lacks its methodology.  

Correlation vs. Causation: A Classic Trap

The article's storyline goes something like this: A kid starts ADHD medication. She then has additional problems or side effects, which are attributed to the ADHD medication. Because of that, the prescriber adds more drugs, and the patient ends up on several drugs at once. Although it is true that some ADHD youth are on multiple drugs, the WSJ is wrong to conclude that the medications for ADHD cause this to occur. That simply confuses correlation with causation, a mistake only the most naïve scientist would make.

In science, this problem is called confounding. It means other factors (like how severe or complex a child’s condition is) explain the results, not just the thing we’re focused on (medication for ADHD). 
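A toy simulation (in Python, with entirely made-up numbers) makes the point concrete: if severity independently raises both the chance of starting ADHD medication and the chance of later receiving additional psychiatric prescriptions, the two will be correlated in a prescription database even though one does not cause the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: how severe/complex a child's condition is (0 to 1).
severity = rng.uniform(0, 1, n)

# Severity raises the probability of BOTH starting ADHD medication and later
# receiving additional psychiatric prescriptions. By construction, the
# medication itself has NO causal effect on later prescriptions.
starts_adhd_med = rng.random(n) < 0.2 + 0.6 * severity
later_polypharmacy = rng.random(n) < 0.05 + 0.4 * severity

rate_medicated = later_polypharmacy[starts_adhd_med].mean()
rate_unmedicated = later_polypharmacy[~starts_adhd_med].mean()

# In the raw data, medicated children look far more likely to end up on
# multiple drugs -- a spurious association driven entirely by severity.
print(f"later polypharmacy | started ADHD med: {rate_medicated:.1%}")
print(f"later polypharmacy | no ADHD med:      {rate_unmedicated:.1%}")
```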

The WSJ analyzed a database of prescriptions. They did not survey the prescribers who wrote those prescriptions or the patients who received them. So they cannot conclude that ADHD medication caused the later prescriptions, or that the later medications were unnecessary or inappropriate.

Other explanations are very likely. It has been well documented that youth with ADHD are at high risk for developing other disorders such as anxiety, depression, and substance use. The kids in the WSJ database might have developed these disorders and needed several medications. A peer-reviewed article in a scientific journal would be expected to adjust for other diagnoses. If that is not possible, as is the case with the WSJ's database, a journal would not allow the authors to draw strong conclusions about cause and effect.

Powerful Stories Don’t Always Mean Typical Stories

The article includes emotional accounts of children who seemed harmed by being put on multiple psychiatric drugs.  Strong, emotional stories can make rare events feel common.  They also frighten parents and patients, which might lead some to decline appropriate care. 

These stories matter. They remind us that each data point is a real person. But these stories are the weakest form of data. They can raise important questions and lead scientists to design definitive studies, but we cannot use them to draw conclusions about the experiences of other patients. These stories serve as a warning about the importance of finding a qualified provider, not as an argument against the use of multiple medications. That decision should be made by the parent or adult patient based on an informed discussion with the prescriber.

Many children and adults with ADHD benefit from multiple medications. The WSJ does not tell those stories, which creates an unbalanced and misleading presentation.  

Newspapers frequently publish stories that send the message:  “Beware!  Doctors are practicing medicine in a way that will harm you and your family.”   They then use case studies to prove their point.  The title of the article is, itself, emotional clickbait designed to get more readers and advertising revenue.  Don’t be confused by such journalistic trickery.

What Should We Conclude?

Here’s a balanced way to read the article.  It is true that some patients are prescribed more than one medication for mental health problems.  But the article does not tell us whether this prescribing practice is or is not warranted for most patients.  I agree that the use of antipsychotic medications needs careful justification and close monitoring.  I also agree that patients on multiple medications should be monitored closely to see if some of the medications can be eliminated.  Many prescribers do exactly that, but the WSJ did not tell their stories.  

It is not appropriate to conclude that ADHD medications typically cause combined pharmacotherapy or to suggest that combined pharmacotherapy is usually bad. The data presented by the WSJ does not adequately address these concerns.  It does not prove that medications for ADHD cause dangerous medication cascades.

We have to remember that even when a journalist analyzes data, that is not the same as a peer-reviewed scientific study. Journalism pretending to be science is both bad science and bad journalism.