After hours: If your issue is urgent or life threatening, please call 988 or 911, or go to the nearest ER. Otherwise, for medication side effects, please text 816-766-0119. For all other non-urgent issues, please contact us via the Practice Q messaging portal or call us during clinic hours at 888-855-0947.




- Vitamin D Especially Important for Brain Health in Women, but Not in Men?
LOS ANGELES — Vitamin D is important for brain health, but this may be particularly true for women; it does not appear to have the same beneficial effect in men, early research suggested. The large study showed an association between greater plasma vitamin D levels in females and better memory and larger subcortical brain structures.

“We found that vitamin D for women was correlated with better cognitive outcomes, but we need to do more research to find out what role vitamin D actually plays at a mechanistic level,” study investigator Meghan Reddy, MD, a psychiatry resident at the UCLA Semel Institute for Neuroscience and Human Behavior, Los Angeles, told Medscape Medical News. The findings were presented here at the American Psychiatric Association (APA) 2025 Annual Meeting.

Protective Effects

This latest study adds to the growing body of research on vitamin D and brain health. Previous studies have shown that vitamin D may influence cognition and brain function in older adults, potentially through its anti-inflammatory, antioxidant, and neuroprotective effects. Research also suggests it may promote brain health by increasing neurotrophic factors and aiding in the clearance of amyloid from the brain.

Recent findings published in the American Journal of Clinical Nutrition suggested that vitamin D may also affect biological aging by preserving telomeres — the protective caps at the ends of chromosomes that shorten with age. Other research has also shown telomere length may help protect against brain diseases, including a study previously reported by Medscape Medical News, which linked longer leukocyte telomere length to a lower risk for stroke, dementia, and late-life depression.

In the current study, Reddy and colleagues used data from the multisite Human Connectome Project to track individuals over time to understand age-related changes in brain structure, function, and connectivity.
They are investigating various biomarkers that might correlate with aging, including hemoglobin, creatinine, glycated hemoglobin (for blood glucose levels), high-density lipoprotein, and low-density lipoprotein, in addition to vitamin D. The idea, said Reddy, is to track cognitive health using biomarkers in addition to brain imaging and cognitive testing.

The study included 1132 individuals, 57% of whom were women and 66% of whom were White. The average age was approximately 62 years, with participants ranging from 36 to 102 years old. Participants underwent neuropsychological testing to assess short-term memory and fluid intelligence — the capacity to reason and solve problems, which is closely linked to comprehension and learning. They also provided blood samples and underwent MRI scans. Researchers divided participants into two age groups: those younger than 65 years and those 65 years or older.

The investigators found a significant association between vitamin D levels and memory in women (P = .04).

Sex Differences

“What’s interesting is that when we looked specifically at memory, higher vitamin D levels were linked to better memory performance — but only in women, not men,” said Reddy, adding that she found this somewhat surprising.

In women, investigators found an association between vitamin D levels and the volume of the putamen (P = .05) and pallidum (P = .08), with a near-significant trend for the thalamus. In contrast, in men, higher vitamin D levels were associated with smaller volumes of the thalamus, putamen, and pallidum. There were no differences in the impact of vitamin D by age group.

Sex differences in the relationship between vitamin D, cognition, and brain volume warrant further investigation, Reddy said. She also noted that the study is correlational, examining memory, brain volume, and vitamin D levels at a single timepoint, and therefore it can only generate hypotheses.
Future studies will include multiple time points to explore these relationships over time. The results did not identify an ideal plasma vitamin D level to promote brain health in women.

Commenting on the research for Medscape Medical News, Badr Ratnakaran, MD, a geriatric psychiatrist in Roanoke, Virginia, and chair of the APA’s Council on Geriatric Psychiatry, said the finding that women may get more brain benefits from vitamin D than men is “key” because dementia is more prevalent among women, since they tend to live longer. Other research has shown vitamin D may help manage depression in older women, which makes some sense as dementia and depression “go hand in hand,” he said.

Ratnakaran recommended that women take a vitamin D supplement only if they’re deficient, as too much vitamin D can lead to kidney stones and other adverse effects.

Note: This article originally appeared on Medscape.
- Depression Linked to 14% Increased Risk for Heart Failure
TOPLINE:

A history of depression was associated with a 14% higher risk for incident heart failure (HF) than no history of depression in a new study, even after adjusting for known HF risk factors and sociodemographic data.

METHODOLOGY:

- This cohort study analyzed data of more than 2.8 million US veterans (median age, 54 years; 94% men; 69.5% White individuals; and 20% Black individuals) of the Veterans Affairs Birth Cohort between 2000 and 2015, with a median follow-up of 6.9 years.
- The included participants were free of HF at baseline and had three outpatient visits within 5 years.
- The time to incident HF was compared among participants with prevalent depression at baseline (8%) and those without depression at and after baseline.
- The analysis was adjusted for sociodemographic covariates, such as age, sex, race, and ethnicity, and clinical comorbidities and HF risk factors, such as diabetes, cholesterol, coronary artery disease, stroke, and atrial fibrillation.

TAKEAWAY:

- Participants with depression had a higher rate of incident HF than those without depression (136.9 vs 114.6 cases per 10,000 person-years).
- After adjusting for covariates and cardiovascular risk factors, participants with depression had a 14% increased risk for incident HF compared with those without depression (adjusted hazard ratio [HR], 1.14; 95% CI, 1.13-1.16).
- Analysis of a low-risk cohort without comorbidities at baseline revealed that depression was associated with a 58% higher risk for incident HF (adjusted HR, 1.58; 95% CI, 1.39-1.80) after adjustment.
- Among participants with prevalent depression, men had a greater risk for incident HF than women (adjusted HR, 1.70; 95% CI, 1.60-1.80).

IN PRACTICE:

“Depression is a leading cause of disability around the world, affecting 4.4% of the world’s population (322 million people), and this rate continues to increase. Thus, depression remains a widely prevalent disease and a risk factor for HF that may be modifiable,” the investigators wrote.
SOURCE:

The study was led by Jamie L. Pfaff, MD, Vanderbilt University Medical Center in Nashville, Tennessee. It was published online on May 8 in JAMA Network Open.

LIMITATIONS:

This retrospective study relied on older electronic health record data and billing codes through 2015, which may have led to misclassification bias. It lacked detailed information on depression treatment and socioeconomic risk factors, and it did not compare depression with other mental health conditions linked to cardiovascular risk.

DISCLOSURES:

The study was funded by the National Institutes of Health. One investigator reported receiving grants from the National Institutes of Health during the conduct of the study.

Note: This article originally appeared on Medscape.
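For readers who want to see how the figures in the summary fit together, here is a minimal sketch of the arithmetic behind incidence rates per 10,000 person-years. The person-year and event totals below are hypothetical, chosen only so the rates match the reported 136.9 and 114.6; the adjusted HR of 1.14 comes from the study and differs from this crude ratio because the hazard ratio accounts for covariates and time to event.

```python
# Illustration only: hypothetical counts chosen to reproduce the reported rates.

def incidence_rate_per_10k(events: int, person_years: float) -> float:
    """Incidence rate expressed per 10,000 person-years."""
    return events / person_years * 10_000

# Hypothetical: 1369 HF events over 100,000 person-years in the depression group,
# 1146 events over 100,000 person-years in the comparison group.
rate_depressed = incidence_rate_per_10k(1369, 100_000)      # 136.9 per 10,000 PY
rate_not_depressed = incidence_rate_per_10k(1146, 100_000)  # 114.6 per 10,000 PY

# Crude (unadjusted) rate ratio; note this is not the adjusted HR of 1.14.
rate_ratio = rate_depressed / rate_not_depressed
print(round(rate_ratio, 2))  # 1.19
```

The gap between this crude ratio (~1.19) and the adjusted HR (1.14) is exactly what covariate adjustment is for: part of the raw difference is explained by age, comorbidities, and other risk factors.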
- Mindfulness Works For Depression When Other Talk Therapies Fail
Mindfulness-based cognitive therapy (MBCT) is a cost-effective treatment option for adults with major depressive disorder (MDD) who have not achieved remission through standard psychological therapies, a new study showed. Compared with treatment as usual (TAU), the addition of MBCT achieved a greater reduction in depressive symptoms, with benefits sustained up to 6 months later. Improvements in participants’ work and social functioning were also observed.

“We know there’s a gap in services for people with depression who haven’t got better through National Health Service [NHS] Talking Therapies,” study coauthor Barney Dunn, PhD, from University of Exeter, Exeter, England, said in a statement. “These people often don’t qualify for further specialist mental health care, and so, are left with no further options. We’ve shown that offering MBCT to this group can be effective and cost-efficient to deliver, and we hope this will lead to it being implemented widely,” said Dunn.

The study was published online on May 14 in Lancet Psychiatry.

Nonremission Common

It’s estimated that about half of individuals with MDD fail to achieve remission after psychological therapy within the UK NHS Talking Therapies program. MBCT offers a promising nondrug option, leveraging mindfulness training to counter habitual maladaptive thinking. However, no large trials had conclusively tested its efficacy and cost-effectiveness in psychological therapy nonremitters until now.

In a randomized controlled superiority trial conducted across 20 NHS services in the United Kingdom, 234 adults (mean age, 42 years; 71% women) who did not reach remission after 12 or more sessions of high-intensity therapy were assigned to MBCT plus TAU or TAU alone. MBCT consisted of eight weekly group-based sessions, delivered by videoconference, that taught mindfulness skills and how to respond more effectively to difficult emotions.
Six months after treatment, patients in the MBCT group had significantly lower levels of depression symptoms than peers in the TAU-only group, with an adjusted between-group difference on the Patient Health Questionnaire-9 of 2.49 points (P = .0006). The average effect of MBCT was in the small to moderate range, similar to other trials of psychological treatment for MDD and comparable to treatment with antidepressants. Effects were maintained up to 6 months after treatment ended, the study team said. MBCT plus TAU was also superior to TAU alone in reducing symptoms of generalized anxiety and increasing mental well-being more broadly.

Based on economic analyses, MBCT had an estimated 99% chance of being cost-effective at a threshold of £20,000 per quality-adjusted life year gained and a 91% probability of being less costly and more effective than TAU alone, the study team found. No serious trial-related adverse events were observed.

This study adds “robust” evidence to existing research and brings the combined evidence to a level at which MBCT “should be considered for guideline endorsement as a further-line treatment in the UK,” the investigators concluded.

Conclusive Evidence, With Caveats

In a statement from the nonprofit UK Science Media Centre, Jesús Montero-Marín, PhD, Department of Psychiatry, University of Oxford, Oxford, England, said this study is a “major advance in the treatment of resistant depression.” “This work provides conclusive evidence that MBCT can be an effective and cost-effective second-line treatment option in structured clinical settings. Its implementation could lead to a substantial improvement in the continuity of care for cases of difficult-to-treat depression,” said Montero-Marín.

Also weighing in, Elena Makovac, PhD, senior lecturer in clinical psychology, Brunel University of London, London, England, pointed out a key limitation of the study.
“By comparing MBCT plus treatment as usual with the treatment-as-usual group, we cannot definitively determine whether the observed improvements were specifically due to the MBCT or if they resulted from the fact that the MBCT group received more treatment overall compared to the control group. This improvement could potentially have been achieved with an extension of the originally delivered Talking Therapies,” Makovac said in the statement.

“While research into additional treatments for difficult-to-treat depression is essential, it is even more important to offer interventions grounded in well-understood mechanisms. This process begins with a crucial first step: Answering the question of why some patients do not respond to talking therapies,” Makovac added.

Note: This article originally appeared on Medscape.
- Stimulant Medications Don’t Cause Psychosis, New Study Finds
Prescription stimulants taken during childhood for attention-deficit/hyperactivity disorder (ADHD) do not cause psychosis, according to a new study published on Monday in Pediatrics, contradicting what some observational studies had suggested. Instead, the new study found children with more severe attention and hyperactivity issues or other mental health conditions such as anxiety were more likely to be medicated for ADHD. These children were also more likely to experience psychotic episodes, suggesting stimulants such as Adderall were not the cause.

“These results provide reassurance both to families and to prescribers that routine ADHD medication treatment is unlikely to cause psychotic experiences,” said Ian Kelleher, PhD, chair of child and adolescent psychiatry in the Centre for Clinical Brain Sciences at The University of Edinburgh in Edinburgh, Scotland, who led the trial.

Kelleher and his colleagues used data from the Adolescent Brain Cognitive Development Study, a longitudinal study tracking brain development and child health in the United States. The study included more than 8300 children and teens who were between 9 and 14 years of age from 2016 to 2020. At the beginning of the study, none of the children were taking a prescription stimulant. In the first year, 460 kids were prescribed a drug for ADHD, which included methylphenidate, dexmethylphenidate, amphetamine, dextroamphetamine, and lisdexamfetamine. The researchers did not have information on medication dose.

Researchers then compared rates of psychosis among these children with approximately 7900 children who were not on one of these medications. The researchers analyzed self-reported questionnaires from each child, filled out at baseline and again 1 year into the study. The screener for psychosis risk asked 21 questions about whether children had experienced hallucinations or delusions, and if so, how distressed they were by these experiences.
Among the population taking stimulants, the drugs were not associated with psychosis after adjusting for confounding factors that can predispose a person to psychosis, including mental illness, parental income, and race (odds ratio [OR], 1.09; 95% CI, 0.71-1.56). In the unweighted analysis, children prescribed ADHD medications were about 1.5 times more likely to have had a psychotic episode than those not taking a stimulant (OR, 1.46; 95% CI, 1.15-1.84).

Those who had more severe ADHD symptoms, like hyperactivity or impulsiveness, and those who had other, co-occurring mental health symptoms like anxiety and depression, were most likely to report psychotic episodes. This group, as well as boys generally, were also more likely to have been prescribed stimulants.

“It’s important to recognize that any difference in risk may not be due to stimulant treatment,” Kelleher said. “If you take children with ADHD and you divide them into two groups, kids treated with medication and kids not treated with medication, those two groups are not the same.”

Kelleher said previous research linking stimulants with episodes of psychosis had not done a thorough job of factoring in a person’s mental health or severity of ADHD. One observational study published in 2024 suggested higher doses of prescription amphetamines were associated with more than a fivefold increase in the risk of developing psychosis. A 2023 meta-analysis also cited studies suggesting taking higher doses of stimulants than typically prescribed for ADHD could cause psychosis. An observational study published in 2024, in the Journal of the American Academy of Child & Adolescent Psychiatry, concluded that while risk for psychosis was low, taking amphetamines and atomoxetine for longer periods of time may increase a person’s risk for psychosis. However, the authors of the meta-analysis did note that clinicians may have misidentified a child’s symptoms as ADHD instead of signs of psychosis.
In addition to factors like race and age, researchers should consider that hallucinations and delusions are “quite common” during childhood, said Melissa Batt, MD, MPH, an assistant professor of psychiatry at the University of Colorado Anschutz Medical Campus, Aurora, Colorado, who was not involved with the trial. They should take this knowledge into account when looking into potential causes of psychotic episodes in kids. “They are usually fleeting and usually go away,” Batt said, adding that only a small number of young people who report having a psychotic episode are eventually diagnosed with a psychotic disorder. “Upwards of 90% do not go on to have a diagnosis,” she said.

Batt said one limitation of the study was the 9- to 14-year-old age range. “They are missing a pretty critical range of folks, especially older teens and people in their 20s. Those are the ages that we see who develop psychosis or mania,” she said.

Batt agreed observational studies left out a lot of factors that could connect psychotic episodes to influences other than stimulant medications. Family history, she said, is a huge influence in whether a person develops a psychotic disorder. Future trials should build on these new findings and take different patient characteristics into account, she said. “We should be looking at family history, other medications they are taking, they could be using other substances such as cannabis, which they didn’t control for,” she said. “That is a huge variable we should be looking at.”

Note: This article originally appeared on Medscape.
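For readers unfamiliar with the odds ratios quoted in this article, an unadjusted OR is just the cross-ratio of a 2x2 exposure-outcome table. The counts below are hypothetical and for illustration only; the study's actual unadjusted OR of 1.46 and adjusted OR of 1.09 came from its own weighting and confounder adjustment, not from these numbers.

```python
# Illustration only: hypothetical 2x2 counts, not the study's data.

def odds_ratio(exposed_cases: int, exposed_noncases: int,
               unexposed_cases: int, unexposed_noncases: int) -> float:
    """Unadjusted odds ratio: (a/b) / (c/d) for a 2x2 exposure-outcome table."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Hypothetical: 60 of 460 medicated children report psychotic experiences,
# vs 700 of 7900 unmedicated children.
or_unadjusted = odds_ratio(60, 400, 700, 7200)
print(round(or_unadjusted, 2))  # 1.54
```

The gap between an unadjusted OR like this and the near-null adjusted OR (1.09) is the study's central point: once symptom severity and other confounders are accounted for, the apparent association largely disappears.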
- The Impact of Sleep Medications on Psychiatric Disorders
Psychiatrists and sleep physicians may be using the same medications to treat certain issues... so why not work together? Sam A. Kashani, a sleep physician, discusses the impact of sleep medication on psychiatric disorders and how the treatment team can best work together to improve outcomes.

As discussed in a previous video, collaboration between psychiatrists and sleep physicians helps improve "the whole picture" for the patient, especially since psychiatric and sleep clinicians have a lot of overlap in terms of prescribing pharmacologic treatments. "Collaboration is everything, especially when it comes to prescribing medications," shared Kashani.

According to Kashani, narcolepsy is often underdiagnosed, as there is often a long delay from the onset of symptoms to the time of diagnosis. It is also mistaken for major depressive disorder or attention-deficit/hyperactivity disorder (ADHD). It is therefore important for clinicians to tease out the disorders they are dealing with, which can be difficult. "It's very common to have narcolepsy with comorbid ADHD or with comorbid mood disorders or anxiety," said Kashani. "Because there's so much overlap in the medications that are used for these specific entities, I just feel that collaboration is the first and foremost thing."

Note: This article originally appeared on Psychiatric Times.
- Sleep Disorders in Children vs Adults: What's the Difference?
Key Takeaways

- Pediatric sleep disordered breathing often resolves with tonsil and adenoid removal, unlike adult cases.
- Insomnia in children is managed with behavioral interventions, while adults may require medication.
- Treatments for hypersomnias like narcolepsy are similar across adult and pediatric patients.
- Treatment approaches for sleep disorders are case-dependent, highlighting the need for individualized care.

What's the difference between sleep disorders in adult patients vs pediatric patients? Quite a lot, shares Sam A. Kashani, a sleep physician and expert. He describes a number of specific points of difference, including:

- Sleep disordered breathing, such as sleep apnea, is treated differently in pediatric patients. If pediatric patients still have their tonsils and adenoids, removing them can resolve sleep disordered breathing.
- Pediatric patients are typically not prescribed medication for insomnia. Evidence has demonstrated that behavioral measures and treatments are sufficient and effective. Adults may be less inclined to want to try behavioral treatments.

However, notes Kashani, treatment for hypersomnias like narcolepsy is relatively the same between adult and pediatric patients. "It really just depends on the case," said Kashani.

Note: This article originally appeared on Psychiatric Times.
- Distinguishing Borderline From Bipolar Spectrum Disorders
Key points:

- Accurate diagnosis of borderline and bipolar conditions is essential.
- Borderline personality disorder is marked by primitive defenses and interpersonal hypersensitivity.
- Mood shifts in borderline personality are reactive; in bipolar disorder, they’re cyclical.

How do we distinguish between bipolar mood disorder, cyclothymic temperament, and borderline personality disorder? This may be among the most important questions in clinical psychiatry and psychotherapy. The ability to accurately distinguish between these conditions frequently leads to treatment success; misdiagnosis can result in major failures. For instance, bipolar disorder is often well-treated with medication. Lithium therapy can lead to remission of symptoms and prevention of future episodes in appropriately selected patients (Ghaemi, 2024). In contrast, medication does not treat the core symptoms of borderline personality disorder. Misdiagnosis of borderline personality disorder as a mood disorder often results in multiple failed medication trials and prolonged suffering and impairment.

Much diagnostic confusion results from significant symptomatic overlap, particularly in the area of mood symptomatology. Below are two fundamental features of borderline personality disorder that are not present in bipolar spectrum disorders. I teach psychiatry residents to use these pointers to help differentiate the two forms of psychopathology. They are:

1. Borderline personality disorder is marked by a reliance on primitive defense mechanisms: splitting, projection, and projective identification (Kernberg, 1975). These defenses pervade the lives of borderline patients. For instance, splitting refers to the tendency to see self and others as being either "all good" or "all bad," with an inability to see things in "shades of gray."
This symptom is captured by DSM-5 criteria for borderline personality disorder ("…alternating between extremes of idealization and devaluation") (American Psychiatric Association, 2013). We do not see this as a prominent symptom in bipolar disorders.

2. Borderline personality disorder is marked by the patient's fundamental interpersonal hypersensitivity (Gunderson & Lyons-Ruth, 2008). Symptoms of borderline personality disorder are mediated by the patient's subjective experience of the major object (sometimes referred to as the patient's "favorite person"). This was Gunderson's (1984) seminal contribution to understanding borderline psychopathology. He wrote, for instance, that "these characteristic disturbances in interpersonal relations uniformly provide the most distinguishing aspect of the borderline syndrome vis-à-vis a variety of other diagnoses" (Gunderson, 1984).

Contrary to Linehan's (1993) claim that emotional dysregulation represents the "core" of borderline personality disorder, mood problems in borderline patients are actually secondary to interpersonal hypersensitivity, which is the "engine" that drives all borderline symptoms. Patients with borderline personality disorder are prone to abandonment depression, which Masterson (2000) saw as the primary affective state in the borderline personality. For instance, a person with borderline personality disorder may experience feelings of depression, emptiness, and suicidal despair in response to a minor or imagined distancing by their romantic partner. These symptoms can easily be misdiagnosed as a major mood disorder.

Mood symptoms are actually a very poor discriminating feature, and this is the source of much confusion, misdiagnosis, and mistreatment (Ghaemi, 2014). Both borderline personality disorder and bipolar spectrum disorders can present with intense affective shifts, but in borderline patients, these shifts are reactive to interpersonal events rather than driven by endogenous mood cycling.
In addition, individuals with cyclothymic temperament exhibit chronic, low-grade mood instability that fluctuates independently of relational dynamics and lacks the identity diffusion, primitive defenses, and object-related dysregulation characteristic of borderline personality disorder. The differential diagnosis of these problems becomes much easier if one keeps these two points in mind.
- No, Your Patients Are Not Wrong: Sometimes Antidepressant Side Effects Do Not Get Better
Key Takeaways

- Antidepressant side effects often improve over time, but some patients experience worsening, leading to dropout.
- STAR*D trial data showed patients who dropped out reported more severe and worsening side effects than completers.
- Clinicians should consider alternative treatments for patients with severe side effects to prevent dropout.
- Research is ongoing to identify specific side effects linked to dropout and develop a tool to assess dropout risk.

CLINICAL REFLECTIONS

When a patient starting antidepressants for major depressive disorder voices their concerns about potential side effects, it is common for clinicians to offer patients the same reassurance that many major health agencies have advised: Stick with the medications, and your side effects should improve with time.

For example, the National Institutes of Health (NIH)’s public-facing webpage on mental health medications reads: “The side effects [of antidepressants] are generally mild and tend to go away with time.” Likewise, the Centers for Disease Control and Prevention (CDC) publicly states, “Side effects usually do not get in the way of daily life‚ and they often go away as your body adjusts to the medication.” This perspective that antidepressant side effects will eventually go away is not exclusive to the United States: The United Kingdom’s National Health Service (NHS) states on their public-facing antidepressants overview page that “the most common side effects of antidepressants are usually mild. Side effects should improve within a few days or weeks of treatment as the body gets used to the medicine.”

Nevertheless, many psychiatrists and other mental health clinicians have encountered patients who report the opposite experience. Although many patients experience an improvement in side effects with time, not everyone’s side effects improve. In fact, it is not uncommon to encounter patients who report worsening side effects to the point where some decide to quit treatment.
Indeed, the No. 1 self-reported reason for why patients prematurely discontinue antidepressant pharmacotherapy is side effects. One question then arises: Why does such a dichotomy exist between the clinical consensus (as publicly stated by the NIH, CDC, and NHS) that side effects improve with time and the anecdotal experiences of patients who report that their side effects do not go away or, in some cases, even worsen?

We noticed that past research examining antidepressant side effects often failed to account for 1 important confounder: dropout. That is, many studies on antidepressant side effects focused on individuals who completed treatment while neglecting perhaps the most interesting group of patients: those who may have dropped out of antidepressant treatment prematurely due to side effects.

We conducted a secondary analysis of side effects data from patients in the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial, the largest antidepressant trial ever conducted. In the first treatment step of the STAR*D trial, all patients received citalopram for an intended 12 weeks per protocol. During these 12 weeks, patients reported their side effect frequency, intensity, and burden on the Frequency, Intensity, and Burden of Side Effects Rating (FIBSER) scale at weeks 2, 4, 6, 9, and 12. Additionally, patients reported which side effects they experienced in 9 organ/function systems on the Patient-Rated Inventory of Side Effects (PRISE) scale.

We wanted to examine how side effect frequency, intensity, and burden on the FIBSER scale changed over the course of citalopram treatment. What we were most interested in was how side effect complaints of patients who dropped out early in treatment differed from those who completed treatment.
To answer this question, we used pattern-mixture modeling to model side effect complaints at each time point (weeks 2, 4, 6, 9, and 12) for each potential treatment attrition pattern (dropout at week 2, 4, 6, or 9 or full 12-week treatment completion) while controlling for changes in depressive severity over the course of treatment.

What we found does not disagree with the NIH/CDC/NHS consensus that side effects improve over time: Indeed, when examining only data from those who completed the full 12-week treatment, these patients reported decreases in side effect frequency, intensity, and burden over the course of treatment. Yet our findings also validated the experience of those patients who report that their side effects never improve. Specifically, when examining data from patients who dropped out of the trial early, a different pattern of side effects emerged: Patients who dropped out after weeks 2, 4, and 6 reported significantly more severe initial side effect complaints than those who completed treatment. And perhaps even more importantly, patients who dropped out after weeks 4 and 6 further showed a worsening of side effects over the course of treatment.

Taken together, what we see in the STAR*D data are several distinct patterns in how patients experience antidepressant side effects. On the one hand, there are many patients—namely, those who complete treatment—who are able to tolerate the side effects of antidepressants. These patients not only report lower severity of side effects after antidepressant initiation but also an improvement of side effects over time. It is likely that these are the patients whom the NIH/CDC/NHS consensus guidelines refer to when they offer the reassurance that side effects will decrease over time. On the other hand, there is a nonnegligible population of patients with a much lower tolerance for side effects.
These patients not only report more severe side effects immediately after antidepressant initiation, but many also report experiencing a worsening of side effects over the course of treatment—up to and until the point they drop out. These are the patients whom we and our colleagues see in clinical practice every day: those who attempt to persist with their prescribed antidepressant but ultimately drop out due to the intolerability of their side effect symptoms.

What is especially surprising is that this second group of patients—those with intolerance for side effects and for whom side effects do not improve—has gone previously unnoticed in the research literature. Our analysis was not of novel data: The STAR*D trial is famous for being the largest antidepressant trial ever conducted, and the data are publicly available from the NIH. However, it seems possible that previous research on antidepressant side effects—from which the major health agencies of the NIH, CDC, and NHS may have derived their guidelines—has focused primarily on treatment completers and has neglected those who drop out of treatment early. The unfortunate part of this oversight is that this second group of patients is perhaps most affected by side effects. After all, side effects are the No. 1 self-reported reason for why patients prematurely discontinue antidepressant treatment.

What do our findings mean for psychiatrists and other mental health clinicians? Past research has shown that patients who prematurely drop out of antidepressant treatment often do not return for any mental health treatment and show a poorer long-term prognosis. Consequently, it is especially important for clinicians to pay attention to patient reports of severe or worsening antidepressant side effects as potential warning signs of attrition.
The next time one of your patients reports experiencing antidepressant side effects, instead of universally offering the same assurance from the NIH/CDC/NHS guidelines that their problems ought to eventually improve, it may be worth considering switching to an alternative treatment, such as a different medication or even a nonpharmacological treatment such as psychotherapy. There is another question that remains unanswered: Which side effects are more strongly linked to dropout? It is common for patients to self-report being concerned about some side effects more than others, but less is known about which effects cause patients to drop out. We are currently conducting a study to answer this question using data from the PRISE scale in the STAR*D trial and plan on developing a tool for clinicians that will flag patients for dropout risk based on their side effect profile. This tool could help inform psychiatrists in developing their treatment plans, especially for the patients at highest risk for dropout due to antidepressant side effects. We plan on validating our preliminary results in other data from clinical trials or medical records of antidepressant side effects. If you have access to such data and are interested in collaborating, please contact Colin Xu, PhD, at colinxu@uidaho.edu. Note: This article originally appeared on Psychiatric Times .
- What Do We Know About the Causes of Autism?
The latest surveillance data from the US Centers for Disease Control and Prevention (CDC) show a steep rise in the prevalence of autism spectrum disorder (ASD) , extending a years-long trend of increasing diagnoses. While greater awareness and improved diagnostic criteria have likely played a role, other potential contributing factors remain unclear and questions persist about what’s truly driving this phenomenon. These new surveillance data came on the heels of an April 10 announcement by US Department of Health and Human Services (HHS) Secretary Robert F. Kennedy Jr, who set a September deadline to determine the cause of what he called an “autism epidemic.” “By September, we will know what has caused the autism epidemic and we’ll be able to eliminate those exposures,” Kennedy said. However, many scientists who have spent their careers studying ASD are deeply skeptical that a definitive answer could be found in just a few months — if at all. What Do the Latest Data Show? The CDC regularly compiles data on ASD prevalence through the Autism and Developmental Disabilities Monitoring (ADDM) Network. The findings are considered to be among the most reliable snapshots of autism rates in children. The CDC’s most recent data from the 2022 ADDM surveillance cycle are based on 393,353 8-year-olds across 16 US sites. The CDC report shows that ASD affects 1 in 31 children (32.2 per 1000), up from 1 in 36 in 2020 and 1 in 150 in 2000. ASD continues to be more common in boys than girls (ratio 3.4:1). ASD prevalence was higher among Asian/Pacific Islander, Black, and Hispanic children than White children, continuing a pattern first observed in 2020. Children born in 2018 were more likely to be diagnosed by age 48 months compared with those born in 2014, suggesting increased early identification consistent with historical patterns. Why Is ASD Prevalence Rising? The CDC’s latest findings have prompted renewed scrutiny over why ASD prevalence continues to rise. 
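The "1 in N" and "per 1000" figures above are two notations for the same prevalence. A quick sketch of the conversion (the CDC's 32.2 per 1000 comes from the unrounded prevalence estimate, so dividing by the rounded "1 in 31" gives a slightly different value):

```python
def per_thousand(one_in_n: float) -> float:
    """Convert a '1 in N' prevalence to a rate per 1000 children."""
    return 1000.0 / one_in_n

# ADDM surveillance figures cited in the article
for year, n in [(2000, 150), (2020, 36), (2022, 31)]:
    print(f"{year}: 1 in {n} is about {per_thousand(n):.1f} per 1000")
```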
CDC investigators noted several factors that may be driving the increase, including broader diagnostic criteria, greater awareness among parents and pediatricians, and improved access to specialized services. Together, these shifts mean children who may have been overlooked in previous decades are now being identified. Kennedy has long expressed concern about environmental toxins and their potential role in ASD. At an April 16 press conference, he claimed that such toxins disrupt neurodevelopment and are behind the rising caseload. He described autism as “a preventable disease” and pledged to identify the environmental culprit by September. “We’re going to follow the science no matter what it says,” Kennedy said. “And we will have some of the answers by September.” In a statement, the International Society for Autism Research said referring to the condition as a “preventable disease” is “out of touch with contemporary, evidence-based understanding of autism.” “Based on current autism research, we know that there are many causes of autism, and virtually all of these occur prenatally,” the statement continued. “In other words, you are born with autism.” What’s Driving ASD: Genes, Environment, or Both? A robust body of evidence points to a substantial genetic component in ASD etiology. Studies of twins dating back to the 1970s have consistently shown that the vast majority of ASD is due to genetics, said Alexander Kolevzon, MD, clinical director of the Seaver Autism Center at Mount Sinai in New York City. “With advances in genetic technology and analytic methods, hundreds of specific genetic changes have now been identified and are commonly accepted to cause autism. Yet the same twin studies show that if one identical twin has ASD, the other may not about 10% of the time, leaving room for some environmental influence,” Kolevzon told Medscape Medical News. 
“Environmental effects may be acting through epigenetic mechanisms where certain factors, as of yet unidentified, influence the expression of genes. However, despite being an active area of study, no widespread environmental effects have been reliably established to date,” he added. When it comes to environmental contributors, a substantial amount of research has focused on exposures during the prenatal period — a critical window for neurodevelopment. For example, a 2019 JAMA Pediatrics population-based cohort study of 132,256 births showed that maternal exposure to nitric oxide during pregnancy was associated with increased risk for ASD in offspring. Investigators leading a 2022 study of 294,937 mother-child pairs found that exposure to fine particulate matter (PM2.5) in the first two gestational trimesters was associated with increased ASD risk in children. In addition, a 2022 study from France showed prenatal exposure to organophosphate pesticides was linked to an increase in autistic traits among 11-year-old children. Maternal metabolic conditions may also play a role. In April 2025, a meta-analysis of 202 studies including more than 56 million mother-child pairs showed that children born to mothers with gestational diabetes were 25% more likely to be diagnosed with autism. Researchers have also linked ASD risk to preterm birth and advanced parental age. It’s thought that these exposures likely act as modifiers — influencing gene expression, immune activation, or neuronal development — rather than standalone causes. Gut-Brain Link? An emerging area of autism research involves the gut microbiome and whether gut dysbiosis contributes to ASD risk. “There have been several studies showing that there is gut dysbiosis in autism, and that it correlates with autism symptoms,” Lisa Aziz-Zadeh, PhD, professor, Department of Psychology, University of Southern California, Los Angeles, told Medscape Medical News. 
“However, we know that any behavioral differences must be via gut microbiome/metabolite interactions with the human nervous system,” she said. In an April 2025 study published in Nature Communications, Aziz-Zadeh’s team was the first to identify links between gut microbial tryptophan metabolites, ASD symptoms, and brain activity in individuals with autism, particularly in brain regions associated with interoceptive processing. This points to a “mechanistic model by which gut metabolites may impact autism,” she said. “It’s possible that addressing gut imbalances (via diet, probiotics, prebiotics, fecal transplants) may be helpful. However, we still don’t know if there is a critical age where this may need to happen (prenatal, early life). There is still a lot of work to be done to answer this question,” she said. In another recent study, microbiota transfer therapy led to significant improvements in gastrointestinal (GI) symptoms, autism-related symptoms, and gut microbiota in children with ASD. The effects of the initial treatment on both gut microbiota and GI symptoms were maintained at the 2-year follow-up, with continued improvement in autism-like behaviors, the researchers reported. A Realistic Deadline? When Kennedy declared a September deadline for identifying the cause of autism, reaction was swift. Advocacy organizations, professional societies, and many research scientists expressed skepticism regarding the feasibility of such a deadline, noting that complexity argues against finding a single cause. “The odds of identifying a single factor that causes autism, whether genetic or environmental, is zero,” said Kolevzon. The 5-month timeline Kennedy set “gives people a false sense of hope” and risks politicizing science, the Autism Society of America said in a statement. 
“The Autism Society of America finds the administration’s claim that ‘we will know what has caused the Autism epidemic and we’ll be able to eliminate those exposures’ to be harmful, misleading, and unrealistic.” Aziz-Zadeh said that 60%-90% of the causes of autism are likely due to genetic factors. “However, since that number isn’t 100%, there are also contributing environmental factors — what those might be, we still don’t know — and likely there isn’t a single one,” she said. In a letter signed by more than 130 scientists, the newly formed Coalition of Autism Scientists rejected Kennedy’s “false narrative” about the incidence and causes of ASD. “We are unified in our commitment to conduct the highest quality research and build mutual respect and trust with the public. This trust is seriously threatened by the Secretary’s interpretation of the rising prevalence rates and his plans to carry out a study that will deliver findings within a few months on an environmental toxin that causes autism,” the statement said. An ASD Registry? Equally concerning to many in the autism community was a recent announcement from the National Institutes of Health (NIH) about plans to establish a “new disease registry” focused on ASD that would collect federal and private health data for upcoming autism studies. NIH Director Jay Bhattacharya, MD, PhD, made the announcement during a presentation to the Council of Councils on April 21. However, HHS walked back that plan 3 days later, following an outcry from the autism community. HHS spokeswoman Vianca N. 
Rodriguez Feliciano told Medscape Medical News that the agency is not creating an autism registry but is developing a “real-world data platform” linking existing datasets “that maintains the highest standards of security and patient privacy while supporting research into autism and other areas such as chronic diseases.” NIH is also investing $50 million to launch a comprehensive research effort aimed at understanding the causes of ASD and improving treatments by leveraging large-scale data resources and fostering cross-sector collaboration, Feliciano added. Feliciano did not respond to follow-up questions from Medscape Medical News to clarify whether the data platform would include patient identifying information or such data sources as pharmacies, private insurers, and personal wearable sensors, as noted by Bhattacharya during his presentation. Autism Speaks, an advocacy group, said that research should not focus solely on the causes of autism. “We also need to invest in studies that lead to real improvements in people’s lives — like better healthcare, education, job opportunities, and support at every stage of life for autistic people and their families,” the group said in a statement. Note: This article originally appeared on Medscape .
- Living Alone With Depression, Anxiety May Up Suicide Risk
TOPLINE: Living alone and having both depression and anxiety was associated with a 558% increase in risk for suicide compared with living with others and without these conditions, a new population-based study showed. METHODOLOGY: Researchers assessed data for more than 3.7 million adults (mean age, 47.2 years; 56% men) from the Korean National Health Insurance Service (NHIS) from 2009 through 2021 to determine the associations among living arrangements, mental health conditions (depression and anxiety), and risk for suicide. Living arrangements were categorized as either living alone (for ≥ 5 years) or living with others. Depression and anxiety were determined using NHIS claims. The primary outcome was death by suicide, identified using national death records; the mean follow-up duration was 11.1 years. Suicide cases were identified on the basis of International Statistical Classification of Diseases and Related Health Problems (10th Revision) codes. TAKEAWAY: Overall, 3% of participants had depression, 6.2% had anxiety, and 8.5% lived alone. The mortality rate was 6.3%, with suicide accounting for 0.3% of all deaths. Compared with individuals living with others and without either depression or anxiety, those living alone and with both conditions had a 558% increased risk for suicide (adjusted hazard ratio [AHR], 6.58; 95% CI, 4.86-8.92; P < .001). Living alone and having depression only was associated with a 290% increased risk for suicide (AHR, 3.91), whereas living alone with anxiety only was associated with a 90% increased risk for suicide (AHR, 1.90). The association between living alone and risk for suicide was greater among middle-aged individuals (age, 40-64 years) with depression (AHR, 6.0) or anxiety (AHR, 2.6), as well as in men (AHRs, 4.32 and 2.07, respectively). 
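The percent-increase figures in the takeaway follow directly from the hazard ratios: an AHR of R corresponds to a (R − 1) × 100% increase in hazard relative to the reference group. A small sketch of that conversion (the article's 290% figure for depression alone reflects rounding of the underlying estimate):

```python
def ahr_to_pct_increase(ahr: float) -> float:
    """Convert an adjusted hazard ratio to a percent increase in risk.

    An AHR of 6.58 means a 6.58-fold hazard, i.e. a (6.58 - 1) * 100
    = 558% increase relative to the reference group.
    """
    return (ahr - 1.0) * 100.0

# AHRs reported in the study (reference: living with others, no condition)
for label, ahr in [("alone + depression + anxiety", 6.58),
                   ("alone + depression only", 3.91),
                   ("alone + anxiety only", 1.90)]:
    print(f"{label}: AHR {ahr} -> {ahr_to_pct_increase(ahr):.0f}% increased risk")
```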
IN PRACTICE: “These findings highlight the importance of considering living arrangements in individuals with depression or anxiety, especially for specific demographic groups, such as middle-aged individuals and men, in suicide risk assessments. Targeted interventions addressing these factors together are crucial to mitigate risk,” the investigators wrote. Note: This article originally appeared on Medscape .
- Is Rural Living Better for Mental Health?
Key points In the past, social psychiatrists were interested in mental health in rural settings. Studies found that rates of mental illness in rural settings were similar to those in urban settings. Although researchers found that social problems contributed to mental illness, they failed to call for action. If you’re not a psychiatric epidemiologist or a mental health historian, it is doubtful that you’ve heard of Stirling County, Nova Scotia. And for good reason. Stirling County, Nova Scotia, doesn’t exist, at least in the sense that you’ll never find it named on a map. But it is a real place, and a place that played an enormous role in shaping our understanding of the social factors that contribute to mental health and illness. Why haven’t you heard of it? The answer is simple. Stirling County is a pseudonym. It stands in for a county in Nova Scotia that hosted one of the world’s longest and most important epidemiological studies, in this case focusing on mental health. Why the pseudonym? When the study began in 1948, mental illness was deeply stigmatized, even more than today. Unlike the Framingham Heart Study, which focuses on risk factors related to cardiovascular health and began in the same year (and is based in Framingham, Massachusetts), many people in “Stirling County” didn’t want to be associated with mental illness. And even though it is quite easy to identify the real name of the host county (I’ll leave you to figure that out for yourself), researchers were still keeping it a secret very recently. You might ask another question: Why situate a study about mental health in a rural setting? As my last post suggested, most people during the middle of the twentieth century were much more concerned with the threat cities posed to mental health. In a way, there’s your answer. Psychiatrists and social scientists interested in social psychiatry, or the social determinants of mental health, wondered whether rural settings were better for mental health. 
Stirling County helped to provide some of the answers. The Stirling County Study was the brainchild of two prototypical social psychiatrists: husband and wife team Dorothea Cross Leighton (1908-1992) and Alexander Leighton (1908-2007). The Leightons met at Johns Hopkins, where they both earned medical degrees, specializing in psychiatry. But they were both intrigued by the social sciences and took advantage of opportunities to do field work in indigenous communities in Alaska, New Mexico, and Arizona. Their time in such places, along with their experiences in World War II, convinced both of them to switch their attention to studying mental health using methods from the social sciences. Following the Second World War, which catalysed psychiatric interest in the impact of social factors on mental health, the Leightons looked for opportunities to lead their epidemiological study. They opted for Nova Scotia in part because Alexander Leighton had spent his summers there ever since he was a boy. The Leightons received one of the first grants from the newly founded National Institute of Mental Health , which, along with other funders, paid for a team of 100 researchers, including psychiatrists, social scientists, historians, and even a photographer. As with other pioneering social psychiatry studies, including those in Manhattan and New Haven, the Leightons delved deeply into the history and social structure of Stirling County. One of the three books the study published, People of Cove and Woodlot (1960), focused exclusively on Stirling County’s historical and social context. Although Stirling County was a rural setting, it was remarkably mixed in terms of geography, ethnicity, and economy. The county was primarily made up of English Canadians, who tended to be Protestant, and French Canadians (Acadians), who were Roman Catholic, along with indigenous people, Black Canadians, and a few other ethnic minorities. 
It was on the sea and many people worked in the fishery, but others worked in the lumber industry or farming. People’s economic situation also varied considerably, ranging from the comfortably well-off to those living in abject poverty. Did Stirling County’s residents have better mental health than people living in cities? The short answer, to the surprise of many, was no. Statistics revealed that the rates of mental illness in Stirling County were very similar to those found in the Midtown Manhattan Study, which Leighton also ended up running in the late 1950s. Moreover, many of the risk factors in both places were similar: poverty, inequality, social isolation, and community disintegration. Simply being in a crowded city, it seemed, wasn’t problematic in itself. Interestingly, when the Leightons reassessed rates of mental illness in one of the most deprived communities, “The Road,” a decade later, they found that residents’ mental health had improved. While Alexander Leighton argued that this was due to an adult education program and more mixing with wealthier people due to the consolidation of two schools, it is also notable that new employment opportunities had come to the area. I would imagine that the fact that these new jobs lifted people out of poverty, gave their lives new meaning, and fostered new social relationships was probably more of a factor. But Alexander Leighton, much like most social psychiatrists of the period and, indeed, just like the architects of Lyndon Johnson’s “War on Poverty,” was not minded to lift people out of poverty by giving them more material resources. Rather, they believed there was something inherently wrong with the poor that could be taught out of them (thus explaining the adult education programs). Such thinking was one of the greatest shortcomings of social psychiatry and Johnson’s Great Society policy initiative. 
By the 1970s and 1980s, psychiatry shifted away from social explanations for mental illness and onto neurological and genetic ones, along with psychopharmaceutical treatments. Although the Stirling County Study kept trundling on, most psychiatrists moved on from social psychiatry. These days, as we try to contend with ever-rising rates of mental illness , it is important to reassess such studies and to determine what their real lessons are. Note: This article originally appeared on Psychology Today .
- 2025 Is a Landmark Year for Emergency Psychiatry
Key Takeaways Emergency psychiatry is gaining recognition with the approval of a focused practice designation by the American Board of Medical Specialties. The new designation allows both psychiatrists and emergency medicine physicians to practice in emergency psychiatry settings, addressing staffing challenges. EmPATH units are expanding, providing patient-centric care and reducing psychiatric boarding times in emergency departments. SAMHSA's recognition of emergency behavioral health centers enhances access, reimbursement, and parity with physical medical care. SPECIAL REPORT: EMERGENCY PSYCHIATRY 2025 is shaping up to be one of the most consequential years ever for the burgeoning subspecialty of emergency psychiatry. Not only are new programs and hospital departments opening seemingly on a daily basis, with even more sites beginning development, but there has also been an unprecedented rise in the academic gravitas and clinical recognition of emergency psychiatry’s role and stature. Perhaps most compelling is the recent approval by the American Board of Medical Specialties (ABMS) of the proposal by the American Board of Emergency Medicine (ABEM) to recognize a focused practice designation (FPD) in emergency behavioral health.1 This new designation is an enormous step toward emergency behavioral health becoming a full-fledged boarded subspecialty, long a dream of many practitioners of emergency psychiatry. According to the ABMS, an FPD “recognizes the value that physicians who focus some or all their practice within a specific area of a specialty and/or subspecialty can provide to improving health care. 
Focused practice designation enables the ABMS Member Boards to set standards for, assess, and acknowledge additional expertise that physicians gain through clinical experience, and may include formal training.” ABEM worked in tandem with the American Board of Psychiatry and Neurology to support this new designation, which, as a result, will be available to either psychiatrists or emergency medicine physicians. This expansion of the types of physicians who will be able to practice in emergency psychiatry settings in the future is a welcome development, as one of the biggest questions in the past few years, as more and more emergency psychiatry programs have been coming online, has been: How can there be sufficient providers to staff all these new 24/7 sites? Adding emergency medicine physicians to the mix along with psychiatrists might not only address these staffing questions but also help out some practitioners with a major issue in recent years for emergency medicine: burnout. Splitting shifts between the emergency department (ED) and a psychiatric ED might be a great way for some physicians to keep themselves fresh, energetic, and optimistic. With multiple emergency psychiatry fellowships currently available to emergency medicine physicians, the opportunity for them to become trained and credentialed as emergency behavioral health providers already awaits. The timing of the new FPD could not be better, because there will be a huge demand for prescribing behavioral health providers in the coming months as dozens of new Emergency Psychiatric Assessment, Treatment, and Healing (EmPATH) units open across the US. With nearly 50 such programs already operating across the country, it is projected that more than 100 EmPATH units will be in service nationally by 2027. 
As described by Cooley et al in this Special Report, EmPATH units are soothing hospital-based alternatives to the medical ED where patients can quickly be moved for prompt, appropriate, noncoercive, and patient-centric care, rather than the common approach of boarding with long hours in the ED waiting for an inpatient admission. This EmPATH solution for psychiatric boarding also stabilizes most patients—even those with involuntary status—to the point of discharge home in hours, often less time than these individuals would commonly otherwise be boarding, untreated, in the ED. Speaking of which, everyone working at, constructing, or considering the creation of an EmPATH unit now has a place to share ideas for the first time. The inaugural National EmPATH Summit is taking place May 21 to 22, 2025, in Dallas, Texas, with a packed house of speakers, experts, clinicians, health care architects, and everyone whom one might imagine would participate in the development of an EmPATH unit. It looks like it will be quite the event. EmPATH units and other emergency behavioral health centers, in another substantial advancement that happened just this year, were officially recognized as the sites capable of working with patient populations with the highest acuity within the overall crisis continuum by the Substance Abuse and Mental Health Services Administration (SAMHSA). In January, SAMHSA published certified national behavioral health crisis care guidance and definitions, which appreciated the considerable need for multiple levels of crisis care, especially the take-all-comers behavioral emergency sites with low barriers to entry, such as EmPATH units, psychiatric emergency services in hospitals, and high-intensity behavioral health emergency centers in community settings. 
This imprimatur of the federal government is a huge milestone toward establishing parity of emergency psychiatry interventions with physical medical care, helping to improve access, reimbursement, and solvency of these necessary programs while reducing stigma and improving outcomes. With all this good news, it is a great occasion for Psychiatric Times to do this Special Report on emergency psychiatry. Note: This article originally appeared on Psychiatric Times .