
Child Psychiatrist / Adult Psychiatrist

Search Results


  • As Psychiatrists, Do We Offer Hope or Do We Offer Death?

    I remember what it was like to be a medical student at a well-known cancer hospital where patients were dying of cancer. In life's final stages, it was not uncommon for physicians to increase the dose of morphine; it alleviated pain, eased labored breathing, and yes, probably hastened the inevitable for patients who were in their final hours. In these scenarios, no one considered this euthanasia, and no one questioned whether it was the right thing to do. Fast-forward to 2023, when the act of a physician hastening a patient's death has become a controversial topic as criteria have expanded. Like all such topics in our polarized society, people align on sides, politics and religion rush to the head of the room, legislation is proposed, and words take on new meanings. If you're in favor of legalization of clinician assistance in a patient's death, the term is medical assistance in dying (MAID). If you're opposed, the term is the more graphic physician-assisted suicide. The scenario is entirely different from what I saw in my medical school rotations decades ago. It's no longer an issue of easing the pain and discomfort of patients' final hours; the question now is whether, faced with a potentially terminal or progressively debilitating physical illness, a patient has the right to determine when, and how, their life will end, and whether the medical profession is given a role in this. In many places the bar has been further lowered to incorporate nonterminal conditions, and Belgium and the Netherlands now allow physician-facilitated suicide for psychiatric conditions, a practice that many find reprehensible. In these countries, patients may be provided with medications to ingest, but psychiatrists also administer lethal injections. While Belgium and the Netherlands were the first countries to legalize physician-facilitated death, it could be argued that Canada has embraced it with the most gusto; physician-assisted suicide has been legal there since 2016. Canada already has the largest number of physician-assisted deaths of any nation, with 10,064 in 2021 — an increase of 32% from 2020. The Canadian federal government is currently considering adding serious mental illness as an eligible category. If this law passes, the country will have the most liberal assisted-death policy in the world. The Canadian government planned to make serious mental illness an eligible category in March 2023, but in an eleventh-hour announcement, it deferred its decision until March 2024. In a press release, the government said that the 1-year extension would "provide additional time to prepare for the safe and consistent assessment and provision of MAID in all cases, including where the person's sole underlying medical condition is a mental illness. It will also allow time for the Government of Canada to fully consider the final report of the Special Joint Committee on MAID, tabled in Parliament on February 15, 2023." As a psychiatrist who treats patients with treatment-refractory conditions, I have watched people undergo trial after trial of medications while having psychotherapy, and sometimes transcranial magnetic stimulation or electroconvulsive therapy (ECT). The thing that is sustaining for patients is the hope that they will get better and go on to find meaning and purpose in life, even if it is not in the form they once envisioned.
To offer the option of a death facilitated by the very person who is trying to get them better seems so counter to everything I have learned and contradicts our role as psychiatrists who work so hard to prevent suicide. Where is the line, one wonders, when the patient has not responded to two medications or 12? Must they have ECT before we consider helping them end their lives? Do we try for 6 months or 6 years? What about new research pointing to better medications or psychedelics that are not yet available? According to Canada's proposed legislation, the patient must be aware that treatment options exist, including facilitated suicide. Physician-assisted suicide for psychiatric conditions creates a conundrum for psychiatrists. As mental health professionals, we work to prevent suicide and view it as an act that is frequently fueled by depression. Those who are determined to die by their own hand often do. Depression distorts cognition and leads many patients to believe that they would be better off dead and that their loved ones would be better off without them. These cognitive distortions are part of their illness. So, how do we, as psychiatrists, move from a stance of preventing suicide — using measures such as involuntary treatment when necessary — to being the people who offer and facilitate death for our patients? I'll leave this for my Canadian colleagues to contemplate, as I live in a state where assisted suicide for any condition remains illegal. As Canada moves toward facilitating death for serious mental illness, we have to wonder whether racial or socioeconomic factors will play a role. Might those who are poor, who have less access to expensive treatment options and social support, be more likely to request facilitated death? And how do we determine whether patients with serious mental illness are competent to make such a decision or whether it is mental illness that is driving their perception of a future without hope? As psychiatrists, we often struggle to help our patients overcome the stigma associated with treatments for mental illness. Still, patients often refuse potentially helpful treatments because they worry about the consequences of getting care. These include career repercussions and the disapproval of others. When this legislation is finally passed, will our Canadian colleagues offer it as an option when their patient refuses lithium or antipsychotics, inpatient care or ECT?

  • 2025 Is a Landmark Year for Emergency Psychiatry

    Key Takeaways: Emergency psychiatry is gaining recognition with the approval of a focused practice designation by the American Board of Medical Specialties. The new designation allows both psychiatrists and emergency medicine physicians to practice in emergency psychiatry settings, addressing staffing challenges. EmPATH units are expanding, providing patient-centric care and reducing psychiatric boarding times in emergency departments. SAMHSA's recognition of emergency behavioral health centers enhances access, reimbursement, and parity with physical medical care.
    SPECIAL REPORT: EMERGENCY PSYCHIATRY
    2025 is shaping up to be one of the most consequential years ever for the burgeoning subspecialty of emergency psychiatry. Not only are new programs and hospital departments opening seemingly on a daily basis, with even more sites beginning development, but there has also been an unprecedented rise in the academic gravitas and clinical recognition of emergency psychiatry’s role and stature. Perhaps most compelling is the recent approval by the American Board of Medical Specialties (ABMS) of the proposal by the American Board of Emergency Medicine (ABEM) to recognize a focused practice designation (FPD) in emergency behavioral health.1 This new designation is an enormous step toward emergency behavioral health becoming a full-fledged boarded subspecialty, long a dream of many practitioners of emergency psychiatry. According to the ABMS, an FPD “recognizes the value that physicians who focus some or all their practice within a specific area of a specialty and/or subspecialty can provide to improving health care. Focused practice designation enables the ABMS Member Boards to set standards for, assess, and acknowledge additional expertise that physicians gain through clinical experience, and may include formal training.” ABEM worked in tandem with the American Board of Psychiatry and Neurology to support this new designation, which, as a result, will be available to either psychiatrists or emergency medicine physicians. This expansion of the types of physicians who will be able to practice in emergency psychiatry settings in the future is a welcome development, as one of the biggest questions in the past few years, as more and more emergency psychiatry programs have been coming online, has been: How can there be sufficient providers to staff all these new 24/7 sites? Adding emergency medicine physicians to the mix along with psychiatrists might not only address these staffing questions but also help some practitioners with a major issue for emergency medicine in recent years: burnout. Splitting shifts between the emergency department (ED) and a psychiatric ED might be a great way for some physicians to keep themselves fresh, energetic, and optimistic. With multiple emergency psychiatry fellowships currently available to emergency medicine physicians, the opportunity for them to become trained and credentialed as emergency behavioral health providers already awaits. The timing of the new FPD could not be better, because there will be a huge demand for prescribing behavioral health providers in the coming months as dozens of new Emergency Psychiatric Assessment, Treatment, and Healing (EmPATH) units open across the US. With nearly 50 such programs already operating across the country, it is projected that more than 100 EmPATH units will be in service nationally by 2027.
As described by Cooley et al in this Special Report, EmPATH units are soothing hospital-based alternatives to the medical ED where patients can quickly be moved for prompt, appropriate, noncoercive, and patient-centric care, rather than the common approach of boarding for long hours in the ED waiting for an inpatient admission. This EmPATH solution for psychiatric boarding also stabilizes most patients—even those with involuntary status—to the point of discharge home in hours, often in less time than these individuals would commonly otherwise spend boarding, untreated, in the ED. Speaking of which, everyone working at, constructing, or considering the creation of an EmPATH unit now has a place to share ideas for the first time. The inaugural National EmPATH Summit is taking place May 21 to 22, 2025, in Dallas, Texas, with a packed house of speakers, experts, clinicians, health care architects, and everyone who one might imagine would participate in the development of an EmPATH unit. It looks like it will be quite the event. In another substantial advancement that happened just this year, EmPATH units and other emergency behavioral health centers were officially recognized by the Substance Abuse and Mental Health Services Administration (SAMHSA) as the sites within the overall crisis continuum capable of working with the highest-acuity patient populations. In January, SAMHSA published certified national behavioral health crisis care guidance and definitions, which acknowledged the considerable need for multiple levels of crisis care, especially the take-all-comers behavioral emergency sites with low barriers to entry, such as EmPATH units, psychiatric emergency services in hospitals, and high-intensity behavioral health emergency centers in community settings. This imprimatur of the federal government is a huge milestone toward establishing parity of emergency psychiatry interventions with physical medical care, helping to improve access, reimbursement, and solvency of these necessary programs while reducing stigma and improving outcomes. With all this good news, it is a great occasion for Psychiatric Times to do this Special Report on emergency psychiatry. Note: This article originally appeared on Psychiatric Times.

  • Is Rural Living Better for Mental Health?

    Key points: In the past, social psychiatrists were interested in mental health in rural settings. Studies found that rates of mental illness in rural settings were similar to those in urban settings. Although researchers found that social problems contributed to mental illness, they failed to call for action.
    If you’re not a psychiatric epidemiologist or a mental health historian, it is doubtful that you’ve heard of Stirling County, Nova Scotia. And for good reason. Stirling County, Nova Scotia, doesn’t exist, at least in the sense that you’ll never find it named on a map. But it is a real place, and a place that played an enormous role in shaping our understanding of the social factors that contribute to mental health and illness. Why haven’t you heard of it? The answer is simple. Stirling County is a pseudonym. It stands in for a county in Nova Scotia that hosted one of the world’s longest and most important epidemiological studies, in this case focusing on mental health. Why the pseudonym? When the study began in 1948, mental illness was deeply stigmatized, even more than today. Unlike participants in the Framingham Heart Study, which focuses on risk factors related to cardiovascular health and began in the same year (and is based in Framingham, Massachusetts), many people in “Stirling County” didn’t want to be associated with mental illness. And even though it is quite easy to identify the real name of the host county (I’ll leave you to figure that out for yourself), researchers were still keeping it a secret until very recently. You might ask another question: Why situate a study about mental health in a rural setting? As my last post suggested, most people during the middle of the twentieth century were much more concerned with the threat cities posed to mental health. In a way, there’s your answer. Psychiatrists and social scientists interested in social psychiatry, or the social determinants of mental health, wondered whether rural settings were better for mental health. Stirling County helped to provide some of the answers. The Stirling County Study was the brainchild of two prototypical social psychiatrists: husband and wife team Dorothea Cross Leighton (1908-1992) and Alexander Leighton (1908-2007). The Leightons met at Johns Hopkins, where they both earned medical degrees, specializing in psychiatry. But they were both intrigued by the social sciences and took advantage of opportunities to do field work in indigenous communities in Alaska, New Mexico, and Arizona. Their time in such places, along with their experiences in World War II, convinced both of them to switch their attention to studying mental health using methods from the social sciences. Following the Second World War, which catalysed psychiatric interest in the impact of social factors on mental health, the Leightons looked for opportunities to lead their own epidemiological study. They opted for Nova Scotia in part because Alexander Leighton had spent his summers there ever since he was a boy. The Leightons received one of the first grants from the newly founded National Institute of Mental Health, which, along with other funders, paid for a team of 100 researchers, including psychiatrists, social scientists, historians, and even a photographer. As with other pioneering social psychiatry studies, including those in Manhattan and New Haven, the Leightons delved deeply into the history and social structure of Stirling County.
One of the three books the study published, People of Cove and Woodlot (1960), focused exclusively on Stirling County’s historical and social context. Although Stirling County was a rural setting, it was remarkably mixed in terms of geography, ethnicity, and economy. The county was primarily made up of English Canadians, who tended to be Protestant, and French Canadians (Acadians), who were Roman Catholic, along with indigenous people, Black Canadians, and a few other ethnic minorities. It was on the sea and many people worked in the fishery, but others worked in the lumber industry or farming. People’s economic situation also varied considerably, ranging from the comfortably well-off to those living in abject poverty. Did Stirling County’s residents have better mental health than people living in cities? The short answer, to the surprise of many, was no. Statistics revealed that the rates of mental illness in Stirling County were very similar to those found in the Midtown Manhattan Study, which Leighton also ended up running in the late 1950s. Moreover, many of the risk factors in both places were similar: poverty, inequality, social isolation, and community disintegration. Simply being in a crowded city, it seemed, wasn’t problematic in itself. Interestingly, when the Leightons reassessed rates of mental illness in one of the most deprived communities, “The Road,” a decade later, they found that residents’ mental health had improved. While Alexander Leighton argued that this was due to an adult education program and more mixing with wealthier people due to the consolidation of two schools, it is also notable that new employment opportunities had come to the area. I would imagine that the fact that these new jobs lifted people out of poverty, gave their lives new meaning, and fomented new social relationships was probably more of a factor. But Alexander Leighton, much like most social psychiatrists of the period and, indeed, just like the architects of Lyndon Johnson’s “War on Poverty,” was not minded to lift people out of poverty by giving them more material resources. Rather, they believed there was something inherently wrong with the poor that could be taught out of them (thus explaining the adult education programs). Such thinking was one of the greatest shortcomings of social psychiatry and Johnson’s Great Society policy initiative. By the 1970s and 1980s, psychiatry shifted away from social explanations for mental illness and onto neurological and genetic ones, along with psychopharmaceutical treatments. Although the Stirling County Study kept trundling on, most psychiatrists moved on from social psychiatry. These days, as we try to contend with ever-rising rates of mental illness, it is important to reassess such studies and to determine what their real lessons are. Note: This article originally appeared on Psychology Today.

  • Living Alone With Depression, Anxiety May Up Suicide Risk

    TOPLINE: Living alone and having both depression and anxiety was associated with a 558% increase in risk for suicide compared with living with others and without these conditions, a new population-based study showed.
    METHODOLOGY: Researchers assessed data for more than 3.7 million adults (mean age, 47.2 years; 56% men) from the Korean National Health Insurance Service (NHIS) from 2009 through 2021 to determine the associations among living arrangements, mental health conditions (depression and anxiety), and risk for suicide. Living arrangements were categorized as either living alone (for ≥ 5 years) or living with others. Depression and anxiety were determined using NHIS claims. The primary outcome was death by suicide, identified using national death records; the mean follow-up duration was 11.1 years. Suicide cases were identified on the basis of International Statistical Classification of Diseases and Related Health Problems (10th Revision) codes.
    TAKEAWAY: Overall, 3% of participants had depression, 6.2% had anxiety, and 8.5% lived alone. The mortality rate was 6.3%, with suicide accounting for 0.3% of all deaths. Compared with individuals living with others and without either depression or anxiety, those living alone and with both conditions had a 558% increased risk for suicide (adjusted hazard ratio [AHR], 6.58; 95% CI, 4.86-8.92; P < .001). Living alone and having depression only was associated with a 290% increased risk for suicide (AHR, 3.91), whereas living alone with anxiety only was associated with a 90% increased risk for suicide (AHR, 1.90). The association between living alone and risk for suicide was greater among middle-aged individuals (age, 40-64 years) with depression (AHR, 6.0) or anxiety (AHR, 2.6), as well as in men (AHRs, 4.32 and 2.07, respectively).
    IN PRACTICE: “These findings highlight the importance of considering living arrangements in individuals with depression or anxiety, especially for specific demographic groups, such as middle-aged individuals and men, in suicide risk assessments. Targeted interventions addressing these factors together are crucial to mitigate risk,” the investigators wrote. Note: This article originally appeared on Medscape.
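    A quick arithmetic note on the figures above: each "% increased risk" is a direct restatement of the adjusted hazard ratio, since a hazard ratio of 6.58 corresponds to a (6.58 - 1) x 100 = 558% increase relative to the comparator group. A minimal sketch of that conversion (the helper function and printout are illustrative only, not part of the study's analysis):

```python
def hazard_ratio_to_percent_increase(hr: float) -> float:
    """Convert a hazard ratio into the equivalent '% increased risk' figure."""
    return (hr - 1.0) * 100.0

# Adjusted hazard ratios quoted in the summary above
reported_ahrs = {
    "living alone + depression + anxiety": 6.58,  # reported as a 558% increase
    "living alone + depression only": 3.91,       # reported as a ~290% increase
    "living alone + anxiety only": 1.90,          # reported as a 90% increase
}

for group, ahr in reported_ahrs.items():
    pct = hazard_ratio_to_percent_increase(ahr)
    print(f"{group}: AHR {ahr} -> about {pct:.0f}% increased risk")
```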

  • Improving Sleep to Treat Resistant Depression in Older Adults

    Key Takeaways: Improved or sufficient sleep enhances antidepressant response in TRLLD, while persistent insufficient sleep predicts treatment nonresponse. The OPTIMUM trial found no difference in sleep improvement between treatment arms, emphasizing the importance of addressing sleep disturbances. Behavioral interventions like sleep hygiene and CBT-I are recommended for managing sleep issues in older adults with TRLLD. Sedative hypnotics pose risks, and alternative strategies, including low-dose mirtazapine or doxepin, are suggested for sleep management.
    Patients with treatment-resistant late-life depression (TRLLD) were found to be 3 times more likely to respond to augmenting or switching antidepressant treatments when sleep also improved or sufficient sleep was maintained, in a post-hoc analysis of a trial comparing interventions. “Sleep-related symptoms that are present during treatment for TRLLD may be modifiable factors that play a role in achieving and maintaining depression response,” observed Michael Mak, MD, Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, and colleagues. The investigators revisited data from the Optimizing Outcomes of Treatment-Resistant Depression in Older Adults (OPTIMUM) trial, which compared pharmacotherapeutic strategies for TRLLD, to ascertain whether treatment outcomes differed among participants with persistent insufficient sleep, worsened sleep, or improved sleep. Mak and colleagues hypothesized that the analysis would show that (1) most participants with TRLLD exhibit reduced sleep, (2) sleep would improve with each of the pharmacotherapeutic strategies, and (3) improved sleep would be associated with improvement in depression symptoms, while depression would remain treatment resistant if insufficient sleep persisted or worsened. The OPTIMUM trial treatment arms either augmented the current antidepressant with aripiprazole or bupropion, or switched to bupropion; if symptoms did not remit, investigators augmented with lithium or switched to nortriptyline. Depression symptom severity was measured with the Montgomery-Asberg Depression Rating Scale (MADRS), with remission defined as a score less than 10. The MADRS sleep item-4 measured insufficient sleep, comparing duration or depth of sleep during treatment for depression with the pattern when well. Adequacy of sleep is rated on a scale of 0 to 6, with higher scores indicating greater sleep disturbance. In the analysis, a score of greater than 2 at both week 0 and week 10 was classified as persistent insufficient sleep (n=164). Scores that increased over the course of treatment and were greater than 2 at week 10 corresponded to worsening sleep; a decreased score that was less than or equal to 2 at 10 weeks corresponded to improved sleep. Those with scores of less than or equal to 2 at each visit were categorized as having persistent sufficient sleep and served as the comparator group. Insufficient sleep was reported by 51% of participants (n=323) at the start of the trial. They tended to be younger, had fewer years of education, and had higher severity of depression than those with sufficient sleep. At the end of the initial 10-week switch or augmentation treatment, the number of participants reporting insufficient sleep had fallen to 36%, with no associated difference between treatment arms. Mak et al determined that those with persistent insufficient sleep (25%, n=158) and worsened sleep (10%, n=62) were most likely to remain unresponsive to antidepressant treatment.
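    As a reading aid, here is a minimal sketch of the sleep-trajectory grouping rule described above, reduced to the two anchoring assessments (week 0 and week 10). The function name is ours, and the published analysis also used the intermediate visits when defining persistently sufficient sleep, so this is a simplification rather than the authors' procedure:

```python
def classify_sleep_trajectory(week0_item4: int, week10_item4: int) -> str:
    """Simplified two-timepoint reading of the MADRS item-4 grouping rule
    (scores 0-6; higher = greater sleep disturbance; > 2 = insufficient sleep)."""
    insufficient_start = week0_item4 > 2
    insufficient_end = week10_item4 > 2

    if insufficient_start and insufficient_end:
        return "persistent insufficient sleep"
    if insufficient_start and not insufficient_end:
        return "improved sleep"                 # fell to <= 2 by week 10
    if not insufficient_start and insufficient_end:
        return "worsened sleep"                 # rose above 2 during treatment
    return "persistent sufficient sleep"        # <= 2 at both assessments

print(classify_sleep_trajectory(4, 1))  # improved sleep
print(classify_sleep_trajectory(1, 4))  # worsened sleep
```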
Those who maintained sufficient sleep (26%, n=164) or had improved sleep (25%, n=158) were 3 times more likely to experience improvement in depression, regardless of the switch or augmentation strategy. Independent predictors of treatment nonresponse included persistent insufficient sleep and worsened sleep. The investigators found that approximately one-third of the participants were using sedative hypnotics for sleep or anxiety. They suggest that the risks associated with these medications are likely to outweigh their benefit, even when used for their approved indications. "As such, the treatment plan should include education about the risks of benzodiazepine use in older adults and healthy sleep behaviors, pharmacologic treatment of insomnia or reduced sleep when appropriate, or referrals to behavioral interventions for sleep," Mak et al urge.
    Accounting for Sleep in Treating Resistant Depression
    Having associated sufficient and improved sleep with antidepressant response in TRLLD, the investigators considered the long-established bidirectional relationship of sleep disturbance and depression, and implications for treatment. "In most patients, treating the depression with an evidence-based antidepressant is enough to treat all symptoms including the insomnia," study coauthor Benoit Mulsant, MD, Department of Psychiatry, University of Toronto, explained to Psychiatric Times. The association of sufficient sleep with improvement in depression in each of the treatment arms, regardless of using an "alerting" or less sedating antidepressant like bupropion, is notable, Mulsant observed. "Trying to match patient's symptoms (eg insomnia) with adverse effects of a specific antidepressant (eg sedation) does not work," Mulsant commented, citing previous trials. "Patients with depression and insomnia do not do better when randomized to a sedating tricyclic antidepressant than to an activating one like bupropion." However, Mulsant acknowledged that sleep disturbance that precedes and persists after onset of depression can require separate attention. "In a subgroup of patients with depression, insomnia predates depression and does not resolve with resolution of other symptoms, so it is important to assess and treat sleep symptoms with sleep hygiene or CBT-I when needed," he suggested. Lead author Mak elaborated on the approach for these patients. "If a patient with TRLLD still complains of sleep disturbance/insomnia disorder post-antidepressant treatment, they should trial sleep hygiene therapy and CBT for insomnia if available. The residual insomnia is a substantial risk factor for recurrent depression," Mak warned. In their post-hoc analysis of the OPTIMUM trial, the investigators found that loss of a spouse and lower levels of education were risk factors for having sleep disturbance. For these and others at risk for or experiencing persistent disrupted sleep, the investigators supported particular attention to sleep in managing their depression. "These patients would still be good candidates for sleep hygiene or CBT-I, if needed," Mulsant indicated. "A first-line antidepressant can be augmented with a medication specifically targeting sleep. A good one for older patients would be mirtazapine at low dosage, like 7.5 or 15 mg, because at higher dosage it becomes a sedating antidepressant that many patients do not tolerate." Mak agreed with the recommendation, adding that patients with mild insomnia may benefit from adding low dose doxepin, 3 to 6 mg, at bedtime to buttress sleep maintenance.
"Safety outcomes for low dose doxepin in older adults is reassuring," he commented. Mulsant cautioned that additional evaluation may be warranted in some patients presenting with TRLLD accompanied by poor sleep, fatigue, and cognitive impairment for an undiagnosed sleep apnea or a rarer sleep disorder. Mak concurred. "If an elderly treatment resistant depression patient has co-morbid snoring and/or BMI 35 or above, they should be referred for polysomnograpy—given intermediate to high pre-test probability for obstructive sleep apnea. Male sex, sleepiness or fatigue, presence of hypertension, witnessed apnea, and thick neck makes the risk even worse. Continuous positive airway pressure treatment may improve their mood in the context of obstructive sleep apnea.” Note: This article originally appeared on Psychiatric Times.

  • Frequent Nightmares Linked to Faster Aging, Premature Death

    SAN DIEGO — Frequent distressing dreams are linked to faster biologic aging and an increased risk for premature death, independent of traditional risk factors, new research suggested. Distressing dreams include bad dreams without awakening and/or nightmares with awakening. An analysis of data from four large studies in the United States and the United Kingdom found that experiencing distressing dreams at least once a week was significantly associated with aging at both the cellular level and throughout the body, as well as a threefold increased risk for death before age 70. “It’s difficult to prove causation in observational studies, though you can definitely make an association,” lead investigator Abidemi Otaiku, MD, clinical research fellow at Imperial College London, London, England, told Medscape Medical News. He added that if the relationship turns out to be causal, one possible mechanism is that nightmares act as a stressor, negatively affecting the body. Over time, the cumulative effect of frequent bad dreams could lead to the release of cortisol, a stress hormone that may accelerate aging at the cellular level. In addition, disrupted sleep itself is linked to a range of negative outcomes, including a detrimental effect on mental health. “The takeaway message is that people who have more nightmares are aging faster and dying sooner. Nightmares are more important than people realize, and clinicians should ask about them,” said Otaiku, who is also affiliated with the UK Dementia Research Institute. The findings were presented on April 6 at the American Academy of Neurology (AAN) 2025 Annual Meeting.
    Link to Psychiatric, Neurologic Disorders
    Otaiku noted that 4%-10% of individuals experience distressing dreams weekly, 20%-30% experience them every month, and 85% experience them at least once a year. A 2010 study showed that about 45% of the variation in frequency can be explained by genetic factors. These types of dreams have been linked to a variety of mental health concerns, including increased suicide risk. In addition, Otaiku has published recent research that showed a link between distressing dreams and increased risk for Parkinson’s disease (PD) and between these dreams and cognitive decline and risk for dementia. A year later, he reported a significant link between distressing dreams in childhood and an increased risk for cognitive impairment or PD in adulthood. Other studies have linked distressing dreams to conditions such as cardiovascular disease, hypertension, and diabetes. The current study “was created to test the hypothesis that nightmares may contribute to age-related diseases by the cellular aging process. Premature mortality is the ultimate outcome of accelerated aging,” Otaiku said.
    Accelerated Cellular and Whole-Body Aging
    The current analysis included data from the Midlife in the United States (MIDUS) study (n = 1660 US adults; 54% men), the Osteoporotic Fractures in Men Study (n = 1427 US adults; 100% men), the Wisconsin Sleep Cohort Study (n = 1109 US adults; 54% men), and the UK Biobank (n = 126,866 UK adults; 40% men). All participants completed baseline questionnaires, which included reporting how often they had trouble sleeping due to bad dreams in the past month. Based on these responses, participants were categorized into three groups — less than monthly, monthly, or weekly. UK Biobank data provided blood test data on telomere length as an indicator of cellular aging.
In the MIDUS study, blood samples were analyzed to derive three epigenetic markers of whole-body aging: DunedinPACE, GrimAge2, and PhenoAge. Mortality data were available for all studies. Participants in the US cohorts were followed for over 19 years, while those in the UK Biobank cohort were followed for more than 2 years. The study’s outcome measures included the rate of cellular aging at baseline, assessed via telomere length; the rate of organismal aging at baseline, based on a composite of three epigenetic markers; and evidence of premature aging in both the pooled US cohort and the UK Biobank cohort. Results showed a significant association between the frequency of distressing dreams and accelerated cellular aging. The telomere length difference Z-score was 0.09 for those with monthly occurrences compared with less than monthly and reached statistical significance at 0.046 for weekly occurrences (P for linear trend < .001). “In other words, the more frequent the nightmares, the shorter the telomeres,” Otaiku said. Distressing dream frequency was also linked to accelerated organismal aging — referring to the overall aging of the body’s systems. The epigenetic aging index difference Z-score was 0.02 for monthly occurrences compared with less than monthly, and a significant 0.52 for weekly occurrences (P for linear trend < .001).
    More Harmful Than Smoking, Obesity, Hypertension?
    In the pooled US cohort, 126 premature deaths occurred before age 70 over the 19-year follow-up period. The hazard ratio (HR) for premature mortality was 1.27 among those who experienced monthly distressing dreams, rising to 3.03 for those with weekly occurrences (P for linear trend < .0001). “Those with weekly nightmares had threefold higher risk for premature death,” showing again that the higher the frequency, the greater the risk of an adverse outcome, said Otaiku. In the UK Biobank cohort, 51 premature deaths were recorded over the 2-year follow-up period. The HRs for premature mortality were 1.43 for monthly distressing dreams and 2.65 for weekly occurrences compared with less than monthly (P for linear trend = .004). In addition, 34.2% of the association between distressing dreams and mortality was accounted for by aging, Otaiku noted. After adjusting for genetic, environmental, and lifestyle factors, distressing dreams remained significantly associated with cellular and organismal aging and premature mortality risk (P for linear trend for all, <.05). “Strikingly, the effect size of frequent nightmares was greater than that of current smoking, obesity, and hypertension combined,” Otaiku said. “The associations were independent of and stronger than traditional risk factors for aging and mortality — and are unlikely to be explained by reverse causality,” he added. Otaiku noted that it could be the accelerated aging that explains the link between distressing dreams and later development of neurodegenerative diseases. He added that future studies are now needed to determine whether treatment of these dreams could reduce the risk for mortality.
    ‘Awesome, Interesting’ Research
    During the post-session Q&A, an audience member noted that patients often report more frequent or intense nightmares after starting certain medications and asked whether Otaiku had examined the impact of specific drug classes. Otaiku responded that his study included access to medication data, and he controlled for the use of antidepressants, antipsychotics, and antihypertensives such as beta-blockers.
“Even when I controlled for these in my analyses, nightmares were independently linked to these outcomes. Individuals with nightmares do report more psychotropic medication use, but the link between nightmares and these outcomes is independent of their use,” Otaiku said. Session moderator Anne M. Morse, DO, Geisinger Medical Center, Danville, Pennsylvania, described the study as “awesome and so interesting.” “It definitely made me start to think about whether or not there could be any orexigenic link [involved] just because we see nightmare disorders so frequently in narcolepsy and conditions like that,” Morse said. Orexin is a neuropeptide thought to contribute to the regulation and maintenance of sleep/wakefulness states. Note: This article originally appeared on Medscape.

  • No, Your Patients Are Not Wrong: Sometimes Antidepressant Side Effects Do Not Get Better

    Key Takeaways: Antidepressant side effects often improve over time, but some patients experience worsening, leading to dropout. STAR*D trial data showed patients who dropped out reported more severe and worsening side effects than completers. Clinicians should consider alternative treatments for patients with severe side effects to prevent dropout. Research is ongoing to identify specific side effects linked to dropout and develop a tool to assess dropout risk.
    CLINICAL REFLECTIONS
    When a patient starting antidepressants for major depressive disorder voices their concerns about potential side effects, it is common for clinicians to offer patients the same reassurance that many major health agencies have advised: Stick with the medications, and your side effects should improve with time. For example, the National Institutes of Health (NIH) public-facing webpage on mental health medications reads: “The side effects [of antidepressants] are generally mild and tend to go away with time.” Likewise, the Centers for Disease Control and Prevention (CDC) publicly states, “Side effects usually do not get in the way of daily life, and they often go away as your body adjusts to the medication.” This perspective that antidepressant side effects will eventually go away is not exclusive to the United States: The United Kingdom’s National Health Service (NHS) states on their public-facing antidepressants overview page that “the most common side effects of antidepressants are usually mild. Side effects should improve within a few days or weeks of treatment as the body gets used to the medicine.” Nevertheless, many psychiatrists and other mental health clinicians have encountered patients who report the opposite experience. Although many patients experience an improvement in side effects with time, not everyone’s side effects improve. In fact, it is not uncommon to encounter patients who report worsening side effects to the point where some decide to quit treatment. Indeed, the No. 1 self-reported reason why patients prematurely discontinue antidepressant pharmacotherapy is side effects. One question then arises: Why does such a dichotomy exist between the clinical consensus (as publicly stated by the NIH, CDC, and NHS) that side effects improve with time and the anecdotal experiences of patients who report that their side effects do not go away or, in some cases, even worsen? We noticed that past research examining antidepressant side effects often failed to account for 1 important confounder: dropout. That is, many studies on antidepressant side effects focused on individuals who completed treatment while neglecting perhaps the most interesting group of patients: those who may have dropped out of antidepressant treatment prematurely due to side effects. We conducted a secondary analysis of side effects data from patients in the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial, the largest antidepressant trial ever conducted. In the first treatment step of the STAR*D trial, all patients received citalopram for an intended 12 weeks per protocol. During these 12 weeks, patients reported their side effect frequency, intensity, and burden on the Frequency, Intensity, and Burden of Side Effects Rating (FIBSER) scale at weeks 2, 4, 6, 9, and 12. Additionally, patients reported which side effects they experienced in 9 organ/function systems on the Patient-Rated Inventory of Side Effects (PRISE) scale.
We wanted to examine how side effect frequency, intensity, and burden on the FIBSER scale changed over the course of citalopram treatment. What we were most interested in was how the side effect complaints of patients who dropped out early in treatment differed from those of patients who completed treatment. To answer this question, we used pattern-mixture modeling to model side effect complaints at each time point (weeks 2, 4, 6, 9, and 12) for each potential treatment attrition pattern (dropout at week 2, 4, 6, or 9 or full 12-week treatment completion) while controlling for changes in depressive severity over the course of treatment. What we found does not disagree with the NIH/CDC/NHS consensus that side effects improve over time: Indeed, when examining only data from those who completed the full 12-week treatment, these patients reported decreases in side effect frequency, intensity, and burden over the course of treatment. Yet our findings also validated the experience of those patients who report that their side effects never improve. Specifically, when examining data from patients who dropped out of the trial early, a different pattern of side effects emerged: Patients who dropped out after weeks 2, 4, and 6 reported significantly more severe initial side effect complaints than those who completed treatment. And perhaps even more importantly, patients who dropped out after weeks 4 and 6 further showed a worsening of side effects over the course of treatment. Taken together, what we see in the STAR*D data are several distinct patterns in how patients experience antidepressant side effects. On the one hand, there are many patients—namely, those who complete treatment—who are able to tolerate the side effects of antidepressants. These patients not only report lower severity of side effects after antidepressant initiation but also an improvement of side effects over time. It is likely that these are the patients whom the NIH/CDC/NHS consensus guidelines refer to when they offer the reassurance that side effects will decrease over time. On the other hand, there is a nonnegligible population of patients with a much lower tolerance for side effects. These patients not only report more severe side effects immediately after antidepressant initiation, but many also report experiencing a worsening of side effects over the course of treatment—up to and until the point they drop out. These are the patients whom we and our colleagues see in clinical practice every day: those who attempt to persist with their prescribed antidepressant but ultimately drop out due to the intolerability of their side effect symptoms. What is especially surprising is that this second group of patients—those with intolerance for side effects and for whom side effects do not improve—has gone previously unnoticed in the research literature. Our analysis was not of novel data: The STAR*D trial is famous for being the largest antidepressant trial ever conducted, and the data are publicly available from the NIH. However, it seems possible that previous research on antidepressant side effects—from which the major health agencies of the NIH, CDC, and NHS may have derived their guidelines—has focused primarily on treatment completion and has neglected those who drop out of treatment early. The unfortunate part of this oversight is that this second group of patients is perhaps most affected by side effects. After all, side effects are the No. 1 self-reported reason why patients prematurely discontinue antidepressant treatment.
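    For readers curious about the shape of the analysis described above, the following is a simplified, illustrative sketch of a pattern-mixture-style model: side-effect burden over time is allowed to differ by attrition pattern while adjusting for depressive severity. The column names, the synthetic data, and the statsmodels formulation are assumptions for illustration, not the authors' actual code or the STAR*D data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Toy long-format data: one FIBSER burden score per patient per assessment
# week, the patient's attrition pattern, and a depression-severity covariate.
weeks = [2, 4, 6, 9, 12]
last_week_by_pattern = {"dropout_wk4": 4, "dropout_wk6": 6, "completer": 12}
rows = []
for pid in range(300):
    pattern = rng.choice(list(last_week_by_pattern), p=[0.2, 0.2, 0.6])
    for wk in weeks:
        if wk > last_week_by_pattern[pattern]:
            break
        rows.append({
            "patient_id": pid,
            "week": wk,
            "pattern": pattern,
            # simulated: dropouts get somewhat higher burden than completers
            "fibser_burden": rng.normal(3.0 if pattern != "completer" else 2.0, 1.0),
            "depression_severity": rng.normal(12.0, 3.0),
        })
df = pd.DataFrame(rows)

# Pattern-mixture-style mixed model: burden trajectories differ by attrition
# pattern (pattern-by-week interaction), adjusted for depressive severity,
# with a random intercept per patient.
model = smf.mixedlm("fibser_burden ~ C(pattern) * week + depression_severity",
                    data=df, groups="patient_id")
print(model.fit().summary())
```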
What do our findings mean for psychiatrists and other mental health clinicians? Past research has shown that patients who prematurely drop out of antidepressant treatment often do not return for any mental health treatment and show a poorer long-term prognosis. Consequently, it is especially important for clinicians to pay attention to patient reports of severe or worsening antidepressant side effects as potential warning signs of attrition. The next time one of your patients reports experiencing antidepressant side effects, instead of universally offering the same assurance from the NIH/CDC/NHS guidelines that their problems ought to eventually improve, it may be worth considering a switch to an alternative treatment, such as a different medication or a nonpharmacological option like psychotherapy. There is another question that remains unanswered: Which side effects are more strongly linked to dropout? It is common for patients to self-report being concerned about some side effects more than others, but less is known about which effects cause patients to drop out. We are currently conducting a study to answer this question using data from the PRISE scale in the STAR*D trial and plan on developing a tool for clinicians that will flag patients for dropout risk based on their side effect profile. This tool could help inform psychiatrists in developing their treatment plans, especially for the patients at highest risk for dropout due to antidepressant side effects. We plan on validating our preliminary results in other data from clinical trials or medical records of antidepressant side effects. If you have access to such data and are interested in collaborating, please contact Colin Xu, PhD, at colinxu@uidaho.edu. Note: This article originally appeared on Psychiatric Times.

  • The Female Orgasm Should Be Considered the Twelfth Body System

    When I spoke at the International Society of Cosmetic Gynecology 2025 World Congress on March 21, I did not mention "cosmetics". Despite the organization’s focus on aesthetics, the unique skills of the members make them particularly suited to correct difficult problems regarding function — both urinary and sexual. These colleagues are adept at correcting conditions ranging anywhere from secondary anorgasmia to clitoral hood phimosis. At least half of their lectures focused on improving function, not aesthetics. I had been invited to speak about the use of formal dynamic systems theory and analysis to improve surgical outcomes. Systems analysis, a framework widely used to improve function in medicine, engineering, and business, can help us understand the complex — a word which most would agree could be used to describe the female orgasm.
    Understanding Systems in Medicine
    A system consists of interdependent components working together to produce an effect greater than any one part can produce. Systems medicine, an interdisciplinary approach, seeks to understand and manage complex biological interactions to improve health outcomes. By this definition, all 11 recognized body systems (integumentary, skeletal, muscular, nervous, endocrine, cardiovascular, lymphatic, respiratory, digestive, urinary, and reproductive) function as dynamic networks. Disruptions in one component will limit the function of the entire system. Of the 11, the female orgasm shares components with the reproductive system — but they are not the same. A woman may conceive with anorgasmia, and a woman can also have a strong libido and enjoy multiple orgasms without conceiving a child. One may argue that the reproductive system provides offspring, but without the orgasm system there would be significantly fewer offspring. Yet, conceiving and sexual pleasure are not equal. One may also argue that if we need systems analysis to understand how to breathe and have a bowel movement, we should use systems analysis to understand what brings joy and connection and creativity — orgasm. A 2023 study of medical education reported that out of seven medical schools in the Chicago area, only one taught the complete anatomy of the clitoris and how to evaluate female sexual dysfunction. Only one. As medical education starts to catch up with current research and women's legitimate demands for expert attention to their sexual concerns (by at least teaching physicians about comprehensive female anatomy), it may be time to acknowledge that, despite its absence thus far from traditional medical education, the female orgasm is complicated enough to warrant systems analysis, and such analysis first demands an attempt to define the system. If a "female orgasm system" exists, it should meet the same four criteria that define other systems: 1) identifiable components, 2) interdependent interactions, 3) emergent effects beyond any single component, and 4) stability across varied conditions. My efforts over the past 5 years to define the orgasm system and to encourage doctors and therapists to use systems analysis to treat female sexual dysfunction have not been an effort to invent anything; rather, I hope only to point out that such a system exists and to offer a starting point for the work of others.
    Components of the Female Orgasm System
    To systematically describe female orgasm, we must first define its essential components.
    Primary Components
    Brain: The ultimate control center, integrating sensory, hormonal, and psychological inputs
    Breasts: Responsive to tactile stimulation, contributes to arousal, and affects pituitary function
    Clitoris: A sensory-dense structure that is integral to orgasm and communicates with the brain through both somatic and autonomic nerves
    Labia: Provides protective and sensory functions
    Genitourinary Complex (GU Complex): Encompasses the vagina, urethra, and pelvic floor with both autonomic and somatic feedback to the arousal centers of the brain
    Endocrine System: Regulates hormonal influences on arousal and sexual response
    Spinal Cord and Blood Flow: Essential for neurological transmission, local engorgement, and relay of oxygen and hormones
    Psychosocial Factors: Emotional, cognitive, and relational influences that modulate the function of the entire body
    Secondary Components
    Each primary structure comprises substructures with specific roles. For instance, the clitoris includes the glans, corpus cavernosum, and spongiosum. The GU complex involves vaginal elasticity and lubrication, while psychosocial factors extend to behavioral and linguistic influences.
    Feedback Loops in the Orgasm System: A Path to Innovation
    Dynamic systems operate through reinforcing and balancing feedback loops. In the context of female orgasm:
    Reinforcing Loops: Positive stimulation (physical or psychological) enhances arousal, further increasing blood flow and sensory feedback, culminating in orgasm.
    Balancing Loops: Psychological distress, endocrine dysfunction, or neurovascular impairment can counteract this reinforcement, inhibiting orgasmic function.
    Using systems analysis to consider how the feedback loops of the autonomic nervous system play a crucial role in female orgasm (integrating somatic input from the dorsal clitoral nerve with the autonomic pathways via the cavernous nerves, ganglion innervating the vaginal wall, inferior hypogastric nerve, and the vagus nerve) triggered the idea for the Clitoxin procedure to modulate autonomic input and enhance arousal.
    Clinical Implications for Treatment: A Systems-Based Diagnostic Framework
    Understanding an orgasm as a system provides a structured approach to evaluating sexual dysfunction. Consider a patient presenting with dyspareunia and anorgasmia following surgical intervention. First, surgical success does not guarantee orgasmic function. Although anatomical restoration is critical, persistent anorgasmia may stem from endocrine imbalances (eg, hyperprolactinemia, hypothyroidism), vascular limitations, or psychosocial stressors. Rather than relying solely on procedural interventions or sex therapy/counseling, comprehensive assessment and personalized, targeted, systemic corrections can optimize outcomes. Providers also should enhance patient communication and education. A visual model of the orgasm system can aid in counseling patients, emphasizing the multifactorial nature of the sexual response and reducing unrealistic expectations from isolated interventions.
    Let's Start Recognizing the Female Orgasm as a System
    When analyzed through the lens of systems medicine, the female orgasm provides a useful framework for refining surgical, medical, and psychosocial therapeutic strategies and for innovating new ideas. Recognizing orgasm as an emergent property of interconnected biological, neurological, and psychosocial factors fosters a more effective and sophisticated patient-centered approach and facilitates communication across specialties.
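    To make the reinforcing-versus-balancing framing above concrete, here is a deliberately generic toy simulation of the two loop types. The parameters and update rule are arbitrary illustrations of the general systems-theory idea, not a physiological model drawn from the article:

```python
def simulate_arousal(steps=20, stimulation=0.3, reinforcement=0.25, inhibition=0.0):
    """Toy difference equation: the level climbs when the reinforcing loop
    dominates and plateaus lower when a balancing loop counteracts it."""
    level = 0.1
    trajectory = []
    for _ in range(steps):
        reinforcing = reinforcement * level   # positive feedback term
        balancing = inhibition * level        # negative feedback term
        level += stimulation * 0.1 + reinforcing - balancing
        level = min(1.0, max(0.0, level))     # keep within a bounded range
        trajectory.append(round(level, 2))
    return trajectory

print("reinforcing loop dominant:", simulate_arousal(inhibition=0.0))
print("with a balancing loop:    ", simulate_arousal(inhibition=0.45))
```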
Future research should continue refining this model to improve clinical applications and optimize sexual health outcomes. It is time for the twelfth body system — along with clitoral anatomy — to become part of our medical education. Note: This article originally appeared on Medscape.

  • ADHD Plus Comorbidities May Up Risk for Criminal Behavior

    TOPLINE: Attention-deficit/hyperactivity disorder (ADHD) in adults was associated with a higher risk for criminal behavior, especially in men and those with comorbidities such as alcohol use disorder.
    METHODOLOGY: This cross-sectional study included 308 patients diagnosed with ADHD (mean age, 34 years; 66% men) who were evaluated at a clinic in Italy between 2019 and 2024. Structured and semi-structured interviews along with standardized diagnostic tools, including the Adult ADHD Self-Report Scale and Diagnostic Interview for ADHD in Adults, were used. Crime-related and legal information was obtained through interviews with the participants. Factors such as sociodemographic characteristics, clinical variables, and criminal behavior were analyzed.
    TAKEAWAY: In all, 8% of participants were criminal offenders, and 92% of these were men. Prescription patterns revealed a significantly higher use of antipsychotics (61.4%) and antiepileptics (48.7%) among participants who committed crimes than among nonoffenders. Male sex was a significant predictor of criminal behavior (odds ratio [OR], 5.5; P = .03), as was alcohol use disorder (OR, 4.8; P = .001), in the fourth regression model. Oppositional defiant disorder was strongly associated with criminal behavior (OR, 5.3; P < .001). Combined ADHD presentation and unemployment were also potential risk factors for criminal behavior, but the associations were not statistically significant.
    IN PRACTICE: “The data from our study suggest that ADHD interacts with sociodemographic, emotional, and behavioral aspects, profoundly influencing the risk of encountering legal issues,” the authors wrote. “Specific screening programs for individuals with ADHD and their comorbidities can help identify at-risk individuals early and reduce their involvement with the judicial system,” they added.
    SOURCE: This study was led by Martina Nicole Modesti, Department of Psychology, Faculty of Medicine and Psychology, Sapienza University of Rome, Rome, Italy. It was published online on March 5 in the International Journal of Law and Psychiatry.
    LIMITATIONS: The cross-sectional design of the study limited the ability to establish causal relationships between the variables analyzed and criminal behavior. The disproportionate subsample sizes of offenders and nonoffenders may have affected the generalizability of the findings. Any ongoing pharmacological treatments may have been potential confounding factors. In addition, reliance on clinical interviews for criminal history data may have introduced recall or concealment biases. The potential effect of dynamic environmental factors, such as family or working conditions, was not considered in the study.
    DISCLOSURES: The investigators reported having no relevant conflicts of interest. This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. Note: This article originally appeared on Medscape.

  • Do Weight-Loss Medications Affect Mental Health?

    Key points: GLP-1 receptor agonists have been associated with multiple adverse psychiatric events. The drugs have been associated with depression, anxiety, insomnia, and suicidality. These correlations do not prove causality. More research on the topic is underway.
    In late 2017, the FDA approved the best-known GLP-1 receptor agonist, which you’ll probably know as “Ozempic.” At first it was only recognized as appropriate to treat Type 2 diabetes, but just over four years later, a 2 mg dose of the same drug received FDA approval for weight management. These medications then surged in popularity—according to a health tracking poll by KFF, as of May 2024 as many as 1 out of every 8 American adults had taken a GLP-1 drug. But by fall 2023, the system through which physicians notify the FDA of “adverse events” had already accumulated almost 500 reports of possible mental health side effects attributed to GLP-1 drugs. Anxiety, depression, and even suicidal ideation in patients had been reported by doctors across the country. NPR reported in September 2024 that in 96 of these adverse events, the patients in question had experienced suicidal thoughts; five of these patients died. To be clear, the FDA’s database is not designed for cause-and-effect reasoning, so it’s not possible to assign full responsibility for these serious psychological conditions to GLP-1 drugs. But a simultaneous groundswell on social media seemed focused on the same phenomena, as a 2023 study documented (Arillotta et al., 2023). Data were collected from several social media websites, including TikTok, YouTube, and Reddit, and the resulting information was broken down via the use of an AI spreadsheet analysis platform called Numerous. Most social media comments pertaining to GLP-1 receptor agonists and psychological symptoms focused on “sleep-related issues, including insomnia,” but also called out “anxiety…depression…and mental health issues in general.” Again, some apparent links between weight loss medications and psychological side effects had been established, but without any clear causal link between them. Perhaps, for example, people who elect to use GLP-1 drugs are already more likely to experience anxiety or depression. Social media users also noted that the drugs appeared to be losing their effectiveness over time, which could result in disappointment or in feelings of being “stuck”; thus, the researchers here may have found it hard to tease apart the drug’s potentially depressogenic consequences from the results of the drug’s failure to have its hoped-for effect. The next step was to perform another, bigger study, with more patients and more adverse events, collected over a longer period of time. Sure enough, a subsequent study of the FDA’s Adverse Event Reporting System was conducted, using a much larger sample. Last year, Chen, W. et al. reviewed 181,238 adverse event (AE) reports for psychological symptoms, then segregated 8,240 AEs as useful and relevant.
    This study found a “significant association” between GLP-1 drugs and “the development of specific psychiatric AEs.” Chen et al.’s list included “eight categories of psychiatric AEs, namely, nervousness, stress, eating disorder, fear of injection, sleep disorder due to general medical condition—insomnia type, binge eating, fear of eating, and self-induced vomiting.” The study even provided a median time to onset, which was 31 days (although it varied across the drugs), suggesting that, among the people who reported negative psychological outcomes, most experienced their symptoms about a month after first taking the drugs.

    Later studies found even more serious effects. In the journal Nature, Kornelius et al. published a paper in 2024 identifying a “significant association between GLP-1…treatment and [a] 98% increased risk of any psychiatric disorders. Notably, patients on GLP-1 RAs exhibited a 195% higher risk of major depression, a 108% increased risk for anxiety, and a 106% elevated risk for suicidal behavior.” This study focused on psychiatric conditions in patients with obesity and included over 162,000 patients split into matched pairs (meaning each treated subject was matched to one control subject). Not all GLP-1 drugs had the same psychiatric effects: Ozempic was associated with an “approximately 2.4-fold increase in risk” of suicidal ideation when compared with Wegovy and Saxenda. Plus, different types of patients had different outcomes. As Kornelius et al. (2024) wrote, “Females had a higher risk of anxiety and suicidal ideations or attempts compared to males. Younger participants (18-49 years) exhibited a higher risk of suicidal ideations or attempts, while older participants (≥ 70 years) had a lower overall risk of psychiatric diseases. Additionally, racial differences were observed, with Black patients showing a higher risk of suicidal ideations or attempts compared to White and Asian patients.” The study urged physicians to consider their patients’ psychiatric histories before prescribing drugs like Ozempic.

    But just as the potentially bad news about GLP-1 receptor agonists had begun to form a clear picture, other studies complicated it by finding that these drugs can sometimes have antidepressant effects. In the American Journal of Geriatric Psychiatry, Chen, X. et al. (2024) wrote that these drugs induced “significant reductions in the depression rating scales compared to control treatments” and were found to “alleviate depressive symptoms” in patients with Type 2 diabetes. And when only the results of randomized controlled trials (which many people say produce the most reliable type of scientific data) were considered, GLP-1 receptor agonists showed a significant positive effect on depressive symptoms in adults.

    So, as of now, the jury is still out, and a final understanding of the psychological effects of the current crop of weight-loss drugs has not yet been achieved. Research on the risks and benefits of GLP-1s is ongoing, and thousands of new social media comments accumulate every day. Taking these drugs may well be associated with a risk of psychological harm; even so, your physician may also be able to provide evidence of medical risks inherent in maintaining a heavier-than-average weight. In any case, all weight-related decisions are extremely personal and can be very complex; it’s unfortunate that, for now, the potential risks inherent in GLP-1 use may only complicate the picture further. Note: This article originally appeared on Psychology Today.
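    A note on the arithmetic behind phrases like “98% increased risk” or “2.4-fold increase”: these are restatements of a risk (or hazard) ratio. The short Python sketch below converts between the two forms using the figures quoted above; the conversion is generic arithmetic, not a method taken from any of the studies.

        # Convert between "X% increased risk" wording and a risk/hazard ratio.
        def ratio_from_percent_increase(pct):
            # a "98% increased risk" corresponds to a ratio of 1.98
            return 1 + pct / 100

        def percent_increase_from_ratio(ratio):
            # a ratio of 2.95 corresponds to a "195% higher risk"
            return (ratio - 1) * 100

        print(ratio_from_percent_increase(98))   # 1.98 -> any psychiatric disorder
        print(ratio_from_percent_increase(195))  # 2.95 -> major depression
        print(ratio_from_percent_increase(108))  # 2.08 -> anxiety
        print(ratio_from_percent_increase(106))  # 2.06 -> suicidal behavior
        print(percent_increase_from_ratio(2.4))  # 140.0 -> a "2.4-fold" risk is 140% above baseline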

  • Esketamine Combo Bests SSRIs for Resistant Depression in Head-to-Head Trial

    Esketamine combined with a serotonin-norepinephrine reuptake inhibitor (SNRI) for treatment-resistant depression (TRD) was linked to significantly lower rates of several adverse outcomes than esketamine plus a selective serotonin reuptake inhibitor (SSRI), new research showed. Prior research has suggested that esketamine combined with either class of antidepressant is effective for TRD, but whether an add-on SNRI would yield better results than an add-on SSRI was unclear because of a lack of head-to-head comparisons.

    The retrospective cohort study of more than 55,000 participants with TRD showed that adding an SNRI to esketamine nasal spray was associated with significantly lower rates of all-cause mortality, hospitalization, and depression relapse than adding an SSRI. However, esketamine plus an SSRI was linked to a lower incidence of suicide attempts. While both treatment combinations were associated with low rates of adverse outcomes, “notable differences exist between them,” study investigator Antonio Del Casale, MD, PhD, Sapienza University of Rome, Rome, Italy, and colleagues wrote. “These findings emphasize the critical role of selecting the appropriate antidepressant partner for esketamine and tailoring treatment to an individual patient profile,” they added. The results were published online on April 2 in JAMA Psychiatry.

    Lower Relapse Rate

    For the study, researchers assessed data collected in electronic medical records across 20 countries. They included 55,480 participants with TRD, half of whom were treated with esketamine plus an SNRI (58.6% women; mean age, 45.9 years) and the other half with esketamine plus an SSRI (57.7% women; mean age, 46 years). SNRIs used were desvenlafaxine, duloxetine, levomilnacipran, milnacipran, or venlafaxine. SSRIs used were citalopram, escitalopram, fluoxetine, fluvoxamine, paroxetine, sertraline, or vilazodone.

    Results showed that in the overall study population, rates of depression relapse (17.8%), all-cause mortality (7.2%), hospitalization (0.1%), and suicide attempts (0.4%) were low throughout the 5-year observation period. However, the group receiving esketamine plus an SNRI had a significantly lower relapse rate than the add-on SSRI group (14.8% vs 21.2%; risk ratio [RR], 1.43) and lower rates of all-cause mortality (5.3% vs 9.1%; RR, 1.72) and hospitalization (0.1% vs 0.2%; RR, 3.01; all P < .001). Although low in both groups, the incidence of nonfatal suicide attempts was slightly but significantly lower in the group receiving esketamine plus an SSRI (0.3% vs 0.5%; P = .04). Further survival analysis showed a 91.4% 5-year survival probability for the esketamine plus SNRI group vs 86.9% for the esketamine plus SSRI group (P < .001).

    “Across the study sample, esketamine combined with either an SSRI or an SNRI demonstrated consistently low risks across all outcomes,” the researchers wrote. Still, there were differences, and the study showed that “choice of antidepressant combined with esketamine can significantly impact clinical outcomes in TRD,” they added. The investigators reported no relevant financial relationships. Note: This article originally appeared on Medscape.
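    The risk ratios quoted above can be sanity-checked against the reported event rates: for these outcomes the SNRI combination was the lower-risk (reference) group, so the RR is roughly the SSRI-group rate divided by the SNRI-group rate. A minimal Python sketch using only the percentages from the article; the small discrepancy on hospitalization presumably reflects rounding of the published 0.1%/0.2% figures.

        def risk_ratio(rate_ssri_pct, rate_snri_pct):
            # ratio of event rates, with the SNRI combination as the reference group
            return rate_ssri_pct / rate_snri_pct

        print(round(risk_ratio(21.2, 14.8), 2))  # 1.43 -> matches the reported relapse RR
        print(round(risk_ratio(9.1, 5.3), 2))    # 1.72 -> matches the reported mortality RR
        print(round(risk_ratio(0.2, 0.1), 2))    # 2.0  -> reported RR was 3.01; the rounded
                                                 #         percentages hide the exact counts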

  • Remote Support Cut Risk for Relapse in Addiction Treatment

    TOPLINE: When delivered as a supplement to in-person care, remote interventions for alcohol or drug use disorder reduced the odds of relapse by 39% and lowered the mean number of days of substance use; their effectiveness was less clear-cut when they replaced in-person care, a study found.

    METHODOLOGY: Researchers conducted a systematic review and meta-analysis of 34 randomised controlled trials involving 6461 participants and evaluated 42 remote interventions for adults diagnosed with alcohol or drug use disorder. Included studies were obtained from 29 databases and were published between 2004 and 2023 in Organisation for Economic Co-operation and Development countries. Participants had a mean age between 20 and 51 years, and the majority were male and Caucasian. Interventions were delivered via the internet, text messages, apps, phone calls, interactive voice response, and a computer game and were grouped on the basis of whether they supplemented or replaced/partially replaced in-person care. Outcomes were relapse and the difference in the number of days of alcohol or drug use. The risk of bias was evaluated at the outcome level.

    TAKEAWAY: Patients receiving remote interventions along with in-person care showed a lower risk for relapse (odds ratio [OR], 0.61; P = .001) and a reduction in the number of days of alcohol or drug use (standardised mean difference [SMD], −0.18; P = .001) compared with those receiving in-person care alone. In studies fully or partially replacing in-person care, remote interventions showed a lower risk for relapse (OR, 0.51; P = .001) but a statistically nonsignificant reduction in the number of days of alcohol or drug use (SMD, −0.08; P = .301). About 69% of outcomes were at a high risk of bias owing to self-reported measures and missing data, and 25% of outcomes had some concerns.

    IN PRACTICE: "Remote interventions delivered as a supplementary component of in-person alcohol/drug treatment appear to be an effective approach to reducing the likelihood of relapse and days of alcohol/drug use. The evidence is not conclusive on replacing, or partially replacing in-person treatment with remote interventions, but it does not appear to lead to worse outcomes," the authors wrote.

    SOURCE: This study was led by Irene Kwan, Evidence for Policy & Practice Information Centre, Social Research Institute, University College London, London, United Kingdom. It was published online on March 24 in Addiction.

    LIMITATIONS: Most included studies were from the United States, limiting generalisability. High attrition rates may have influenced the findings. Blinding was challenging because of the nature of the interventions, and preregistered protocols with analysis plans were often lacking. Variability in outcome measures hindered comparisons across studies, and limited follow-up data prevented assessment of long-term intervention effects.

    DISCLOSURES: This study was funded through the National Institute for Health and Care Research Policy Research Programme contract with the Evidence for Policy & Practice Information Centre at University College London. The authors reported having no conflicts of interest. Note: This article originally appeared on Medscape.
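    For readers unfamiliar with the effect-size unit used above: a standardised mean difference expresses a between-group difference in units of the pooled standard deviation, so the pooled estimate of −0.18 corresponds to a small reduction in days of use. Below is a minimal Python sketch of the usual Cohen's-d-style calculation for a single hypothetical trial; the numbers are made up for illustration, and the review's actual pooling procedure across 34 trials is more involved.

        import math

        def smd(mean_tx, mean_ctrl, sd_tx, sd_ctrl, n_tx, n_ctrl):
            # standardised mean difference for one two-arm comparison
            pooled_sd = math.sqrt(((n_tx - 1) * sd_tx ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
                                  / (n_tx + n_ctrl - 2))
            return (mean_tx - mean_ctrl) / pooled_sd

        # Hypothetical trial: remote support added to in-person care vs in-person care alone,
        # outcome = mean days of alcohol/drug use over follow-up (made-up values)
        print(round(smd(6.8, 8.0, 6.5, 7.0, 120, 120), 2))  # about -0.18, a small effect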
