Virtual Reality Uterine Resectoscopic Simulator: Face and Construct Validation and Comparative Evaluation in an Educational Environment
PMC ID: 3148859 · Search term: Gynaecology[mh]
Hysteroscopy is a surgical skill that offers the gynecologist and patient improved diagnostic accuracy and clinically effective, minimally invasive surgery for the treatment of a variety of disorders, including symptomatic endometrial polyps, submucosal leiomyomas, and selected Müllerian fusion/absorption defects. Unfortunately, since its introduction into the medical literature by Pantaleoni, of Italy, in 1869, hysteroscopy and hysteroscopically directed procedures have been only reluctantly incorporated into the practice of gynecology, despite the development of highly effective instruments for endoscopic visualization and performance of intrauterine surgery. Training of residents, fellows, and practicing gynecologists has been hampered by a number of obstacles, including a lack of suitably trained mentors, limited access to appropriate equipment, restrictions on resident training hours, and the absence of readily available systems to support training outside of the operating theater. A number of systems and simulators have been proposed, but there remains a relative paucity of postgraduate training programs with robust hysteroscopic education curricula. The availability of microprocessors capable of managing large volumes of data throughput has made possible the design of virtual reality (VR) surgical simulators that create a near-realistic operating environment. Such simulators have been demonstrated to be effective in other surgical disciplines and for other techniques, reducing both the need for "in-OR" training time and the activity associated with adverse events. VR simulation can provide a realistic operating environment that poses no surgical risk to a patient and consumes no valuable operating room resources or time. If properly designed, such a system can also provide the opportunity for objective measurement of selected surgical performance outcomes.
Recently, a prototypical Virtual Reality Hysteroscopic Simulator (VRHS) was developed by Immersion Medical (CAE, Montreal, PQ, Canada), providing an opportunity to evaluate the utility of such a system for training in hysteroscopic surgery. The device comprises a realistic resectoscope with an attachable "camera" and distal sensors, a pelvis with a mechanical external cervical os, and actual electrosurgical foot pedals, all attached to a personal computer loaded with proprietary software. Following assembly, the "resectoscopic system" is inserted through the mechanical external os, allowing the distally based sensors to activate the software and produce a realistic, interactive screen image. Virtual inflow and outflow can be regulated, and the operator-controlled motions of the element and the distal optic are transmitted to the screen to simulate an actual intrauterine environment, with a loop electrode that can be manipulated and activated by the "surgeon." Activation of the foot pedals allows the surgeon to cut strips of virtual "tissue" from the target leiomyoma. The device has 2 integrated software-based skill development and testing exercises: an introductory "Manipulation Module," which requires the operator to sequentially use the electrode to touch 4 randomly located targets within the endometrial cavity; and a "Myoma Resection Module," which allows the trainee to perform resectoscopic loop resection of a posterior submucosal leiomyoma. Uterine perforation is simulated if the trainee directs the hysteroscope or electrode in a fashion that would traverse the myometrium. The pilot study had 2 hypotheses: (1) the device will show face validity, in that experts will complete the skill testing in less time than novice hysteroscopists; and (2) novice hysteroscopists will show significant improvement in their times to successfully complete both skills following a prescribed training program (construct validation).
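As a rough illustration of the kind of objective metrics such a module records, the Manipulation Module outcome (targets touched, exercise time, perforation) could be tallied along the following lines. This is a sketch only: the simulator's internal scoring format is not published, and the field names and time values here are invented.

```python
# Hypothetical tally of a Manipulation Module run: targets touched within the
# time limit, exercise time, and whether a virtual perforation occurred.
from dataclasses import dataclass, field

TIME_LIMIT_S = 60   # the Manipulation Module allows a maximum of 1 minute
N_TARGETS = 4       # four randomly located targets in the endometrial cavity

@dataclass
class ManipulationRun:
    touch_times_s: list = field(default_factory=list)  # elapsed time of each target touch
    perforated: bool = False

    def score(self):
        touched = sum(1 for t in self.touch_times_s if t <= TIME_LIMIT_S)
        # exercise time is the final touch if all targets were reached,
        # otherwise the full time limit elapsed
        elapsed = max(self.touch_times_s) if touched == N_TARGETS else TIME_LIMIT_S
        return {"targets": touched, "time_s": elapsed, "perforation": self.perforated}

# Invented example: all four targets touched, last one at 33 seconds
run = ManipulationRun(touch_times_s=[8.2, 17.5, 29.0, 33.0])
print(run.score())
```

A real simulator would of course record far richer telemetry (instrument path, electrode activation time), but per-run summaries of this shape are what the study's comparisons rest on.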
The Kaiser Permanente Regional Research Board approved the study. All procedures were performed in the Simulation Center of the Kaiser Foundation Hospital's Los Angeles Medical Center. The study was designed in 2 phases. Phase 1 was the face validation component, designed to compare a group of novice hysteroscopists with a group of surgeons with expertise in hysteroscopy and resectoscopic procedures. Phase 2 was designed to measure construct validity by using the data from the novices enrolled in Phase 1 as a baseline for comparing outcomes following a prescribed training program using the VRHS. The post-training data were also compared with the data from the 3 "experts" obtained in Phase 1. The investigators identified 14 subjects from the attending staff and residents of a community Obstetrics & Gynecology residency education program: 3 "expert" hysteroscopists and 11 who were defined as "novices." Each of the expert hysteroscopists had more than a decade of extensive experience with diagnostic and operative hysteroscopy, as well as resectoscopic surgery. All participated actively in resident surgical teaching, but none had any exposure to the VRHS. Novice hysteroscopists had no experience with hysteroscopy and comprised both medical students and first-year residents. For Phase 1, all of the subjects were oriented to the VRHS in a single session that included a test "run" to familiarize themselves with each of the 2 exercises. Then, each subject performed a recorded "baseline" run through each exercise, first the Manipulation Module and then the Myoma Resection Module. The exercise times, target outcomes, and measurable errors were recorded using the internal recording system and compared. A maximum time of 1 minute was allowed for the Manipulation Module, while 3 minutes were allotted for the Myoma Resection Module.
For Phase 2, the 11 novice hysteroscopists were provided hysteroscopic education that included training in manipulation of the electrode, the lens, and other relevant aspects of resectoscopic surgery. Two "runs" were supervised, and then the trainee was left alone, over a period of 2 weeks, to complete a total of 9 postbaseline runs through both tests. Following completion of the ninth session, a final proctored run through each of the 2 exercises was performed, recorded, and then compared to the baseline for the trainee and, collectively, to the expert scores obtained in Phase 1. Statistical comparisons were performed using paired t tests for the Manipulation Module, while the Wilcoxon rank sum test was used to compare percentage resection of myomas for the Myoma Resection Module. Because this was considered a pilot study, no formal power analysis was performed to determine sample size.

Phase 1

Manipulation Module (Baseline): All subjects participated in the manipulation test. In the 60-second exercise, each of the experts successfully touched all 4 targets, without perforation, with a mean exercise time of 33 seconds. For the 11 novices, the median number of targets touched was 2 (range, 0 to 4), one perforation occurred, and the mean exercise time was 57 seconds. These differences were significant.

Myoma Resection Module (Baseline): All subjects completed the baseline myoma resection exercise, with the experts removing a mean of 97.3% (range, 97% to 98%) of the myoma within the 3-minute exercise time. The novices removed a mean of 66.1% of the virtual myoma (range, 12% to 92%), and one perforation occurred. The differences between "novices" and "experts" were significant.

Phase 2

Manipulation Module (Run 10): Seven of the novices returned within the allotted period for the recorded 10th performance; the other 4 did not complete the program in the prescribed time and, therefore, were not included in the analysis.
The mean number of targets successfully touched was 4, and the mean time spent on the Manipulation Module was 23 seconds. The differences from baseline were significant, and the results were similar to those of the "experts" at baseline.

Myoma Resection Module (Run 10): For the 7 novices who completed the 10th run, baseline resection volume was 65.3% (range, 30% to 92%), while at run 10 it was 89% (range, 64% to 99%), with one perforation. These differences approached significance, despite a single virtual perforation by one of the novices during run 10. The differences in percentage resection between "novices" and "experts" remained significant, but were greatly reduced.
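The two statistical comparisons named in the Methods (a paired t test for the Manipulation Module times and a Wilcoxon rank-sum test for percentage resection) can be sketched as follows. All numeric values below are invented for illustration; they are not the study's raw data, which were not published.

```python
# Illustrative only: fabricated values, not the study data.
from scipy import stats

# Paired pre/post Manipulation Module times (seconds) for the same 7 novices
baseline_s = [58, 60, 55, 57, 60, 52, 59]
run10_s    = [24, 20, 27, 22, 25, 21, 23]
t_stat, t_p = stats.ttest_rel(baseline_s, run10_s)   # paired t test

# Percentage of myoma resected: 3 experts vs. 7 novices (independent groups)
experts = [97.0, 97.0, 98.0]
novices = [12.0, 45.0, 66.0, 70.0, 72.0, 80.0, 92.0]
w_stat, w_p = stats.ranksums(experts, novices)       # Wilcoxon rank-sum test

print(f"paired t test: t = {t_stat:.2f}, p = {t_p:.4g}")
print(f"rank-sum test: z = {w_stat:.2f}, p = {w_p:.4g}")
```

This assumes SciPy is available; `scipy.stats.ranksums` implements the Wilcoxon rank-sum test for two independent samples, which is appropriate here because the expert and novice groups contain different subjects, whereas the pre/post times come from the same individuals and so take a paired test.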
In this pilot study, the VRHS appears to have face validity in that it is able to distinguish between novices and experts, by our definition. Furthermore, the results of this study support the notion that this virtual reality simulation system has construct validity, in that it was able to quantitatively distinguish the experts from the novices, and novices were able, with practice, to improve their scores so that they approached those of the experts. Although the results of the study suggest that the Manipulation Module may be predictive of the performance of myomectomy, further studies would be necessary to confirm this impression. Practice on such a system might also be expected to reduce the operating room time spent by novices in the early part of the "learning curve," a process that adds to the cost of training and/or the cost of care; however, this study was not designed to evaluate this outcome, and, consequently, such a conclusion cannot be made at this time. Simulator training using tissue simulations has been shown to be effective at improving objective structured assessment of technique (OSAT) scores in residents at one month, an effect that deteriorates by 6 months. The process of setting up such time-intensive laboratories is not conducive to repetitive training and practice, thereby undermining the value of such systems. Virtual reality systems, at least in theory, can be available to busy residents on a daily or nightly basis and may therefore provide the opportunity for more sustained retention of learned skills. Face validity simply implies that a given test appears to measure a given skill. This VR system is 1 of 2 hysteroscopic VR systems that have been evaluated for face validity.
The other hysteroscopic VR simulator, HystSim (VirtaMed, Zurich, Switzerland), has also undergone construct validity evaluation and has been demonstrated to have value in diagnostic hysteroscopy and in manipulation of the instrument, but a surgical intervention such as resectoscopic myomectomy was not tested. Although it is anticipated that training on the current system would reduce the additional operating room time involved in training residents in operative hysteroscopy and resectoscopy, a well-designed study involving operating room performance would be necessary to prove this hypothesis. Several aspects of resectoscopic surgery are not assessed by this system; the competencies not tested included resectoscope assembly, use of fore-oblique (angled) lenses, fluid management, and specimen removal. Many of these aspects could be added to the training provided by the VR system in the context of a comprehensive hysteroscopy and resectoscopic surgery education program. The impact of the training aspect of the study is limited somewhat by the number of subjects who did not return for the follow-up testing. While the sample size appeared to be adequate, it was difficult to organize residents to return within a predetermined period of time to complete the follow-up testing. Nonetheless, our data on the impact of training nearly reached significance, even though one of the residents who had been performing well on the 7th through 9th sessions perforated the uterus on the 10th. As a result of this pilot study, we feel that we have demonstrated potential value for this system in the early training of novices in hysteroscopic surgery. Further studies with larger sample sizes that include evaluation of impact on both resource and clinical outcomes, including risks and complications, will be necessary to determine the ultimate value of these systems.
Pain Reconceptualisation after Pain Neurophysiology Education in Adults with Chronic Low Back Pain: A Qualitative Study
PMC ID: 6157134 · Search term: Physiology[mh]
Pain neurophysiology education (PNE) has become a commonly used educational intervention for patients with chronic pain. PNE is a cognitive behavioural-based intervention in that it aims to reduce inappropriate beliefs and maladaptive behaviours, in order to decrease pain and disability, by explaining the biology of pain to the patient. A growing body of literature supports its effectiveness. Patients with chronic pain often hold strong biomedical-model beliefs, frequently reinforced by health care professionals, that their pain is due to tissue damage. A number of conceptual models have proposed that such inappropriate beliefs can lead to the development or maintenance of chronic pain. Within the fear-avoidance model, when pain is perceived as threatening, catastrophic thinking can result in pain-related fear and anxiety, leading to avoidance behaviour, disability, and a vicious cycle of chronic pain. Additionally, as proposed within the model of misdirected problem-solving, inappropriate beliefs about tissue damage housed within a medical-model framework can lead patients with chronic pain to repetitively seek solutions to remove their pain, moving from one treatment to the next, stuck within a perseverance loop. Each unsuccessful solution amplifies the condition and can prevent the patient from reframing their efforts away from an arguably unachievable goal of pain cessation to one of pursuing a valued life in the presence of pain. A primary mechanism by which PNE purports to work is by helping patients better understand their pain and the issues around its causes, correcting inappropriate beliefs—reconceptualising their pain.
Reconceptualisation can be defined by four key concepts: (i) pain does not provide a measure of the state of the tissues; (ii) pain is modulated by many factors across somatic, psychological, and social domains; (iii) the relationship between pain and tissue becomes less predictable as pain persists; and (iv) pain can be conceptualised as a conscious correlate of the implicit perception that tissue is in danger. In theory, pain reconceptualisation should reduce the commonly perceived fear that pain is a clear signal of tissue damage by dispelling the notion that pain is an accurate indication of the state of tissue. Reduction of this fear may lead to reduced pain-related fear, distress, and disability; improved physical and mental health; an escape from the perseverance loop identified within the misdirected problem-solving model; and potentially reduced levels of pain. Only a few studies have explored the phenomenon of reconceptualisation as a key mechanism of PNE. Evidence that PNE improves participants' knowledge of pain neurophysiology and reduces fear avoidance and pain catastrophising has been used to imply that reconceptualisation is a key factor. However, the narrow scope of the outcome measures (structured questionnaires) in these studies provides limited insight into the complex phenomenon of pain reconceptualisation, and a validated questionnaire for the measurement of reconceptualisation has not been developed. At this stage in the development of the evidence, qualitative methodology is better suited to studying pain reconceptualisation, as it allows for an in-depth exploration of multifaceted phenomena such as reconceptualisation. Our previous studies have found that patients with chronic pain often hold conflicting views about the cause and nature of their pain. Qualitative methods can help to reveal and explore these conflicting, complex beliefs to an extent that quantitative methods cannot.
Two recent qualitative studies completed by our group identified the level of pain reconceptualisation following a single 2-hour session of PNE in patients with chronic pain as "partial and patchy." However, where degrees of reconceptualisation were evident, we also saw clinical improvements, supporting the idea that reconceptualisation is a central mechanism of PNE's effect. A notable finding was the importance of the relevance of PNE to the individual's specific experience, as opposed to a more general experience of living with pain. The participants included in these two studies had a range of pain conditions, including multisite pain, lower back pain (with and without leg pain), thoracic pain, throat pain, complex regional pain syndrome, neck pain, and upper limb pain. A key factor which may affect relevance to the patient is their pain condition and how they perceive PNE fits with their symptoms. Poor perceived fit between symptoms and PNE may reduce perceived relevance for the patient: "For me personally I didn't think it was any good for the symptoms that I have… it was for more for people with different parts of the body pain and not the one I have." Thus, looking at the experience of PNE for specific pain populations may be important. In Robinson et al., four participants out of a total of 10 demonstrated some evidence of reconceptualisation following PNE. All four had multisite pain. In contrast, two of the four participants with chronic low back pain (CLBP) reported that PNE was not relevant to them, perceived no benefit, and showed no signs of reconceptualisation. Within educational theory, conceptual change requires a dissatisfaction with one's current understanding of a concept. For many, perhaps most, people, there is a strong belief that back pain can be readily aligned with the medical/tissue-injury model.
This gives rise to the possibility that they may be more accepting of a biomedical explanation, and thus less open to reconceptualisation, than people with multisite pain or painful conditions that defy the logic of a medical-model explanation. It may also be that they are less likely to have encountered an alternative explanation for their pain beyond the medical model. This corresponds with observations from our previous work, where a participant with complex regional pain syndrome (CRPS), a condition that fits poorly with the medical model, demonstrated pain reconceptualisation following PNE and showed clear signs of an awareness and understanding of pain hypersensitivity before receiving PNE. Chronic low back pain (CLBP) is a particularly important pain subgroup to focus upon, as it is one of the most common pain conditions globally and is the largest single cause of disability-adjusted life years (2,313 per 100,000 population) in the UK. The National Institute for Health and Care Excellence (NICE) estimates that back pain costs the UK economy over £2.1 billion annually. Considering the potential importance of the person's pain condition with respect to perceived relevance, reconceptualisation, and ultimately the effectiveness of PNE, there is a need to explore pain reconceptualisation in people with CLBP following PNE. In doing so, new approaches to tailoring and enhancing this education specifically for patients with CLBP may be identified. Thus, the aim of this study was to investigate the extent, and nature, of people's reconceptualisation of their CLBP following PNE.

2.1. Design

We used theoretical thematic analysis with a focus towards deductive analysis to explore the applicability of the themes we had found in our previous work on people with chronic pain in general to a group with CLBP only. Due to the heterogeneity of this study sample, we felt it was important to be open to exploring the data for any additional or new themes that might emerge.
To reflect this, we also used inductive analysis.

2.2. Recruitment and Sample

Participants were recruited from a single site, an NHS pain clinic in the North East of England. We aimed to recruit a convenience sample of 10–12 participants. While no formal guidelines exist with respect to sample-size estimation for qualitative studies, it has been proposed that, in studies where the aim is to understand common perceptions and experiences, twelve interviews should be sufficient. Patients were eligible for inclusion if they had been referred to PNE as part of their usual care, were ≥18 years of age, and if their primary complaint was chronic (>6 months' duration) lower back pain (with or without leg symptoms) of a neuro/musculoskeletal origin. All referrals were made by consultants in pain management following assessment. None of the participants required spinal or orthopaedic surgery. Patients were excluded from the study if their level of English was not judged sufficient to take part in an interview, or if their pain was not primarily associated with the musculoskeletal system (e.g., neurological conditions). To limit any feeling of coercion, patients of the interviewer (RK) were also excluded from taking part in the study. Patients with a primary complaint of LBP who had been referred to PNE as part of their usual care were sent a brief information sheet about the study. Following this, each patient was contacted by a research assistant and asked if they would like to receive more information about the study. If they did, this information was sent to them, and they were contacted to see if they would like to participate. Data were collected between September and November 2014. This study was approved by the NRES Committee Yorkshire and The Humber – Sheffield (REC reference number: 14/YH/0153). Written informed consent was obtained from all participants before they entered the study. On completion of data collection, all data were fully anonymised.

2.3.
Intervention

All participants in this study received PNE as part of their routine NHS care. The PNE session was based heavily on the manual Explain Pain. The PNE session was delivered in a group setting of 10–12 patients with chronic pain. The patients within the groups were heterogeneous with respect to their clinical condition; however, only people with CLBP were recruited into this study. Thus, the PNE delivered was not back-pain specific. The intervention was delivered by two experienced pain-specialist physiotherapists who had each worked within the pain setting for >5 years, had undertaken postgraduate training in pain, and had attended Explain Pain courses delivered by the Neuro Orthopaedic Institute. Published service-evaluation data have shown that patients with chronic pain who receive PNE at this clinic demonstrate average increases in pain knowledge in keeping with increases reported in the literature.

2.4. Data Collection

Participants underwent a semi-structured interview one week prior to PNE. The interview script is provided in the Supplementary Material. The pre-PNE interview focused on beliefs about the nature, cause, and experiences of their pain. Three weeks after PNE, participants were reinterviewed by the same researcher using the same semi-structured approach. Participants were asked the same questions as in the first interview but were also asked to reflect on any change in their understanding of their pain. All interviews took place in the hospital in a private room, lasted approximately one hour, and involved only the interviewer and participant. They were audio recorded and transcribed verbatim for thematic analysis.

2.5. Analysis

The primary analysis of the data was conducted by RK using NVivo software (version 10), following the guidelines for theoretical thematic analysis outlined by Braun and Clarke. Each transcript was read multiple times, and statements were coded according to their meaning.
Coded statements were grouped into the four a priori themes that we had found in our previous work: degrees of reconceptualisation, personal relevance, importance of prior beliefs, and perceived benefit of PNE. We also provided for the emergence of themes that did not fit with the above. To ensure dependability, all views were treated equally. Three weeks following the second interview, RK telephoned all participants to verify that he had made an accurate interpretation of each participant's account. Only 8 participants could be contacted. During the telephone conversation, extracts from the interview were described to the participant to assess and verify whether the researcher had made an appropriate interpretation of the interview comments. In all cases, the participants agreed with the interpretation of the account; therefore, no amendments were made. The average duration of the telephone conversation was 12 minutes. Following this process, a second researcher (HE) read all the transcripts to ensure the themes were logical and rooted in the data. To increase credibility, the results were circulated throughout the rest of the research team for further refinement and to be collated into a coherent account. Evidence for or against the a priori themes was sought from participants' subjective accounts, and changes were explored by comparing participants' pre- and post-PNE interviews.

2.6. Reflexivity

Reflexivity relates to the amount of influence the researcher—consciously or unconsciously—has on the outcome of the study and can be defined as "a continuous process of reflection by the researcher on their values, preconceptions, behaviour, or presence and those of the participant which can affect interpretation of responses." Therefore, disclosure of the researchers' standpoints allows the reader to consider how these might have impacted the findings. To this end, four of the researchers (RK, VR, JW, and CR) have experience of delivering PNE.
RK and VR have extensive experience in pain management (6 and 11 years, respectively, as full-time physiotherapists in pain management), regularly deliver PNE as part of their clinical practice, and have undertaken professional training to do so. It is their (RK, VR, JW, and CR) belief that PNE is a clinically useful intervention; however, they have no vested interest in the outcome of this study. DM and HE do not have experience of delivering PNE clinically; their involvement is from a research-methods perspective. They support the potential underlying theory of reconceptualisation and remain open to the theories being shaped by evidence.
Published service evaluation data have shown that patients with chronic pain who receive PNE at this clinic demonstrate average increases in pain knowledge in keeping with increases reported in the literature . Participants underwent a semistructured interview one week prior to PNE. The interview script is provided in Supplementary Material ( ). The pre-PNE interview focused on beliefs about the nature, cause, and experiences of their pain. Three weeks after PNE, participants were reinterviewed by the same researcher using the same semistructured approach. Participants were asked the same questions as in the first interview but were also asked to reflect on any change in their understanding of their pain. All interviews took place in a private room in the hospital, lasted approximately one hour, and were attended only by the interviewer and participant. They were audio recorded and transcribed verbatim for thematic analysis. The primary analysis of the data was conducted by RK using NVivo software (version 10), following the guidelines for theoretical thematic analysis outlined by Braun and Clarke . Each transcript was read multiple times and statements were coded according to their meaning. Coded statements were grouped together into four a priori themes that we found in our previous work —degrees of reconceptualisation, personal relevance, importance of prior beliefs, and perceived benefit of PNE. We also provided for the emergence of themes that did not fit with the above. To ensure dependability, all views were treated equally. Three weeks following the second interview, RK telephoned all participants to verify that he had an accurate interpretation of the participants' accounts. Only 8 participants could be contacted. During the telephone conversation, extracts from the interview were described to the participant to assess/verify whether the researcher had made an appropriate interpretation of the interview comments. 
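As a purely illustrative sketch, the deductive grouping step described above—coded statements assigned to the four a priori themes, with unmatched material set aside for inductive analysis—can be expressed programmatically. The theme keywords and example statements below are invented for the sketch and do not represent the actual NVivo codebook used in the study:

```python
# Hypothetical sketch of deductive-then-inductive grouping of coded statements.
# Keyword lists are invented for illustration only.

A_PRIORI_THEMES = {
    "degrees_of_reconceptualisation": {"nerve", "brain", "sensitisation", "signals"},
    "personal_relevance": {"my pain", "relate", "relevant"},
    "importance_of_prior_beliefs": {"always thought", "told", "diagnosis"},
    "perceived_benefit": {"helped", "cope", "understand"},
}

def group_codes(coded_statements):
    """Assign each coded statement to any matching a priori theme; statements
    matching no theme are collected as candidates for inductive analysis."""
    grouped = {theme: [] for theme in A_PRIORI_THEMES}
    emergent = []
    for statement in coded_statements:
        text = statement.lower()
        matched = False
        for theme, keywords in A_PRIORI_THEMES.items():
            if any(keyword in text for keyword in keywords):
                grouped[theme].append(statement)
                matched = True
        if not matched:
            emergent.append(statement)
    return grouped, emergent
```

In practice this grouping was an interpretive judgement made by the researcher rather than a keyword match; the sketch only makes the deductive/inductive split concrete.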
In all cases, the participants agreed with the interpretation of the account. Therefore, no amendments were made. The average duration of the telephone conversation was 12 minutes. Following this process, a second researcher (HE) read all the transcripts to ensure the themes were logical and rooted in the data. To increase credibility, the results were circulated throughout the rest of the research team for further refinement and to be collated into a coherent account. Evidence for or against the a priori themes was sought from participants' subjective accounts, and changes were explored by comparing participants' pre- and post-PNE interviews. Reflexivity relates to the amount of influence the researcher—consciously or unconsciously—has on the outcome of the study and can be defined as “a continuous process of reflection by the researcher on their values, preconceptions, behaviour, or presence and those of the participant which can affect interpretation of responses” . Therefore, disclosure of the researchers' standpoints allows the reader to consider how this might have impacted on the findings. To this end, four of the researchers (RK, VR, JW, and CR) have experience of delivering PNE. RK and VR have extensive experience in pain management (6 and 11 years, respectively, as full-time physiotherapists in pain management), regularly deliver PNE as part of their clinical practice, and have undertaken professional training to do so. It is their (RK, VR, JW, and CR) belief that PNE is a clinically useful intervention; however, they have no vested interest in the outcome of this study. DM and HE do not have experience of delivering PNE clinically. Their involvement is from a research methods perspective. They support the potential underlying theory of reconceptualisation and remain open to the theories being shaped by evidence. Of the 12 participants initially recruited, 11 provided both a pre- and a postinterview. One participant did not provide a postinterview (Participant 6). 
This individual did not supply a reason for this and we did not have ethical approval to approach her to find out why she did not attend ( ). Of the 12 participants, 7 were female and 5 were male. All participants were diagnosed with low back pain of greater than 6 months duration. The average (range) duration of pain was 10 years and 4 months (8 months–26 years). The average (range) age of participants was 48 years (25–72 years). Of the 12 participants, 3 were unemployed, 6 were employed, and 3 were retired. Participants ranged from having no qualifications to holding a BSc (Hons) degree. A summary of how each participant was analysed against the a priori themes is shown in . Additional themes, beyond those identified a priori, did not emerge from the data. 3.1. Theme 1: Degrees of Reconceptualisation No evidence for reconceptualisation was found in the accounts of Participants 9, 11, and 12. Following PNE, their explanations of the current cause of their pain were expressed exclusively in biomedical language, as was the case before PNE. “When they done the MRI, when they done that, they discovered I had this impingement in my spine.” (P9 pre) “The reason why I'm in pain? Because of my impingement...” (P9 post) We observed evidence of reconceptualisation in the accounts of P1, 2, 3, 4, 7, 8, and 10. This evidence took various forms: language that no longer discussed pain in purely biomedical terms, the use of neurophysiological terms in a way that was not evident in the interviews before PNE, and new language about the links between pain and emotions. P10's shift from an entirely biomedical view of her pain to becoming open to the idea that such an explanation may not be sufficient is illustrative. “…I won't have that made as an excuse for this because there's something real happening in my back. 
I think there's something wrong with my discs.” (P10 pre) “…there might not be [a structural] explanation for it…as it was explained in the session last week, it might not be structural.” (P10 post) For P1, 2, 3, 7, and 8, we considered the evidence for reconceptualisation as partial and patchy because the language consistent with reconceptualisation was accompanied by the language that was consistent with a biomedical understanding of pain. For example, in her interview before PNE, P1's response to being asked about the cause of her back pain was “Sclerosis…I know I've got disc degeneration.” (P1 pre) After PNE, she introduced neurophysiological language using the phrase “new nerve” in relation to neuroplasticity. “…it is the new nerve in sending the messages up…” (P1 post) while still describing the current cause of her pain in structural terms as before PNE. “I know I've got sclerosis of my lower back…whether the arthritis is starting to affect it more I don't know.” (P1 post) Participant 4, however, showed strong signs of reconceptualisation that exceeded partial reconceptualisation. He demonstrated the clearest change from pre- to post-PNE with respect to his explanation of his pain and his appreciation of the role of psychosocial factors on his pain. Both showed a clear shift away from the medical model. Prior to PNE, the participant believed that the most likely cause of his back pain was a fracture that had shown up in an MRI scan based on consultations with two different health care professionals. “He showed me on the thing (MRI scan) with his finger, that looks like a stress fracture to your back.” (P4 pre) “He (the health care professional) said, and he believed that I've probably like fractured a couple of bones in my body.” (P4 pre) After PNE, P4's explanation of his current pain was uniformly expressed in neurophysiological language with an absence of the biomedical language that had dominated the interview before PNE. 
“…any slight jarring, or anything like that, and it sends my back into spasm, which is like just basically creating a protective shell and it's so used to doing it it's on hypersensitive and I think that's generally why my pain is, and it's just not switching off…(Interviewer: What causes that hypersensitivity?) …I think that's all those too much chemicals in my body.” (P4 post) Also, he showed a clear change in understanding of the link between pain and mood from tenuous “…I won't completely reject it…” (P4 pre) to a full acceptance of the links. “…the psychology…and stuff like that is massive and knowing how your brain works and stuff like that is huge…” (P4 post) Participant 14 was a unique case. With a university-level educational background in biology, P14 had developed a clear understanding of pain mechanisms consistent with reconceptualisation as seen in his interview before PNE. “…I've had possibly a few back problems…and my back has picked up on this, if you like the nerve has picked up on this, it's sent the signals to the brain, the brain's sent it back down and it probably happens over two or three months.” (P14 pre) That understanding did not change after PNE but was reinforced. 3.2. Theme 2: Personal Relevance Even though he already had a clear understanding of pain mechanisms, P14 did find the session relevant to his own condition. “it all it did was to completely reaffirm the way that I was actually going or the way I'd actually thought before I came but you did it did help to if you like allay any I was going to say fears but it's not so much fears it's more concerns that I had in many ways, I'm going round the twist.” (P14 post) Of the 7 participants in whose accounts we observed evidence of reconceptualisation, we counted 5 as having applied that reconceptualisation to their own particular circumstances—P1, 2, 3, 4, and 7. In other words, their new understanding had personal relevance. 
Typically, this was noted by clear use of the first person singular such as “…basically the cause of my pain, my pain is sort of constant…” (P4 post) and by clear statements discussing the relevance of the session. “…at the time things that she was explaining did make sense and how, you know, things just triggered and how it all moves around your body and your mind and everything…I could relate to it, I could relate to it.” (P7 post) In contrast P8's account of reconceptualisation was more theoretical and related to a more general experience of living with pain, and when he described his own condition, the language was explicitly biomedical. Participants 9, 11, and 12 showed no clear evidence of relevance and indeed Participant 11 made it clear that she saw PNE as just another of the many things she was open to trying to help with her pain. “If you offered another session to me I'd still go, whether it was 100% relevant to me or not, I'll take anything that's going, I won't knock anything.” (P11 post) Participant 12 also reported a lack of relevance. His problems were pain and numbness in his legs following back surgery that had reduced pain in his back and he lamented the lack of a particular focus on his personal circumstances in the session. “…I didn't get the chance to explain what my problems were…it was about pain in general but it wasn't targeted at myself or anybody specific, it was just like everybody.” (P12 post) 3.3. Theme 3: Importance of Prior Beliefs Before PNE, all three participants in whom we found no reconceptualisation (P9, 11, and 12) believed that their current pain was caused by biomechanical factors and did not show any signs of dissatisfaction with this belief. 
The beliefs of Participants 9 and 12 were passive in that they had not really given other potential causative factors consideration, while Participant 11 was actively opposed to any alternative explanation—indeed, she had walked out of a previous consultation when the clinician enquired about social issues. “…all she wanted to know about was my personal life and I walked out because I said I'm not here about anything other than a crash…” (P11 pre) Participant 8, whose reconceptualisation was general rather than personal, had a steadfast belief that his current pain was caused by damage to his facet joints. For the other six participants in whom we did find some reconceptualisation and relevance (P1, 2, 3, 4, 7, and 10), all apart from Participant 1 stated prior beliefs which demonstrated a dissatisfaction with their existing biomedical explanation of the current pain, “…the only thing I've been told as well it's probably mechanical…I'm not convinced that it is mechanical, it's not the same kind of pain as on the left side…” (P3 pre) and/or an openness to a more biopsychosocial/neurophysiological sensitisation explanation consistent with PNE. “I think I've got a lot of nerve, I know I've got a lot nerve damage…I think it's just those nerve endings suddenly coming alive again…I presume it's just that message going to my brain saying you're in pain, that's all I'm thinking you know, I don't know if that's correct.” (P7 pre) 3.4. Theme 4: Perceived Benefit of PNE Neither Participant 8 nor Participant 10 described any clinical benefit from their PNE session. In the case of P8, rather than showing a clinical benefit after PNE, he discussed scenarios that were at odds with the aims of PNE. Most marked were statements about restricting movement and activity because of the potential damage to structures in his back. 
While he offered an explanation for back pain linked to neurophysiology following PNE, “… a build-up of the gateways being open permanently…allowing sensation to override…” he clearly continued to link his pain with tissue damage. “I think it's telling me be careful…because you don't want to aggravate an injury or a potential injury or something's going to happen if you continue with that activity.” (P8 post) The context of this was that he was comfortable with the facet joint diagnosis that he had received and its plausibility was enhanced because he had experienced benefit from a stretching regime that he could rationalise in terms of that diagnosis. That ties in with his general rather than personal reconceptualisation. P14 reported clinical benefit mainly in terms of reinforcement of his current understanding “…all it did was to completely reaffirm the way that I was actually going or the way that I'd actually thought before I came to you…” (P14 post) and clarification of some concerns that were causing him confusion. “…it did help to, if you like, allay any, I was going to say fears, but it's not so much fears, it's more concerns that I had in many ways, I'm going round the twist.” (P14 post) The remaining participants who we considered to have showed various degrees of partial reconceptualisation (P1, 2, 3, 4, and 7) all spoke about benefits from PNE. These described improved understanding about their pain and its management; “It made a lot of sense as to why even though especially over the last three or four years and all they've been doing is upping the painkillers why I'm not getting the relief that I thought I would be getting off them.” (P3 post) an increased ability to cope with pain; “…I suppose it's the acceptance what I've got out from this session is like to trying to accept the fact that you've got the pain for life and it's how that pain is managed is what makes life more manageable in itself.” (P2 post) and functional improvements. 
“…when I was walking quite briskly I just slowed down. I thought, oh calm down you've got plenty of time to get there…where before I would have just carried on…” (P7 post) Here, P7 describes how her new understanding of her pain influenced her walking in a form of activity pacing to carry on functioning while still experiencing pain. Those who did not show signs of reconceptualisation under our criteria (Participants 9, 11, and 12) showed neither personal relevance nor clinical benefit. “It was more interesting than useful.” (P11 post) Participant 2 provided the first example in the literature of evidence of an adverse effect from PNE in that she found the session to be upsetting. She explained how the PNE instructor had given an example of someone who injured his back falling off a ladder and then found his pain triggered when he saw a ladder. From that example, Participant 2 recognised how she associated her back pain with childbirth and that now the presence of her child was acting as a trigger for her pain. 
“They made a reference to a person who had chronic back pain after having fallen off a ladder and every time they saw a ladder or had to go anywhere near a ladder it triggered the pain, made it worse, and although that's nothing like my situation it made me worry because my back pain is related to childbirth that the effects my pain was having on my family… I was upset to think that my pain was sometimes worse when my daughter was being more demanding and although that scenario that was given that person could spend a good quality of their life avoiding the situation, avoiding using a ladder, avoiding going near a ladder, I don't want to and couldn't even if I did want to avoid the situation of being a parent…I mean it was just that the pain could be associated to the cause and knowing the cause of my pain was my daughter initially though it wasn't her fault.” (P2 post) 
This study aimed to explore the extent, and nature, of patients' reconceptualisation of their CLBP following PNE. 
The study investigated whether the findings from our previous studies on reconceptualisation with PNE for people with chronic pain were sufficient to describe the experience of people specifically with CLBP. We found that the a priori themes—degrees of reconceptualisation, personal relevance, importance of prior beliefs, and perceived benefit of PNE—were all clearly identifiable within the data and did indeed provide a good description of participants' accounts. Our finding of partial and patchy reconceptualisation, whereby participants showed a range of degrees of reconceptualisation including none, is similar to what we found previously . Our earlier observation of the importance of prior beliefs applies here as well. This time, however, we found strong signs of reconceptualisation in one participant, P4. What was interesting was that his prior beliefs were not notably dissimilar to those of others. The role of prior beliefs of participants within our study was in keeping with the four steps to accommodate a new scientific concept outlined by Posner et al. : (1) dissatisfaction with current beliefs, (2) the new concept making sense to the person, (3) plausibility of the new concept, and (4) a belief that the new concept will be of practical help to the person. Broadly, those who showed no signs of reconceptualisation showed no signs of dissatisfaction with their existing biomedical explanation for their pain, while the majority of those who did show signs of reconceptualisation were open to the neural sensitisation explanation of pain within PNE as plausible/relevant/potentially helpful. P4 shows that it is possible to achieve advanced reconceptualisation after one session. However, for most, it seems that more sessions would be required. P14's report of clinical benefit further highlights the importance of the availability of follow-up education. 
This was someone who had already acquired a high level of reconceptualisation and was functioning at a high level but was sufficiently troubled to seek help from a pain clinic. His expressed need was clarification of some issues that were causing him problems. Another finding in this study that we had not come across before was the distress experienced by P2. She reported the distress as happening during the session and it was still evident at the time of the interview three weeks later. We do not have any insight into how long, if at all, the distress continued into the longer term. This is the first reporting of an adverse event associated with PNE in the literature. The participant was offered the opportunity to discuss her feelings with a clinical psychologist; however, she did not think this was necessary and therefore declined the offer. The lack of long-term follow-up is a limitation of this study. Pain management is an ongoing process and this is an important gap in knowledge. As highlighted by the needs expressed by P14 for education despite having a long history of managing his pain successfully, it would be unwise to think that people would never need further education and advice. The lack of data saturation could also be viewed as a limitation of this study . However, this study did not attempt to achieve data saturation. The need for data saturation in all qualitative studies has not been established and it has been proposed that using saturation as a generic marker of qualitative research quality is misplaced . The sample size employed in this study is in keeping with previous recommendations for studies which aim to understand common perceptions and experiences within a homogeneous group . As we have previously demonstrated , relevance was once again seen as catalytic in the clinical impact of PNE. 
Interestingly, in Participant 8 we found an example of a participant who had misinterpreted the information in a way that reinforced their maladaptive beliefs and behaviour, something we had also come across in one of our previous studies. This may reflect a form of confirmation bias that has been noted in the learning of scientific concepts. Again, this reinforces the need for follow-up education and support. A strength of the study was the use of interviews before and after the PNE session, which allows greater insight into changes in beliefs than would be obtained by only interviewing people after PNE. The coherence of the themes between our previous work and the current findings lends confidence to the certainty of this evidence. That said, at this stage, the findings are still subject to the limitations of qualitative research as outlined in our last study: the findings are illustrative rather than representative, and they are constrained by the delivery of PNE as a single session, the close proximity of the post-PNE interviews to the delivery of the session, and the restriction of the sample to people whose first language is English.
4.1. Recommendations for Future Research
Important further work is needed to develop a method, probably using a questionnaire, to allow quantification of reconceptualisation so that a statistical approach can be used to produce more representative findings. This would require careful preliminary work to develop such a questionnaire with appropriate validity and reliability for a potentially mercurial construct. A useful starting point could be the pain neurophysiology quiz, which has been developed and revised as a method of assessing change in knowledge of pain physiology. Also, further work is required to extend the qualitative approach used here to explore the delivery issues stated above.
Given the importance of the personal relevance of the information provided to the patient in PNE, identified in this study and our previous work, PNE may be most effective when the information is tailored to the individual. This would be in keeping with Moseley, who found that PNE was clinically more effective, though less cost-effective, when delivered in a one-to-one compared with a group setting. Future work should explore whether PNE delivered in a homogenous patient group setting (e.g., a group of patients with CLBP), facilitating a more tailored group approach, would maximise both clinical and cost-effectiveness. Patient group-specific PNE curricula are already available for a range of specific pain groups, including people with CLBP. Another clinical approach to facilitate tailoring of the material, and so enhance relevance, could be to have the educating therapist undertake a thorough examination of the patient prior to delivering PNE. The examination could be used as a way of identifying individual patient issues (e.g., anxieties, fears, and misconceptions) that could be specifically targeted during the education session. Again, future work should explore whether this would enhance the effectiveness of PNE. PNE may also be most effective when delivered in combination with other interventions, such as exercise, rather than in isolation as in this study. It would be interesting to explore qualitatively the extent and nature of patients' pain reconceptualisation following PNE delivered as part of a comprehensive multimodal package of care. Finally, health care professionals' beliefs about pain can influence their clinical management of their patients. PNE has been shown to enhance health care students' understanding of pain and increase their likelihood of making appropriate recommendations for patients in practice.
However, that work has been of a quantitative nature, and there is a need to further explore health care professional students' experience of PNE and the extent and nature of their pain reconceptualisation qualitatively. This study aimed to explore the extent, and nature, of patients' reconceptualisation of their CLBP following PNE using a set of a priori themes developed from previous research with heterogeneous samples of pain patients. We found that patients with CLBP who received PNE underwent varying levels of reconceptualisation, and that the degree of reconceptualisation was influenced by previous beliefs and by how relevant the patient deemed the information. Furthermore, the degree of reconceptualisation appeared to be related to the perceived benefit reported by the patient. No new themes beyond the a priori themes emerged. The findings were in keeping with our previous work, which included chronic pain participants from a range of clinical groups including multisite pain, back pain, and complex regional pain syndrome.
The applicability of the four a priori themes, developed in previous heterogeneous pain samples, indicates that the key experiences of PNE for those with back pain are similar to those identified within samples of patients consisting of heterogeneous pain groups.
Revealing culturable fungal microbiome communities from the Arabian Peninsula desert representing a unique source of biochemicals for drug discovery and biotechnology
Microbes thrive in extreme environments, such as deserts, polar regions, and deep-sea ecosystems, by evolving unique survival mechanisms that adapt them to harsh conditions. These adaptations often include the production of specialized metabolites that play essential roles in competition, defense, and stress tolerance. For example, Penicillium chrysogenum , initially isolated from desert soil, produces a variety of antimicrobial compounds, including the β-lactam antibiotic penicillin. Similarly, Aspergillus terreus produces lovastatin, a statin used to lower cholesterol, while Streptomyces hygroscopicus , which has been isolated from alkaline soils, secretes rapamycin, a potent immunosuppressant. Beyond their pharmaceutical potential, fungi are also exploited at an industrial level for the production of various compounds. For example, Aspergillus niger produces over 70% of the world's citric acid, while Trichoderma reesei produces cellulases crucial for biofuel production and textile processing. Fusarium venenatum is used to produce mycoprotein, a sustainable meat substitute, and Aspergillus oryzae is essential in fermenting traditional Asian foods such as soy sauce and miso. These examples showcase the potential of fungal metabolites across pharmaceutical and industrial sectors. Among the most challenging habitats is the arid desert, with its extreme temperatures, UV radiation, and limited nutrients. These conditions drive the evolution of novel metabolic pathways in extremophilic organisms, representing a largely untapped source of bioactive compounds with potential in drug discovery and biotechnology. In this study, we aimed to characterize the culturable fungal communities of four desert plants native to the Arabian Peninsula deserts. These plants included Panicum turgidum , Halocnemum strobilaceum , Haloxylon persicum , and Arnebia hispidissima .
These plants have not been explored extensively for their fungal microbiomes, despite the essential role these microbes play in enhancing plant resilience and overall health. P. turgidum is a desert xerophyte that thrives in arid regions, including the Arabian Peninsula and parts of North Africa and Asia. This plant is characterized by its extensive drought resistance and plays a significant role as a nurse plant in desert ecosystems. Its rhizosphere is profoundly colonized by arbuscular mycorrhizal fungi (AMF), establishing a crucial symbiotic relationship that enhances nutrient uptake, water retention, and protection against pathogens. Recent studies have shown that AMF inoculation can improve drought tolerance in Panicum turgidum , mitigating oxidative stress and promoting chlorophyll production. H. strobilaceum is well adapted to saline and hypersaline habitats, such as salt marshes and alkali flats, and is native to regions such as the Red Sea and Mediterranean coasts. Compounds derived from H. strobilaceum have demonstrated strong antimicrobial, antioxidant, and antibiofilm activities. For instance, recent research identified two promising compounds, one an alkaloid effective against various pathogens, and the other specifically inhibiting biofilm formation by Pseudomonas aeruginosa . Furthermore, the ethyl acetate extract of this plant has shown anticancer activity against cancer cell lines common in Egypt, including prostate (PC-3), lung (A-549), and breast (MCF-7) cancer lines. H. persicum , another well-known desert xerophyte, survives in arid regions due to its high resilience to drought stress, supported by its production of intrinsic compounds and metabolites. Found across western Asia, this plant contributes significantly to soil health and seed bank diversity within desert ecosystems. A study highlighted that the rhizospheres of H. persicum are rich in archaea and fungi, which play a critical role in nutrient cycling and in promoting plant growth under harsh conditions.
A. hispidissima inhabits arid and semi-arid regions in India and northern Africa and is widely distributed in the UAE. This plant is recognized for its medicinal properties, particularly its extract containing shikonin, which exhibits significant anticancer properties through mechanisms targeting cancer cell death. Additionally, it has been traditionally used in Indian medicine for treating various infections due to its potent antimicrobial properties. The successful biosynthesis of silver nanoparticles using root extracts of A. hispidissima has been reported, showcasing their potent antioxidant and antimicrobial activities against several pathogens. This study aims to explore the fungal communities associated with these resilient desert plants, identifying fungal species and their biological significance for potential applications in pharmaceuticals and biotechnology.
Collection of plant samples
In a prior study, we explored culturable bacterial communities from both the rhizosphere (R) and endosphere (E) of four native desert plants of the Arabian Peninsula: Halocnemum strobilaceum (HS), Panicum turgidum (PT), Haloxylon persicum (HP), and Arnebia hispidissima (AH). This study extends those findings by isolating and evaluating the fungal communities associated with these plants. The methodologies for plant collection and sample preservation have been detailed previously. In brief, five replicates of each plant were gathered from different sites near the UAE, which varied in soil properties. Collected samples, comprising both roots and surrounding rhizosphere soils, were obtained in October 2022, during which daytime temperatures ranged between 34 and 40 °C with minimal rainfall.
Fungal epiphyte isolation
A modified root-washing technique, based on the method published by Banno et al., was employed to isolate epiphytic fungi. Briefly, the roots were immersed in sterile water and shaken at 250 rpm for 30 minutes, a step that was repeated three times.
Root washes were pooled, serially diluted (up to 10⁵), and 200 μL of each dilution was spread on three types of media: potato dextrose agar (PDA, HiMedia, India #MH096), Sabouraud dextrose agar (SDA, HiMedia, India #MV063), and yeast maltose agar (YMA, HiMedia, India #M1967). All media were prepared according to the manufacturer's protocol. To prevent bacterial growth, all media were supplemented with chloramphenicol (200 μg/L). Plates were incubated at 25 °C for 5 days. Fungal colonies, selected based on their morphology, were subcultured multiple times (3–5 repetitions) until pure strains were obtained. For long-term storage, fungal spores and mycelia were suspended in 25% glycerol and stored at -20 °C.
Fungal endophyte isolation
The protocol for isolating endophytic fungi involved surface-sterilizing the root tissues, followed by culturing techniques similar to those used for epiphytes. Root surface sterilization was conducted by first sonicating the roots in autoclaved water for 5 minutes to remove soil particles, followed by a series of ethanol (95% for 3 minutes) and sodium hypochlorite (3% NaOCl for 5 minutes) treatments. Between these steps, the roots were thoroughly rinsed with sterile water. Sterilization success was confirmed by rolling sterilized roots on PDA plates and incubating them at 25 °C and 37 °C; no microbial growth was observed. The sterilized roots were then sectioned and ground in a sterile mortar, after which the tissue was plated on PDA, SDA, and YMA media. Subculturing and maintenance followed the same protocol as for the epiphytic fungi.
Molecular identification of fungal isolates
DNA extraction from fungal isolates was performed using a commercial DNA extraction kit (Norgen Biotek, Canada). Briefly, the fungi were cultured in PD broth for 3 days, after which the mycelia were collected and ground in a sterile mortar.
DNA extraction proceeded according to the manufacturer's instructions, and the quality and quantity of DNA were assessed using gel electrophoresis and a NanoDrop spectrophotometer, respectively. Taxonomic identification was carried out using ITS primers: ITS1 (5′-TCCGTAGGTGAACCTGCGG-3′) and ITS4 (5′-TCCTCCGCTTATTGATATGC-3′). PCR amplification involved an initial denaturation at 95 °C for 5 minutes, followed by 35 cycles of 1-minute denaturation at 94 °C, 30-second annealing at 55 °C, and a 2-minute extension at 72 °C, concluding with a final 10-minute extension at 72 °C. PCR products were purified with the QIAquick gel purification kit (Qiagen, Germany) and sequenced. Sequences were aligned using BLAST against the NCBI database and deposited in.
Biological assays
All fungal isolates were cultured in 3 L Fernbach flasks containing PDA medium and incubated at 28 °C for 10 days. After this period, the culture was extracted by shaking with ethyl acetate (3 times the volume), and this extraction was repeated three times. The extract was then filtered and concentrated to dryness using a rotary evaporator (BUCHI R100, Switzerland). The resulting residue was dissolved in DMSO at a stock concentration of 100 μg/μL, which was subsequently used to evaluate various biological activities.
Antioxidant activities
To determine the antioxidant activity of the fungal extracts, we evaluated their free radical scavenging capacity colorimetrically using DPPH as a source of free radicals. A freshly prepared DPPH solution (50 μg/mL) was combined with a serial dilution of each fungal extract (ranging from 100 to 1 μg/mL), shaken, and allowed to incubate in the dark for 30 minutes at 20 °C. The absorbance was recorded at 517 nm using a spectrophotometer (OmegaStar, Germany). The percentage of radical scavenging activity was calculated using the formula: Radical scavenging activity (%) = [1 − (Abs₅₁₇ of the sample / Abs₅₁₇ of the control)] × 100.
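The scavenging formula above reduces to a one-line calculation. A minimal sketch, using hypothetical absorbance readings rather than values from the study:

```python
def radical_scavenging_pct(abs_sample: float, abs_control: float) -> float:
    """Percent DPPH radical scavenging from A517 readings,
    per the formula in the text: [1 - (Abs_sample / Abs_control)] x 100."""
    return (1.0 - abs_sample / abs_control) * 100.0

# Hypothetical readings: control A517 = 0.80, extract-treated A517 = 0.20
activity = radical_scavenging_pct(0.20, 0.80)
print(f"{activity:.1f}% scavenging")  # 75.0% scavenging
```

A lower sample absorbance means more DPPH radical was neutralized, so stronger antioxidant activity.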
Total phenolic content
The total phenolic content was determined using the Folin-Ciocalteu method. In brief, 1 μL of the extract (100 μg/μL) was combined with 100 μL of the Folin-Ciocalteu reagent (10% v/v) and incubated for 15 minutes. Following this, 2 μL of sodium carbonate (7% w/v) was added to neutralize the mixture. The samples were then stored in the dark for 2 hours. Absorbance was recorded at 765 nm using a spectrophotometer. A calibration curve was established with gallic acid concentrations ranging from 5 to 200 μg/mL, and total phenolic content was expressed as micrograms of gallic acid equivalents (μg GAE) per milligram of EA extract.
Total flavonoid content
The total flavonoid content was evaluated using the aluminum chloride colorimetric method. Briefly, 5 μL of the extract (100 μg/μL) was combined with 100 μL of aluminum chloride (3% w/v) and 100 μL of potassium acetate (1 M). The samples were incubated at 25 °C for 30 minutes. The absorbance was then measured at 420 nm, with the solvent used for dissolving the extract serving as the blank control. Quercetin (5–200 μg/mL) was utilized to generate the calibration curve. The results for total flavonoid content were reported as micrograms of quercetin equivalents (μg QE) per milligram of EA extract.
Antimicrobial activities of the extracts
To investigate the antimicrobial properties of the fungal extracts, we employed an agar diffusion assay against human and plant pathogens. The bacterial pathogens included the Gram-positive indicator Staphylococcus aureus (ATCC 25923) and the Gram-negative Pseudomonas aeruginosa (BAA-1744). The fungal pathogens included in this study were C. albicans (ATCC 18804) and Fusarium graminearum (MYA-4620). Each pathogen was cultured under optimal conditions for growth. We first conducted the agar diffusion method, and positive extracts were then assessed for their minimum inhibitory concentration (MIC) using the broth dilution method.
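The gallic acid and quercetin calibration curves described above convert an absorbance reading into an equivalent concentration by inverting a fitted line. A minimal sketch with invented standard readings (illustrative only, not the study's data; the concentrations match the 5–200 μg/mL range in the text):

```python
import numpy as np

# Hypothetical gallic acid standards (ug/mL) and their A765 readings
conc_ug_ml = np.array([5.0, 25.0, 50.0, 100.0, 200.0])
a765 = np.array([0.05, 0.25, 0.50, 1.00, 2.00])  # invented, perfectly linear

# Fit A765 = slope * conc + intercept by least squares, then invert it
slope, intercept = np.polyfit(conc_ug_ml, a765, 1)

def gae_ug_per_ml(absorbance: float) -> float:
    """Gallic acid equivalents (ug/mL) for an unknown's A765 reading."""
    return (absorbance - intercept) / slope

print(round(gae_ug_per_ml(0.75), 2))
```

The same pattern applies to the quercetin curve for flavonoids, with A420 readings in place of A765.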
For the agar diffusion test, 100 μL of an overnight culture of each pathogen was evenly spread onto agar plates (using media appropriate for each pathogen's growth). Sterile glass pipettes were then used to create wells in the agar, 20 μL of each extract (10 μg/μL) was introduced into these wells, and the plates were incubated aerobically at 37 °C for 24 hours. After incubation, the plates were examined for zones of inhibition. Positive cultures were processed to determine the MIC. To measure the MIC, a single colony of each pathogen was initially cultured for 24 hours in its specific broth medium and subsequently diluted in the same medium at a 1:10,000 ratio, following McFarland standards. Thereafter, 196 μL of the pathogen suspension was added to each well of a microplate, followed by the addition of 4 μL of serially diluted fungal extracts. Positive controls included amoxicillin (5 μM), ciprofloxacin (2 μM), and amphotericin B (10 μM). The plates were incubated for 24 hours at the optimum growth condition for each pathogen, after which the optical density at 600 nm (OD600) was recorded using a microplate reader. All concentrations were tested in triplicate, and the experiment was independently replicated. The percentage of growth inhibition was calculated as previously described. To determine the minimum bactericidal or fungicidal concentration (MBC) of the fungal extracts, 10 μL of the inhibited samples from the MIC assay (where no growth was observed) was plated onto a suitable agar plate, incubated under the optimal growth conditions for each pathogen, and then inspected for colony formation. Control groups included samples treated with an antibiotic or antifungal compound and untreated cultures.
Cytotoxicity assay
Two human lung cancer cell lines, A549 and H292, were utilized to evaluate the cytotoxic properties of the fungal extracts.
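The OD600-based growth inhibition and MIC readout described above can be sketched as follows. All readings and the ≥90% inhibition cutoff are hypothetical; the study's exact inhibition criterion is not stated here:

```python
def inhibition_pct(od_treated: float, od_untreated: float) -> float:
    """Percent growth inhibition from OD600 readings after 24 h."""
    return (1.0 - od_treated / od_untreated) * 100.0

def mic(dilution_series, od_untreated: float, cutoff: float = 90.0):
    """Lowest extract concentration whose inhibition meets the cutoff.

    dilution_series: (concentration, OD600) pairs for one extract/pathogen.
    Returns None if no tested concentration reaches the cutoff.
    """
    hits = [conc for conc, od in dilution_series
            if inhibition_pct(od, od_untreated) >= cutoff]
    return min(hits) if hits else None

# Hypothetical two-fold dilution series (ug/mL -> OD600)
series = [(100, 0.02), (50, 0.03), (25, 0.25), (12.5, 0.48)]
print(mic(series, od_untreated=0.50))  # 50
```

Wells with no visible growth at or above the MIC would then be subcultured on agar to check for the MBC, as described in the text.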
The cells were cultured in RPMI medium supplemented with 10% FBS and 1% penicillin-streptomycin solution and incubated at 37 °C in a 5% CO₂ environment. The in vitro cytotoxic activity of the extracts was assessed against the A549 and H292 cells by measuring the formation of insoluble formazan salt, which occurs via the reduction of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) by NAD(P)H-dependent cellular oxidoreductase enzymes, directly correlating with the number of viable cells remaining after extract treatment. Tumor cells (30 × 10³ cells per well) were seeded into 36-well culture plates and incubated for 24 hours at 37 °C in a humidified CO₂ incubator. After this incubation, the cells were treated with 10 μg/mL of each extract for 24 hours. Control groups included cells treated with 0.1% DMSO and untreated cells. Following treatment, 20 μL of MTT solution was added to each well, and the plates were incubated for an additional 4 hours at 37 °C in a 5% CO₂ incubator. After incubation, the plates were centrifuged for 20 minutes at 3000 rpm, and the formazan crystals formed were dissolved in 100 μL of DMSO. Absorbance was then measured at 570 nm using a microplate reader.
Statistical analysis
Data analysis and graphical representation were carried out using one-way ANOVA in GraphPad Prism V9 (GraphPad Software, La Jolla, CA, USA); a similar analysis could be performed in Microsoft Excel. All experiments were performed in triplicate, and results are presented as means ± standard error of the mean (SEM). Data for the statistics are shown in Tables 1-4, together with the raw data.
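The MTT readout converts A570 absorbance into percent viability relative to the vehicle control. A minimal sketch with hypothetical plate readings (not values from the study):

```python
def viability_pct(a570_treated: float, a570_control: float) -> float:
    """Percent viable cells relative to the vehicle control (MTT, A570).

    Formazan absorbance scales with the number of metabolically
    active cells, so treated/control gives the surviving fraction.
    """
    return a570_treated / a570_control * 100.0

# Hypothetical readings: DMSO control wells A570 = 0.90, treated wells 0.36
v = viability_pct(0.36, 0.90)
print(f"viability {v:.0f}%, cytotoxicity {100 - v:.0f}%")
```

Background-subtracting a medium-only blank before the ratio would be a common refinement, omitted here for brevity.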
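The one-way ANOVA used for the statistical analysis can be sketched in pure Python; the F statistic is then compared with the tabulated critical value (for 3 groups of 3 replicates, F(2, 6) ≈ 5.14 at α = 0.05). The triplicate readings below are invented for illustration:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA over two or more groups."""
    observations = [x for g in groups for x in g]
    grand = mean(observations)
    k = len(groups)                      # number of groups
    n = len(observations)                # total observations
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented triplicate DPPH readings (% scavenging) for three extracts
pt_f2 = [82.1, 80.5, 83.0]
hs_f1 = [79.8, 81.2, 80.4]
ah_f1 = [45.3, 47.1, 44.8]

f_stat = one_way_anova_f(pt_f2, hs_f1, ah_f1)
print(f"F = {f_stat:.1f}; significant at alpha = 0.05: {f_stat > 5.14}")
```

In practice Prism (or `scipy.stats.f_oneway`) also reports the p-value; the hand computation above shows what the F ratio summarizes.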
All experiments were performed in triplicate, and results are presented as means ± standard error of the mean (SEM). The statistical summaries, together with the raw data, are shown in Tables 1-4.

In this study, we aimed to profile the culturable fungal communities associated with four plants native to the Arabian Peninsula desert. We isolated a total of 12 unique fungal species, identified taxonomically through sequence alignments, and assessed their derived extracts for various biological activities ( ). From Panicum turgidum , we isolated five fungal species identified as Mucor sp. (PT-F1), Aspergillus sp. (PT-F2), Colletotrichum sp. (PT-F3), Alternaria sp. (PT-F4), and Chaetomium sp. (PT-F5). Additionally, three isolates were obtained from Halocnemum strobilaceum , two of which were identified as Aspergillus species (HS-F1 and HS-F2), while the other was Fusarium sp. (HS-F3). From Haloxylon persicum , we identified two unique fungi: Plectosphaerella sp. (HP-F1) and Aspergillus sp. (HP-F2). Lastly, two fungi were isolated from Arnebia hispidissima , namely Curvularia sp. (AH-F1) and Fusarium sp. (AH-F2). Overall, Aspergillus species were the most prevalent, representing 33% of the total recovered fungi across all plant samples. To assess the biological activities of the isolated fungi, we scaled up the fermentation of each species and prepared crude ethyl acetate extracts. These extracts were then subjected to six assays: 1) antioxidant activities, 2) total phenolic content, 3) total flavonoid content, 4) antibacterial assay, 5) antifungal assay, and 6) anticancer cytotoxicity screening. All assessed extracts demonstrated comparable antioxidant activities, with the highest levels observed in Aspergillus species PT-F2, HS-F1, and HP-F2 ( ). In contrast, the lowest antioxidant activities were recorded for PT-F1 ( Mucor sp. ), AH-F1 ( Curvularia sp. ), and AH-F2 ( Fusarium sp. ). Regarding total phenolic content, Curvularia sp. 
exhibited the richest extract, followed by Aspergillus species isolated from Panicum turgidum. The lowest phenolic content was found in HS-F2 ( Aspergillus sp. ), HS-F3 ( Fusarium sp. ), and HP-F1 ( Plectosphaerella sp. ). The highest flavonoid content was reported for HS-F1 ( Aspergillus sp. ) and AH-F1 ( Curvularia sp. ), followed by PT-F2 ( Aspergillus sp. ) and PT-F5 ( Chaetomium sp. ). To assess the antimicrobial potential of each extract against bacterial and fungal indicators, we conducted two experiments. The first was an agar diffusion test to identify any possible activities of the crude extracts. Extracts that demonstrated positive results based on the diameter of the inhibition zone were subsequently subjected to a broth dilution test to determine their minimum inhibitory concentration (MIC). Data presented in revealed significant antibacterial activity, particularly against the Gram-positive indicator strain Staphylococcus aureus , with a total of nine isolates exhibiting positive inhibitory effects. The most potent extracts, characterized by the lowest MIC values, were derived from Panicum turgidum (PT-F1, PT-F2) and Arnebia hispidissima (AH-F1), followed by Halocnemum strobilaceum (HS-F2, HS-F3) and PT-F5, HP-F2. In contrast, activity against Pseudomonas aeruginosa was limited, as indicated by the high MIC values for all extracts. The lowest MIC values were recorded for PT-F1 ( Mucor sp. ), PT-F5 ( Chaetomium sp. ), and AH-F1 ( Curvularia sp. ). In terms of antifungal activities, eight fungal extracts showed positive effects against the tested fungal species ( ). The most active extracts in inhibiting both Fusarium graminearum and Candida albicans were PT-F4, followed by PT-F1, HS-F3, HS-F1, and HP-F1. The highest MIC values were observed for PT-F5 ( Chaetomium sp. ) and HP-F2 ( Aspergillus sp. ). To assess the preliminary cytotoxic activities of the extracts, we conducted the MTT assay. 
All extracts demonstrated cytotoxic activity, with the lowest observed for PT-F2 ( Aspergillus sp. ), as shown in . In this study, we identified 12 fungal species for the first time in the examined desert plants and explored their biochemical characteristics. These fungal isolates belong to eight genera, with Aspergillus and Fusarium being the most prevalent. Species from the genus Aspergillus have been previously isolated from desert plants, such as Opuntia versicolor in the Sonoran Desert. A study conducted in Saudi Arabia, investigating the cultivable fungal communities in the Arabian Peninsula’s desert soils, also identified Aspergillus as the most widespread fungal genus. Similarly, another study identified Aspergillus species, specifically A. niger and A. flavus , among the fungal communities in Saudi Arabia’s Sabkha desert marshes. , Aspergillus fumigatus is known to inhabit many desert plants in arid and semi-arid environments across regions like India, Pakistan, the Mexican desert, and Iran. Additionally, Fusarium species are dominant as endophytic fungi in various desert plants, including Cyathea gigantea , Calotropis procera , Withania somnifera , and Aloe vera. – Alternaria species have been isolated from desert plants in both Jordanian and Saudi Arabian deserts and from soil in Al-Kharj, Saudi Arabia.

Ecological significance of isolated fungi to desert habitats

Fungi play a crucial role in desert ecosystems by supporting plant health, enhancing nutrient cycling, and providing resilience against environmental stressors. The unique metabolic capabilities of the fungal species identified in this study highlight their importance in these arid environments, and previous research supports their crucial role in promoting plant health. 
The genus Aspergillus is regarded as a prevalent endophytic fungus with xerophytic characteristics, allowing it to thrive under the arid conditions and water scarcity typical of desert environments. Aspergillus niger has been shown to provide a biological shield for host plants by protecting them from pests and pathogens and enhancing their resilience to biotic and abiotic stresses. Another fungus identified in this study, Mucor mucedo (commonly known as pin mold), is a saprophyte with a broad ecological tolerance and a worldwide distribution. It colonizes decaying organic matter and is capable of rapid growth in environments with limited nutrients. Mucor mucedo can also survive extreme environmental conditions, such as freezing temperatures, UV radiation, and desiccation. Its role in decomposing organic matter yields essential nutrients that support plant development. In this study, we also identified Colletotrichum spaethianum , a known endophytic fungus. Species within the Colletotrichum genus are primarily found in tropical and temperate environments. Some Colletotrichum species exhibit beneficial activities, such as C. magna , which helps plants combat infections caused by F. oxysporum and C. orbiculare. Moreover, C. gloeosporioides produces colletotric acid, a potent antifungal compound. Chaetomium globosum , known as a saprophyte and occasionally an endophyte, has been demonstrated to protect plants against the toxic effects of heavy metals such as copper. Aspergillus terreus , another species identified in this study, produces metabolites such as phenols, flavonoids, and indole-acetic acid, which stimulate plant growth. Research on tomato plants shows that A. terreus enhances shoot and root length, as well as overall chlorophyll content. Another study reports that filtrates of A. terreus , free from spores and mycelia, significantly reduce spore formation of the plant pathogen Pythium aphanidermatum , thereby improving plant growth and protection. 
Plectosphaerella cucumerina is another significant fungus identified. Its cell wall contains molecules classified as microbe-associated molecular patterns (MAMPs), which can bind to pattern recognition receptors (PRRs) in Arabidopsis thaliana , triggering the plant’s defense mechanisms. Additionally, P. cucumerina promotes host plant growth by inducing the expression of genes involved in carbohydrate and amino acid synthesis, and it can impart similar benefits when transplanted into other plants. ,

Bioactivity of fungal extracts and previously isolated compounds

Our investigation into the crude extracts from various cultured fungal species revealed a rich spectrum of biological activities, including antimicrobial, antioxidant, and anticancer properties. This is supported by previous studies. For instance, A. niger , isolated from desert soils in Saudi Arabia, exhibited significant antioxidant activities. Furthermore, extracts of A. niger and A. flavus displayed potent antimicrobial and anticancer activities. Additionally, Aspergillus species from Phragmites australis leaves showed antibacterial effects against Klebsiella sp. , E. coli , and S. aureus , along with antibiofilm activities. The extracts also demonstrated cytotoxicity on the breast cancer cell line MCF-7, with an IC50 of 8 μg/μL. Detailed studies have shown that A. niger extract can induce cell cycle arrest and apoptosis, revealing a composition of diverse hydrocarbons, phthalates, and phenolic derivatives. Various metabolites from A. terreus have exhibited antibacterial, anticancer, and antioxidant activities. , Notably, Asperteramide A showed potent antibacterial activity against Klebsiella pneumoniae , MRSA , Acinetobacter baumannii , Enterococcus faecalis , and ESBL -producing E. coli. Furthermore, tetracyclic acid A , isolated from A. terreus , has emerged as a valuable anticancer agent by stimulating heat shock responses in tumor cells. 
Crude extracts from Aspergillus fumigatus , isolated from mangrove plants in the Sundarbans, demonstrated potent antibacterial activity against both Gram-positive and Gram-negative strains, including E. coli , Micrococcus luteus , Pseudomonas aeruginosa , and Staphylococcus aureus. Remarkably, an enzyme produced by thermotolerant A. fumigatus , known as MGL, exhibited anticancer activity in Hep-G2 and HCT116 cell lines. Fusarium species isolated from Cinnamomum kanehirae , Selaginella pallescens , and Tripterygium wilfordii displayed antimicrobial activities against methicillin-resistant S. aureus and Candida albicans. – A recent study indicated that the ethyl acetate extract of Fusarium species possesses the highest antibacterial, antioxidant, and anticancer potential. In this study, we identified Colletotrichum species with antibacterial and antioxidant activities. Previous studies noted the presence of two Colletotrichum sp. in Andrographis paniculata , demonstrating their ability to produce antimicrobial and antioxidant compounds. Additionally, Colletotrichum acutatum , isolated from Angelica sinensis , produces molecules exhibiting antimalarial, antioxidant, antibacterial, anti-proliferative, and antibiofilm activity. Extracts from Colletotrichum sp. CG1-7, isolated from Arrabidaea chica , displayed antioxidant activity comparable to quercetin. Furthermore, two metabolites derived from Colletotrichum sp., phthalide and isocoumarins, have shown potential as antioxidant agents and in inhibiting cancer cell growth in the HepG2 cell line. Another compound, palmitoylethanolamide (PEA), isolated from C. gloeosporioides , demonstrated potential anticancer activity against human breast cancer cells via apoptosis induction. Interestingly, the production of valuable antimicrobial and antioxidant compounds from various Colletotrichum species is significantly enhanced by light spectrum treatment. 
Additionally, we reported on the activities of extracts from Alternaria species, including potent antifungal and anticancer activities. Previous data show that extracts of A. alternata exhibit antibacterial activity. Compounds isolated from A. alternata include alternariol, tenuazonic acid, levofuraltadone, and kigelinone, which have antibacterial and/or anticancer activities. , Another fungus identified in our study is Chaetomium , exhibiting antioxidant and anticancer activities. Bioactive metabolites from Chaetomium globosum have been reported, displaying anticancer, antimicrobial, anti-inflammatory, antiviral, and antioxidant activities. , Notably, chrysophanol alkaloid from Chaetomium shows anticancer activities against multiple cancer types, along with its antimicrobial properties. – Another compound, Chetomin, produced by the Chaetomium genus, has been noted for its ability to block hypoxia-inducible transcription, thereby suppressing tumor growth. We also identified Plectosphaerella species with antifungal, anticancer, and antioxidant activities. Previous reports indicate that P. cucumerina extracts exhibit antibiofilm and anti-virulence activity against Pseudomonas aeruginosa , likely due to the presence of emodin and patulin compounds. Moreover, studies have shown the potential of P. cucumerina in providing protection against nematodes. , Lastly, we reported activities of Curvularia species, including antibacterial activities against both Gram-positive and Gram-negative pathogens. Previous research indicates that extracts from C. lunata possess antimicrobial and antioxidant activities, with low cytotoxic effects against the ATCC-CCL-81 cell line. Furthermore, Curvularia species are known for producing metabolites with antibacterial, antioxidant, and anticancer activities. For instance, Curvularia sp. G6-32, isolated from Sapindus saponaria , generates epoxyquinone, noted for its antioxidant potential. 
Regarding anticancer properties, Curvularia australiensis FC2AP, isolated from Aegle marmelos leaves, produces flavonoids that exhibit anti-cervical cancer and anti-inflammatory effects. Additionally, Curvularia sp. from Terminalia laxiflora generates bioactive peptides, shown to suppress tumor growth and inhibit angiogenesis.

Desert microbiomes: Implications for climate change solutions

Desert fungi hold significant potential to mitigate climate change impacts due to their unique adaptations to extreme heat, drought, and salinity. Colletotrichum species, for instance, serve as eco-friendly protectors against abiotic drought stress, which can severely damage plants and crops. Colletotrichum alatae secretes a heteropolysaccharide rich in β-glucan, enhancing drought resilience and optimizing rice cultivation in severely drought-affected areas. Furthermore, Colletotrichum species have been identified as drought-tolerant fungi, significantly contributing to plant growth in arid environments. Chaetomium globosum is recognized for its salt tolerance, promoting plant resilience in saline conditions. Reports indicate that Chaetomium globosum can enhance the survival and growth of salt-sensitive crops under drought stress. Other fungi, such as Aspergillus fumigatus , thrive at extreme temperatures and demonstrate potential in protecting sensitive plants like wheat from drought. Research has also highlighted the ability of C. lunata inoculum to improve resistance to salt and drought in rice, thus enhancing overall plant growth. Desert fungi exhibit various environmental applications. For example, A. niger produces novel xylose transporters that efficiently convert lignocellulosic biomass into eco-friendly biofuels. Mucor mucedo is increasingly recognized for its ability to degrade hydrocarbons. A study found that immobilizing M. mucedo on corncob particles enhanced its efficacy in remediating pyrene-contaminated agricultural soil. Further investigation into M. 
mucedo revealed that exopolymer substances (EPS) play a crucial role in degrading polycyclic aromatic hydrocarbons, suggesting its application in environmental cleanup efforts. Additionally, C. lunata has been shown to enhance bioremediation of hydrocarbon-contaminated soil in conjunction with the plant Luffa aegyptiaca , facilitating the degradation of accumulated hydrocarbons. The Plectosphaerella cucumerina AR1 strain has also demonstrated the ability to degrade nicosulfuron, an herbicide commonly applied to maize crops, which contributes to groundwater and surface stream contamination. 
The Arabian Peninsula desert is home to diverse microbial communities that offer significant applications in drug discovery, as well as industrial and environmental interventions related to bioremediation and climate change solutions. 
Given the unique adaptations of these microorganisms, a comprehensive profiling of the desert microbiome, encompassing both bacterial and fungal communities, should be a research priority. This effort will not only enhance our understanding of desert ecosystems but also unlock the potential of these microbes for sustainable applications. WM designed the study, collected the plants, isolated fungi, performed DNA extraction, PCR experiments, performed large scale fermentation and extraction, conducted biological assays, performed statistical and data analysis, designed and developed figures, and wrote the manuscript. RG conducted cytotoxicity experiment. NA contributed to isolation and purification of individual isolates, antimicrobial assays, data analysis and discussion. TAI designed the study, experimental protocols and analyzed data. All authors reviewed, edited, and approved the final version of the manuscript. Ethical approval and consent were not required.
A novel method for assessment of human midpalatal sutures using CBCT-based geometric morphometrics and complexity scores
Craniofacial sutures act as active growth sites, absorb mechanical stresses and protect the brain . These functions impact the overall skull shape and also sutural morphology and complexity . In cases of abnormal craniofacial development, sutures may be surgically or orthopaedically distracted to manipulate growth and treat dentofacial deficiencies . In particular, the maxillary midpalatal suture plays an important role in maxillary development and growth. Furthermore, the outcome of treating skeletal maxillary malformations depends on the morphology and complexity of this suture [ – ]. Midpalatal suture morphology may vary considerably between sexes and across developmental stages . This variability complicates treatment of skeletal maxillary malformations . More specifically, findings suggest that pronounced sutural interdigitation hinders transverse expansion success . Hence, analysis of the patient’s sutural characteristics provides insights into the midpalatal suture’s morphology and degree of interdigitation and may support medical decision-making and enhance treatment success . Previous work assessing midpalatal sutural morphology in humans has mainly focused on histological and radiographic observations. Histological research has described the midpalatal suture as a butt joint [ , , ], which evolves into other, more complex, highly interdigitated joint types during ontogeny and in reaction to mechanical stress [ , , – ]. Histological and micro-radiographic frontal sections have shown the suture as undulating in the juvenile period, whereas it assumes a sinuous course with increasing interdigitation in the adolescent period . Histological research has also shown that midpalatal suture morphology develops highly interdigitated patterns over time [ , , ] and that morphology varies considerably between age groups [ , , ]. In recent years, radiographic approaches have used cone-beam computed tomography (CBCT) to visualise palatal suture morphology. 
More specifically, Angelieri et al. morphologically classified midpalatal sutures into stages from A to E as observed on CBCT [ , , ]. Another study detected variations in sutural morphology using flat-panel volume computed tomography of animals . However, no quantitative analysis based on objective metrics has yet been presented in the literature. Consequently, the comparability of sutural shapes can be compromised, and more studies are needed to determine how CBCT can be used to describe sutural features . To improve objectivity and comparability and to account for several morphological sutural features simultaneously, geometric morphometrics (GMM) and complexity scores may be implemented to evaluate suture morphology and complexity, respectively. GMM is an approach to statistically evaluate shapes based on landmark coordinates. Firstly, to make shapes comparable, GMM removes size, rotation and translation from the landmark configurations. Secondly, principal component analysis (PCA) is performed. As a statistical method, PCA extracts relevant information from datasets containing many variables and summarises this information in a new, smaller set of variables: the principal components. Thereby, the dimensionality of the data is reduced, and the overall variability of shapes becomes detectable . Focusing on complexity as one aspect of morphology, complexity scores mathematically integrate interdigitation characteristics such as amplitude, number of interdigitations and looping patterns into a single score . Among available complexity scores, the windowed short-time Fourier transform (STFT) with a power spectrum density (PSD) calculation appears most promising to comprehensively capture sutural complexity. However, the PSD complexity score has so far been calculated only for sutures of diverse mammalian taxa on X-ray microtomography and has not been applied to CBCT of human midpalatal sutures. 
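To make the idea concrete, a simplified windowed power-spectrum score can be sketched in pure Python. This is an illustrative stand-in for the published STFT/PSD method, not a reimplementation of it: the rectangular 32-point window, the linear frequency weighting and the synthetic suture traces are all assumptions.

```python
import cmath, math

def psd_complexity(deviations, window=32):
    """Frequency-weighted spectral power of a suture's deviation-from-midline
    signal, averaged over short windows. Highly interdigitated sutures place
    more power at high frequencies and therefore score higher."""
    score, n_windows = 0.0, 0
    for start in range(0, len(deviations) - window + 1, window):
        seg = deviations[start:start + window]
        for k in range(1, window // 2):  # skip DC, keep positive frequencies
            coeff = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / window)
                        for n in range(window))
            score += k * abs(coeff) ** 2 / window  # weight power by frequency
        n_windows += 1
    return score / n_windows

# Two synthetic sutures sampled at 256 points along the midline
gentle = [math.sin(2 * math.pi * 2 * t / 256) for t in range(256)]          # few broad waves
interdigitated = [2 * math.sin(2 * math.pi * 20 * t / 256) for t in range(256)]  # many deep loops
print(psd_complexity(interdigitated) > psd_complexity(gentle))  # -> True
```

A production implementation would use a tapered window and proper PSD normalisation, but the ranking behaviour — more and deeper interdigitations yield a larger score — is the same.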
This study aimed to demonstrate that the combination of GMM and sutural complexity scores may constitute a novel and comprehensive CBCT-based sutural analysis in humans. We retrospectively analysed midpalatal sutures on CBCT by applying GMM. This study was the first to calculate a sutural complexity score based on human CBCT. Data collection This retrospective study was based on CBCTs from a sample of consecutive patients treated in a German orthodontic and maxillofacial surgery clinic between January 2020 and July 2022. For the included patients, maxillary CBCT examinations were performed based on one of the following indications: position of severely impacted teeth, bone dimensions prior to implant placement or implant site dimensions. Only patients with already existing CBCTs were included in this study. Because of the retrospective design of the study, no ethical approval was required. However, every patient gave written consent for the use of their medical data for scientific purposes related to this specific study. The data files were processed anonymously. The exclusion criteria were previous rapid maxillary expansion, cleft lip and palate, and impaired bone metabolism due to medication or artefacts in the same plane as the midpalatal suture (e.g. the transpalatal arch). In addition, patients with already fused midpalatal sutures (stage E suture fusion according to Angelieri) were excluded because their midpalatal suture outline cannot be traced and their parasutural bone density is the same as that of other palatal regions . Notably, the midpalatal suture may remain open throughout life , which allowed us to include older patients in the sample. To avoid interobserver bias, one author conducted the data analysis. Method Cone-beam computed tomography The CBCTs were generated using Orthophos® XG 3D (Dentsply Sirona, Bensheim, Germany). The radiation dose ranged from 91 mGy·cm² to 781 mGy·cm².
Volume sizes varied from 5 × 5.5 cm through 8 × 8 cm to 11 × 10 cm. The acquisition time was 4300 ms for 42 patients and 2500 ms for six patients. Data preparation Data preparation comprised several steps, as depicted in Fig. . First, the CBCTs were converted into anonymised DICOM files and exported using RadiAnt DICOM Viewer . Second, the exported files were reconstructed in Avizo v.9.3 software (FEI, Hillsboro, OR, USA) and Geomagic Wrap (3D Systems) so that only the cranial segments were displayed in the three-dimensional isosurface, based on case-specific thresholds to optimise sutural traceability. Third, the midpalatal suture was photographed in the R package ‘rgl’ using uniform positioning to control the degree of parallax . To check for robustness, the data were additionally prepared using the open-source software 3DSlicer as an alternative to Avizo and Geomagic. These steps produced two-dimensional digital photographs of the midpalatal suture, which were used for the subsequent analysis. Geometric morphometric methods GMM can capture morphological structure, overall phenotype and three-dimensional profiles of sutures without information loss during shape analysis [ , , ]. For the GMM, we followed White et al.: first, we manually positioned two-dimensional landmarks on the digital photographs. Based on a sliding algorithm to consistently place 500 semi-landmarks per suture, we resampled the semi-landmarks using the R package ‘Stereomorph’ to avoid loss of morphological complexity . Furthermore, to reduce differences between individuals and align the positions of their landmarks, generalised Procrustes superimposition was performed. Procrustes superimposition transforms raw landmarks into shape coordinates by centring (translating), resizing and rotating the landmarks. Thereby, the landmarks are converted into so-called semi-landmarks [ , – ]. The superimposed semi-landmarks were then analysed using PCA.
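To make the resampling and superimposition steps above concrete, the following sketch mimics the pipeline in Python with numpy. It is illustrative only: the study used the R package ‘Stereomorph’ and a generalised Procrustes routine, whereas the function names, the arc-length resampling shortcut, and the single-reference alignment here are our own simplifications, not the authors' code.

```python
import numpy as np

def resample_curve(points, n=500):
    """Resample a 2-D polyline to n points equally spaced by arc length
    (a simplified stand-in for sliding semi-landmark placement)."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, arc[-1], n)
    x = np.interp(targets, arc, points[:, 0])
    y = np.interp(targets, arc, points[:, 1])
    return np.column_stack([x, y])

def procrustes_align(shape, reference):
    """Translate, scale and rotate `shape` onto `reference`
    (ordinary Procrustes superimposition for one pair of shapes)."""
    a = shape - shape.mean(axis=0)           # centre: remove translation
    b = reference - reference.mean(axis=0)
    a = a / np.linalg.norm(a)                # unit centroid size: remove scale
    b_n = b / np.linalg.norm(b)
    u, _, vt = np.linalg.svd(a.T @ b_n)      # optimal rotation (Kabsch solution)
    return a @ (u @ vt)
```

In a generalised Procrustes analysis, as used in the study, the alignment would be iterated against an evolving mean shape rather than a single fixed reference.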
In this study, PCA was implemented to identify the main components of shape variation across the sample. As a result, the samples can be mapped into a common coordinate system, the morphospace, where sutural morphology can be analysed individually or compared among individuals [ – ]. Complexity analysis For the complexity analysis, we applied the windowed short-time Fourier transform with a power spectrum density calculation to the Procrustes-superimposed semi-landmarks. The PSD complexity score is well suited to capture the characteristics of interdigitations, loop patterns and amplitude . A score was computed for each suture by averaging the squared windowed short-time Fourier transform coefficients over each frequency across the local transforms and summing the averages at each harmonic, following the method proposed by Allen et al. . Low PSD complexity scores suggest a straight outline, whereas increasing values indicate a progression towards interdigitated sutural outlines with pronounced loops and amplitudes . We computed the complexity score using the R packages ‘e1071’ and ‘stft’ . The analysis was based on White et al., where further details on the methodology and codes for the implementation in R are provided . Statistical analysis A Shapiro–Wilk test suggested that the data were normally distributed, so the complexity scores were analysed using analysis of variance (ANOVA) to determine differences among the age quartiles and sex groups. To detect differences among specific age groups, pairwise comparisons based on t tests with Holm-Bonferroni corrections were conducted. Differences were considered significant if the corresponding p values were lower than 0.05. Intra-rater reliability was evaluated for the landmark placement in R as the same author acquired all the landmarks again after a period of one month. Reliability was indicated by the intra-class correlation coefficient (ICC).
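The PSD scoring logic described above can be sketched in a few lines. This is an illustrative Python/numpy version, not the R implementation (‘e1071’, ‘stft’) used by the authors; the window length, hop size, and the one-dimensional "deviation from the sutural midline" input are our own simplifying assumptions.

```python
import numpy as np

def psd_complexity(signal, win=64, step=32):
    """Toy windowed-STFT power-spectrum-density score: take the DFT of
    overlapping Hann-windowed segments, average the squared magnitudes
    per frequency across segments, then sum over frequencies (DC excluded).
    Higher values indicate a more interdigitated outline."""
    windows = [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]
    hann = np.hanning(win)
    power = np.mean([np.abs(np.fft.rfft(w * hann)) ** 2 for w in windows], axis=0)
    return float(power[1:].sum())  # drop DC so a flat offset scores zero

# A straight suture outline versus a strongly interdigitated (sinuous) one:
x = np.linspace(0, 1, 500)
straight = np.zeros_like(x)
sinuous = 0.05 * np.sin(2 * np.pi * 25 * x)  # loops with amplitude 0.05
```

On such toy inputs the sinuous outline receives a strictly higher score than the straight one, matching the interpretation that low scores suggest a straight outline and higher scores a progression towards pronounced loops and amplitudes.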
The applied exclusion criteria produced a final sample size of 48 patients. The sample was divided by the patient's age, yielding four groups comprising the four quartiles, as shown in Table .
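The quartile split can be illustrated with a short, stdlib-only Python sketch. The ages below are hypothetical, not the study data; the study's actual group boundaries are those in the table referenced above.

```python
from statistics import quantiles

def quartile_groups(ages):
    """Assign each age to one of four quartile groups (1 = youngest quartile)."""
    q1, q2, q3 = quantiles(ages, n=4)  # the three quartile cut points
    def group(a):
        return 1 if a <= q1 else 2 if a <= q2 else 3 if a <= q3 else 4
    return [group(a) for a in ages]

# Hypothetical ages, for illustration only:
ages = [9, 12, 14, 18, 22, 27, 33, 40]
groups = quartile_groups(ages)
```

With the default (exclusive) quantile method, the eight hypothetical ages split evenly into the four groups.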
Geometric morphometrics: principal component analysis The PCA of the Procrustes superimposed two-dimensional semi-landmarks revealed that 13 principal components (PC) were sufficient to summarise more than 95% of the total morphological variance of the midpalatal suture. PC1 accounted for 39% of the variance; PC2 for 18%. As visualised in Fig. , PC1 was associated with trends related to the overall suture outline: lower values were associated with a convex form, whereas the higher PC1 values indicated a concave sutural outline. Lower PC2 values seemed to indicate a major loop to the left in the posterior part of the suture, whereas samples with higher PC2 values did not exhibit this loop in the posterior part. The morphological analysis detected no clear patterns regarding number of interdigitations related to PC1 or PC2. Mapping patients into morphospace along PC1 and PC2 showed that individuals with comparatively lower age tended to be clustered in the centre of the morphospace, as shown in Fig. . In contrast, older individuals were more likely to be located in the periphery of the morphospace. The relatively younger patients exhibited largely similar sutural morphological characteristics as expressed by the two main principal components. In contrast, older patients displayed a higher degree of variation regarding their sutural morphology. Hence, these results suggest a higher morphological variation along PC1 and PC2 with increasing age. Regarding the patient’s sex, no clear pattern was observed. Power spectrum density complexity scores To determine sutural complexity, a PSD complexity score was computed for each sample, with higher values indicating higher complexity. The complexity analysis revealed an average PSD score of 1.465 with a standard deviation of 0.010 across the whole sample. For males, the average PSD complexity score was 1.466, whereas it was 1.464 for females. 
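The reported variance shares (13 PCs for more than 95%; PC1 at 39%, PC2 at 18%) follow directly from a PCA. The sketch below is an illustrative Python/numpy version on synthetic data, not the study's landmark matrix, showing how explained-variance ratios and the "how many PCs reach 95%" count are obtained from a centred data matrix.

```python
import numpy as np

def explained_variance_ratio(X):
    """PCA via SVD of the column-centred data matrix; returns the share
    of total variance captured by each principal component."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values, descending
    var = s ** 2                             # proportional to PC variances
    return var / var.sum()

def n_components_for(X, threshold=0.95):
    """Smallest number of PCs whose cumulative variance share reaches threshold."""
    ratios = explained_variance_ratio(X)
    return int(np.searchsorted(np.cumsum(ratios), threshold) + 1)
```

For data dominated by a single latent factor, `n_components_for` returns 1; for the study's semi-landmark data it would return the reported 13.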
With progressing age, the PSD complexity score increased to values of 1.459 (group 1), 1.460 (group 2), 1.466 (group 3), and 1.474 (group 4), indicating increasing complexity. The ANOVA demonstrated a significant effect of age on midpalatal suture complexity ( p < 0.0001, degrees of freedom (DF) = 1, F = 21.346). In contrast, the variable sex did not significantly impact the complexity score ( p = 0.588, DF = 1, F = 0.298). Furthermore, the interaction of age and sex did not significantly influence complexity ( p = 0.848, DF = 1, F = 0.037). An intergroup t test to assess differences among the age groups demonstrated differences that were significant at the 5% level among age group 1 and 4 ( p = 0.001), 1 and 3 ( p = 0.014), 2 and 3 ( p = 0.030) and 2 and 4 ( p = 0.002). After Holm-Bonferroni correction, the same differences remained significant, as shown in Fig. . The results remained robust when outliers were excluded. The ICC was above 0.9, indicating a high intra-rater reliability. To assess the impact of the software used, the whole analysis, from reconstruction to statistical analysis, was repeated in open-source software (3D Slicer) as an alternative to segmentation and reconstruction in Avizo and Geomagic. The results remained robust to software changes.
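The Holm-Bonferroni step can be reproduced with a few lines of Python (illustrative, applied here to the four raw pairwise p values reported above):

```python
def holm_bonferroni(pvals):
    """Holm's step-down adjustment: sort p values ascending, multiply the
    i-th smallest by (m - i), enforce monotonicity, cap at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running = 0.0
    for rank, idx in enumerate(order):
        running = max(running, min(1.0, (m - rank) * pvals[idx]))
        adjusted[idx] = running
    return adjusted

raw = [0.001, 0.014, 0.030, 0.002]  # groups 1 vs 4, 1 vs 3, 2 vs 3, 2 vs 4
adjusted = holm_bonferroni(raw)
```

All four adjusted p values stay below 0.05, consistent with the statement that the same differences remained significant after correction.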
This study is the first to apply a sutural complexity score to human CBCTs. More specifically, it combined GMM with a complexity score to analyse midpalatal sutures, a surface of orthodontic interest. As GMM is based on statistical calculations and the complexity score provides a single value for each sample, this approach contributes to improving the interpretability, objectiveness, and comparability of suture analysis. The methodology proposed in this study is relevant for clinicians and researchers. For clinicians, the analysis can contribute to supporting treatment planning with respect to surgical or non-surgical corrections of maxillary transverse discrepancies . As age constitutes one of several influencing factors, a complexity score that can assess sutural complexity for each patient individually offers an additional indicator for predicting the success of sutural distractions. For researchers, the proposed analysis provides a way forward towards robust and comprehensive shape analysis for CBCTs of humans that may also be used for analysis of other shapes of interest. GMM is advantageous compared with linear measurements for analysing sutural shapes because linear measurements can be biased due to an arbitrary focus on certain sutural parts, while other sections are neglected, resulting in an incomplete shape analysis . In contrast, GMM considers the shape as a whole and removes the effects of scale, translation and rotation.
As a result, the pure shapes, which are mapped onto the same coordinate system using landmarks, are comparable, and any differences between sutures are truly caused by differences in their shapes. Challenges related to GMM include the landmark number, placement criteria and landmark homology between samples . We addressed these challenges by resampling 500 semi-landmarks for each suture and sliding them along the curve to match their positions with the reference configuration based on the principle of minimising the Procrustes distance . Another limitation relates to the interpretation of the principal components because patterns might not be uniquely identifiable and their identification might require clinical experience . Lastly, in our study, neither PC1 nor PC2 seemed to capture progression of the sutural interdigitations. This suggests that another method, such as the PSD complexity score, is required to capture such characteristics as a supplement to GMM. By quantifying midpalatal suture complexity in a complexity score, this study expanded the limited research on the complexity of this suture and showed that a complexity score based on a windowed STFT with PSD can be applied to CBCT datasets of human sutures. Our results demonstrated that sutural complexity increased with higher patient age, which is in line with histological, histomorphometric and radiological research approaches, all of which have demonstrated that the midpalatal suture develops highly interdigitated patterns over time [ , , ]. Advancing previous approaches, the PSD complexity score expresses several characteristics such as amplitude, number of interdigitations and looping patterns in one score .
According to a recent study comparing all available complexity scores in mammals, the STFT with PSD calculation captured the above-mentioned characteristics better than other available scores: owing to its Fourier foundation, it was robust as the statistical transformations captured discrete and non-stationary characteristics . The PSD complexity score is of orthodontic interest because it can support the categorisation of patients with respect to suture complexity and therefore inform treatment planning. However, calculating the complexity score involves several steps of data preparation and analysis. This requires knowledge of and access to the relevant software. Nevertheless, this study indicates a high potential of calculating PSD complexity scores for suture evaluation and comparisons. The conducted analysis investigated age and sex as determinants of sutural morphology. Our results indicated that age significantly increased complexity, which contradicts the results of Korbmacher et al., who concluded that interdigitation is not age-dependent . Yet, Korbmacher et al. calculated complexity based on linear measurements using the sutural interdigitation index . Although such a sutural interdigitation index captures the number of interdigitations well, the PSD score applied in our study additionally captures interdigitation amplitude . More precisely, the PSD score differentiates between outlines with many shallower lobes and those with fewer deeper lobes, in contrast to the linear length measurements . These differences in the results based on different complexity metrics suggest that age might increase sutural complexity due to increased interdigitation amplitudes. Other factors that were not considered in this study may possibly impact suture morphology, too. For example, Cheronet et al. concluded that structural factors such as the position along the cranial vault and adjacent sutures play an essential role for midpalatal suture morphology.
Another structural factor not considered in our study, but related to rapid maxillary expansion success, is the age-progressive bone obliteration of the midpalatal, pterygopalatine and pterygomaxillary sutures . Also, extrinsic parameters such as mechanical forces or genetic factors may possibly have an impact . Another limitation of our study relates to the mean sample age; due to the greater radiation risk for young children, our access to data for these age groups was limited. In our study, the youngest age group ranged from 9 to 14 years. In this context, Kinzinger et al. detected structural changes affecting the outcome of rapid maxillary expansion after 10 to 12 years of age . Also, it is unclear to what extent the CBCT resolution affects the complexity score due to compromised traceability of the suture outline. To improve suture outline traceability, we set case-specific thresholds, which carries a risk of yielding different palate shapes. Future research should further investigate the potential and clinical relevance of complexity scores based on human CBCTs using larger sample sizes. Research could focus on whether the score can support decision-making in orthodontic treatment, such as the decision to surgically or non-surgically correct maxillary transverse discrepancies. Future work could analyse morphological shape variations, reveal shape patterns and calculate complexity scores of patients with craniosynostosis based on the methodologies applied in this paper. This study performed a geometric morphometric analysis and a complexity analysis of human midpalatal sutures. Applied to CBCTs, the methodologies revealed shape variation and quantified complexity. Thereby, our study demonstrates the applicability of complexity scores to human CBCTs of palatal sutures, contributing to comprehensive sutural assessments.
Clinical assessment of brain adaptation following multifocal intraocular lens implantation
baf19146-f1d1-471b-9e58-6a9bcbab49e8
11887361
Ophthalmologic Surgical Procedures[mh]
In cataract surgery, multifocal intraocular lenses (IOLs) have been widely adopted as a treatment for presbyopia, replacing conventional monofocal IOLs. This enables patients to achieve both distance and near vision, significantly improving their postoperative quality of life . However, for patients to adapt to multifocal IOLs, the brain must undergo "brain adaptation" to process new visual information . The speed of brain adaptation varies among individuals, and quantifying this speed is clinically significant. The Mini-Mental State Examination (MMSE) is a screening test widely used to evaluate cognitive function and detect early dementia . It comprehensively assesses orientation to time and place, memory, attention, calculation, language ability, and visuospatial cognition. The MMSE involves tasks such as recalling the current date and location, memorizing and reproducing three words, performing simple calculations, and copying geometric shapes. The test can typically be completed within 10 min, and a score below 23 indicates possible dementia. This study aimed to retrospectively evaluate the speed of brain adaptation following multifocal IOL implantation using the MMSE test as a clinical tool. The ethics committee of Nishi Eye Hospital approved this retrospective study, which followed the tenets of the Declaration of Helsinki. Subjects This study included 24 eyes of patients who underwent cataract surgery with multifocal intraocular lens (IOL) implantation. The types of multifocal IOLs and the number of cases are summarized in Table . The IOL types included FineVision PodF (BVI/PhysIOL), PanOptix (Alcon), Intensity (Hanita Lenses), Tecnis Synergy (J&J), and Vivity (Alcon). Grouping Patients were divided into two groups based on postoperative corrected distance visual acuity (CDVA) and the presence of visual disturbances. Near visual acuity was measured at a distance of 30 cm, ensuring consistency across all participants. 
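The MMSE cutoff described above can be expressed as a minimal screening flag. This is an illustrative Python sketch of the scoring rule stated in the text (total out of 30, below 23 suggesting possible dementia), not part of the authors' analysis; the function name is our own, and MMSE is a screening instrument, not a diagnosis.

```python
def mmse_screen(score, max_score=30):
    """Interpret a total MMSE score using the cutoff given in the text:
    a score below 23 suggests possible dementia (screening only)."""
    if not 0 <= score <= max_score:
        raise ValueError("MMSE total must be between 0 and 30")
    return "possible dementia" if score < 23 else "within normal screening range"
```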
Group A comprised 14 eyes of patients with CDVA better than LogMAR 0 and no visual disturbances at one week postoperatively, with a mean age of 62 ± 10 years. Group B included 10 eyes of patients whose CDVA did not reach LogMAR 0.1 and who experienced persistent visual disturbances at one month postoperatively, with a mean age of 76 ± 5.6 years. Surgical technique Phacoemulsification cataract surgery was performed by three experienced surgeons using the Centurion® Vision System (Alcon) under topical anesthesia, supplemented by sub-Tenon's anesthesia using xylocaine (lidocaine). After disinfecting the conjunctival sac and surrounding skin, a 2.8 mm corneal incision was created on-axis at the corneal limbus, accompanied by two side-port incisions. A viscoelastic material was injected, and a continuous curvilinear capsulorhexis (CCC) with a 5.0 mm diameter was created. The nucleus was emulsified and aspirated using the phacoemulsification technique, with the "crack method" employed for nucleus division when necessary. The residual cortex was fully removed using an irrigation/aspiration (I/A) technique. The multifocal IOL was inserted into the capsular bag using a specialized injector system and appropriately positioned. Corneal wounds were closed via hydration. For cases involving FineVision and Intensity IOLs, femtosecond laser-assisted cataract surgery (FLACS) was performed using the Catalys® Precision Laser System (J&J), involving a 4.8 mm capsulorhexis and cruciate lens fragmentation prior to proceeding with the standard phacoemulsification and IOL insertion techniques. Postoperative management Postoperatively, all patients were prescribed a regimen of levofloxacin 0.5% eye drops administered three times per day, fluorometholone 0.1% eye drops three times per day, and diclofenac sodium eye drops one to three times per day, depending on the severity of inflammation. Evaluation criteria The evaluation included multiple parameters.
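The group assignment described at the start of this section can be sketched as a small Python function. This is illustrative only: lower LogMAR means better vision, and the handling of borderline values (e.g. whether "better than LogMAR 0" includes exactly 0) is our assumption, not stated by the authors.

```python
def assign_group(cdva_logmar_1w, disturb_1w, cdva_logmar_1m, disturb_1m):
    """Rough grouping rule from the text (lower LogMAR = better vision).
    Group A: CDVA of LogMAR 0 or better and no disturbances at one week.
    Group B: CDVA worse than LogMAR 0.1 and persistent disturbances at one month.
    Returns 'A', 'B', or None when neither rule applies."""
    if cdva_logmar_1w <= 0.0 and not disturb_1w:
        return "A"
    if cdva_logmar_1m > 0.1 and disturb_1m:
        return "B"
    return None
```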
The MMSE test scores were recorded out of 30 points, and the completion time in seconds was also measured. The MMSE test was conducted postoperatively, at least one week after surgery, to assess cognitive function in a standardized manner. To minimize the influence of visual acuity on MMSE performance, all questions were read aloud by an examiner, and participants responded verbally. The only item requiring visual input was a simple figure-copying task, which had minimal impact on the total test duration. Visual function was assessed by preoperative and postoperative visual acuity, refraction, astigmatism, higher-order aberrations (HOAs), and pupil size. Pupil diameter measurements were performed under mesopic and photopic conditions. Inclusion criteria Patients included in this study met the following criteria. Tear break-up time (BUT) was ≥ 10 s with no corneal fluorescein staining. No posterior capsule opacification was present at one month postoperatively. Significant ocular pathologies such as glaucoma or macular disorders were absent. Additionally, patients exhibited a clear cornea and no significant anterior chamber inflammation at one week postoperatively. There were no significant vitreous opacities, and hearing impairments that could affect MMSE test performance were absent. Statistical analysis T-tests were used to compare means between groups, with p -values < 0.05 considered statistically significant.
The MMSE test scores were recorded out of 30 points, and the completion time in seconds was also measured. The MMSE test was conducted postoperatively, at least one week after surgery, to assess cognitive function in a standardized manner. To minimize the influence of visual acuity on MMSE performance, all questions were read aloud by an examiner, and participants responded verbally. The only item requiring visual input was a simple figure-copying task, which had minimal impact on the total test duration. Visual function was assessed by preoperative and postoperative visual acuity, refraction, astigmatism, higher-order aberrations (HOAs), and pupil size. Pupil diameter measurements were performed under mesopic and photopic conditions. Patients included in this study met the following criteria. Tear break-up time (BUT) was ≥ 10 s with no corneal fluorescein staining. No posterior capsule opacification was present at one month postoperatively. Significant ocular pathologies such as glaucoma or macular disorders were absent. Additionally, patients exhibited a clear cornea and no significant anterior chamber inflammation at one week postoperatively. There were no significant vitreous opacities, and hearing impairments that could affect MMSE test performance were absent. T-tests were used to compare means between groups, with p -values < 0.05 considered statistically significant. The comparison of preoperative and postoperative measurement values between the two groups is presented in Table . MMSE test scores averaged 28.9 ± 1.7 points in Group A and 29.2 ± 0.69 points in Group B, with no significant difference between the groups ( p = 0.68) (Fig. ). However, the MMSE test completion time was significantly shorter in Group A (256 ± 50 s) compared to Group B (346 ± 67 s) ( p < 0.05) (Fig. ). The percentage of patients completing the MMSE test within five minutes was 93% (13 out of 14 patients) in Group A and 20% (2 out of 10 patients) in Group B. 
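The completion-time comparison above can be checked from the published summary statistics alone. The sketch below uses Welch's t as a stand-in for the study's t-test (the raw data are not available), with the reported values Group A: 256 ± 50 s, n = 14 and Group B: 346 ± 67 s, n = 10.

```python
# Welch's t computed from the summary statistics reported above.
from math import sqrt

def welch_t(m1, s1, n1, m2, s2, n2):
    a, b = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / sqrt(a + b)
    # Welch–Satterthwaite approximation of the degrees of freedom
    df = (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))
    return t, df

t, df = welch_t(256, 50, 14, 346, 67, 10)
print(f"t = {t:.2f}, df ≈ {df:.1f}")  # |t| ≈ 3.59, well beyond the ~2.13 critical value at df ≈ 16
```

The magnitude of t is consistent with the significant difference (p < 0.05) reported in the text.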
Preoperative visual function assessments revealed that uncorrected distance visual acuity (UDVA) was 0.86 ± 0.60 in Group A and 0.60 ± 0.40 in Group B ( p = 0.33). Corrected distance visual acuity (CDVA) was 0.23 ± 0.18 in Group A and 0.20 ± 0.15 in Group B ( p = 0.80). Higher order aberrations (HOAs) were 0.24 ± 0.08 µm in Group A and 0.23 ± 0.05 µm in Group B ( p = 0.57). Pupil diameter measurements under mesopic conditions were 4.4 ± 0.74 mm in Group A and 4.5 ± 0.67 mm in Group B ( p = 0.73). Under photopic conditions, pupil diameter was 3.3 ± 0.36 mm in Group A and 3.0 ± 0.10 mm in Group B, with significantly smaller pupil diameters in Group B ( p < 0.05). At one week postoperatively, UDVA was 0.05 ± 0.13 in Group A and 0.29 ± 0.20 in Group B ( p < 0.05). CDVA was −0.04 ± 0.07 in Group A and 0.23 ± 0.12 in Group B ( p < 0.05). Astigmatism was −0.61 ± 0.59D in Group A and −0.90 ± 0.45D in Group B ( p = 0.19). The spherical equivalent (SE) was −0.61 ± 0.91D in Group A and −1.1 ± 0.7D in Group B ( p = 0.13). At one month postoperatively, UDVA was 0.04 ± 0.12 in Group A and 0.39 ± 0.20 in Group B ( p < 0.05). CDVA was −0.06 ± 0.08 in Group A and 0.19 ± 0.11 in Group B ( p < 0.05). Uncorrected near visual acuity (UNVA) was 0.10 ± 0.09 in Group A and 0.45 ± 0.25 in Group B ( p < 0.05). Corrected near visual acuity (CNVA) was 0.06 ± 0.07 in Group A and 0.31 ± 0.16 in Group B ( p < 0.05). Astigmatism was −0.66 ± 0.49D in Group A and −0.90 ± 0.44D in Group B ( p = 0.32). The spherical equivalent (SE) was −0.75 ± 0.93D in Group A and −1.2 ± 0.86D in Group B ( p = 0.19). First of all, this study aimed to explore whether MMSE completion time could serve as a practical indicator of cognitive processing ability in patients undergoing multifocal IOL implantation. 
While the relationship between neuroadaptation and cognitive processing ability has been intuitively recognized in clinical practice, no previous study has attempted to evaluate this processing ability using a highly simplified clinical paper-based test. Given that the MMSE is widely used and easily applicable in clinical settings, we examined whether its completion time could provide insights into neuroadaptation speed. However, we acknowledge that MMSE completion time does not directly measure neuroadaptation, and its sensitivity may vary with the test environment. In this study, patients were divided into two groups based on postoperative CDVA and the presence of visual disturbances, regardless of patient age. No significant differences were found between the two groups in terms of HOAs, and no case had any clinically significant ocular disease other than cataract that could affect visual acuity. As a result, both uncorrected and corrected visual acuity, for both distance and near vision, were significantly better in Group A. These results may indicate that Group A, which exhibited more rapid brain adaptation, was also younger and had shorter MMSE test completion times. Conversely, Group B, which exhibited a tendency for slower brain adaptation, was older and had longer MMSE test completion times. In other words, this study indicates that the speed of brain adaptation following multifocal intraocular lens (IOL) implantation may be reflected in MMSE test completion time. Although not statistically significant, postoperative astigmatism exceeding −0.75D in Group B may have influenced the visual outcomes in this group [ – ]. Additionally, while the difference in postoperative refractive values (spherical equivalent) was not significant, Group B showed a more myopic tendency. To minimize the influence of these factors, the groups were specifically defined based on CDVA.
The significantly younger age of Group A, which was associated with better visual outcomes, is consistent with previously reported findings . Given that MMSE test scores did not differ significantly between the two groups, this test, originally designed for dementia screening, may not be the best tool for directly assessing the quality of brain adaptation. However, MMSE test completion time likely reflects some aspect of processing ability and speed, and could therefore serve as a useful measure of the brain-adaptation speed that we observe in daily clinical work. Whether patients in Group B would achieve good visual function over a longer postoperative period remains an important question for future investigation. Although methods such as MRI for evaluating the visual cortex have been explored , simple and ethically acceptable tests applicable in routine clinical practice are still needed. This study did not account for the potential influence of IOL type, but the impact of IOL design and the number of focal points on brain adaptation should be investigated in future studies. IOL designs that allow easy brain adaptation from the early postoperative period and achieve rapid visual improvement are likely to remain highly valued in clinical practice. Several limitations exist in this study. First, although patients with severe vitreous opacity were excluded, the degree of vitreous opacity was not quantified, making it difficult to rigorously evaluate its influence . Second, the sample size was limited, and the matching of age and pupil size between the two groups was incomplete, which may have affected the results. Furthermore, contrast sensitivity and flare values were not measured, and a more detailed assessment of visual quality is needed .
Additionally, although appropriate verbal support was provided during the MMSE test, it cannot be completely ruled out that differences in visual acuity influenced the test completion time. Last but not least, a key limitation of this study is the absence of a monofocal IOL control group, which would allow for a clearer distinction between age-related effects and those specifically related to multifocal IOL neuroadaptation. Future studies should incorporate a monofocal IOL control group to further elucidate the distinct neuroadaptation processes associated with multifocal IOLs. Future research should address these limitations by including larger sample sizes, quantifying the degree of vitreous opacity, and incorporating more precise evaluations of visual quality, such as contrast sensitivity and glare measurements [ – ]. Although MMSE is a widely available and practical cognitive assessment tool, it may not be the most suitable test for evaluating neuroadaptation. More refined neurocognitive assessments, such as IQ tests, might provide greater precision in measuring cognitive adaptability. However, ethical and practical constraints led us to conclude that incorporating them in this study was not feasible. Future research should explore alternative assessment tools that balance accuracy with clinical applicability. Thus, further investigations employing more precise and clinically relevant assessment methods are necessary to gain deeper insights into the mechanisms of brain adaptation following multifocal IOL implantation . This study suggests that the speed of brain adaptation following multifocal IOL implantation may be reflected in MMSE test completion time. Future research is needed to further quantify brain adaptation speed using different IOL types, conditions, and refined test methods.
Estrogen-induced circFAM171A1 regulates sheep myoblast proliferation through the oar-miR-485-5p/MAPK15/MAPK pathway
Estrogen is an important hormone that affects muscle development in female animals. It is mainly secreted by the ovary. Previous research suggests that a lack of estrogen induces skeletal muscle cell apoptosis, leading to a loss of skeletal muscle mass and force . Studies on mouse C2C12 (myoblast) cells have shown that estrogen exposure can protect against hydrogen peroxide-induced apoptosis by upregulating HSP27, which binds Caspase-3 and blocks its cleavage-dependent activation, thereby modulating the downstream targets Bcl-2, BAD, AKT, ERK, and MAPK and ultimately preventing apoptosis and promoting the viability of C2C12 cells [ – ]. Studies in rodents have shown that estrogen treatment reduces locomotion-induced HSP70 and HSP72 responses in males or ovariectomized females but has no effect on HSP27 in stifle muscles [ – ]. Wang also revealed that the basal protein levels of HSP70, HSP27, and HSP90 in skeletal muscle were reduced in female rats after the loss of estrogen . In addition, Karvinen et al. reported that estrogen deficiency downregulates a variety of miRNAs that may inhibit the apoptotic pathway, thus leading to increased cell death and a decrease in skeletal muscle mass . However, the detailed molecular mechanisms of estrogen in muscle development remain unclear. Estrogen may play a biological role by inducing the production of circRNAs. In studies of ER-positive breast cancer, estrogen has been found to induce the production of circPGR, which is localized in the cytoplasm and serves as a competitive endogenous RNA (ceRNA) to sponge miR-301a-5p, regulating the expression of a number of cell cycle factors , indicating a mechanism of action by which estrogen functions in animals. CircRNAs in animals are produced by the cyclization of specific exons or a few introns , and they are particularly stable RNAs.
In vivo, circRNAs are generated by the spliceosome through back-splicing, in which the 3′ end of a downstream exon is covalently joined to the 5′ end of an upstream exon; the resulting circle is resistant to the exonuclease RNase R [ – ]. CircRNAs play a critical role in the development of animal muscle, not only by acting as competitive endogenous RNAs that regulate the availability of miRNAs but also by binding to proteins to form functional units that are involved in the modulation of biological functions . For example, circHIPK3 can promote skeletal muscle development in chicken embryos by sponging miR-30a-3p . CircRNA FUT10 targets HOXA9 by binding to miR-365a-3p to ameliorate degenerative muscle disorders . CircNDST1 regulates bovine myogenic cell proliferation and differentiation through the miR-411a/Smad4 axis . CircFoxo3 delays cell cycle progression by forming a ternary complex with p21 and CDK2 . Therefore, the above studies suggest that estrogen can first induce specific circRNAs and that these circRNAs can regulate miRNA-gene pathways to participate in muscle development. Most of these circRNAs have been characterized in human, mouse, bovine, and porcine myoblasts. The expression of circRNAs throughout in vitro myoblast differentiation in mouse and human cells has been analyzed, and conserved circRNAs have been identified across species during the process of myogenesis and the development of Duchenne muscular dystrophy . Wei et al. obtained circRNA profiles of bovine skeletal muscle at two stages of development (embryonic and mature muscle), revealing for the first time their participation in bovine muscle formation . Sun et al. demonstrated that a number of circRNAs are involved in muscle growth in the longissimus dorsi muscles of Rand and Blue Pond pigs .
Studies have also identified circRNAs in sheep muscle but have not yet described differences in the expression and action of circRNAs in the muscles of ovariectomized and intact sheep. Skeletal muscle originates from somite-derived myogenic progenitor cells (MPCs), and its formation mainly involves myogenic cell proliferation and differentiation, myotube fusion, and myofibril formation . Myogenesis is regulated by myogenic factors, including the Pax family members Pax3 and Pax7 and the MRF family (Myf5, MyoD, MyoG, and MRF4) . Pax3 is required for the migration of MPCs and, together with the MRF family, mediates myoblast differentiation; Pax3 activates Myf5, which, together with MRF4, affects the process of myoblast proliferation and differentiation . P-CaMK-II promotes myogenic differentiation and the formation of type II myofibers by inhibiting β-catenin and p-ERK1/2 dephosphorylation via activation of the Zac1/GPR39 system . MPCs also give rise to muscle stem cells (also called satellite cells), which are activated immediately after muscle damage and divide into proliferating myoblasts characterized by the presence of Pax7 and MyoD . Although the transcriptional regulation of myogenesis has been explored, the essential functions of noncoding RNAs (e.g., circRNAs and miRNAs) during myogenesis are worthy of further investigation. The objective of this study was to characterize estrogen-induced circRNAs with potential functions in modulating muscle development in sheep. Using high-throughput RNA sequencing, we first systematically investigated the expression profiles and functions of circRNAs in the longissimus dorsi muscle of 8-month-old small-tailed Han sheep with and without ovaries.
One significantly upregulated circRNA (novel_circ_0011822) in intact ewes, which was named circFAM171A1 on the basis of its source gene, was highlighted. Bioinformatics analysis revealed that circFAM171A1-oar-miR-485-5p-MAPK15 could form a ceRNA, in which MAPK15 is an important gene in the MAPK signaling pathway. In addition, we verified the influence of estrogen-induced ceRNAs on the growth and progression of sheep myoblasts in vitro. These findings may help to elucidate the molecular mechanisms by which estrogen regulates muscle progression in sheep. Ethics statement The study was approved by the IAS-CAAS Animal Ethics Committee under approval number IAS2019-63. In strict compliance with relevant regulations, we are committed to promoting animal science research to contribute to the development of agriculture in China. Sample collection and preparation In this study, we selected 10 small-tailed Han sheep ewes aged 2 months from Wulat Zhongqi Farm, Bayannur City, Inner Mongolia Autonomous Region, China. For comparative observations, the sheep were randomized into two groups: the ovariectomized group (n = 5, OR-STH) and the sham surgery group (n = 5, STH). There were no significant differences between the two groups in terms of height, weight, or age. After surgery, both groups of sheep were kept in the same feeding environment. Over a 6-month period, sheep weights were measured, and tissue samples were collected from the longissimus dorsi muscle. The mean body weights were 72.4 ± 1.86 kg and 88.4 ± 3.97 kg in the OR-STH and STH groups, respectively ( P < 0.05). The estrogen levels in the serum of the sheep were 28.71 ± 2.73 pg/mL and 12.23 ± 0.82 pg/mL in the STH group and OR-STH group, respectively ( P < 0.05). All the tissue samples were immediately frozen in liquid nitrogen to ensure their stability. The samples were subsequently stored in a cryogenic environment at −80 °C until further analysis. 
Library preparation and Illumina sequencing In this study, we first extracted total RNA from 10 muscle tissue samples using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. A total of 2 μg of RNA (concentration ≥ 300 ng/μL, OD260/280 between 1.8 and 2.2) was used as the input for constructing the miRNA and cDNA libraries. To remove ribosomal RNA (rRNA), we used the Epicenter Ribo-Zero™ rRNA Removal Kit (Epicenter, Madison, WI, USA). After rRNA removal, we constructed sequencing libraries using the NEBNext® Ultra™ Directional RNA Library Prep Kit for Illumina® (NEB, Ipswich, MA, USA) following the manufacturer's instructions. Throughout the process, we purified the products using the AMPure XP system and assessed library quality via gel electrophoresis and the NanoDrop 2000, Qubit 2.0, and Agilent Bioanalyzer 2100 systems. Finally, the libraries were sequenced on an Illumina HiSeq 2500 platform, yielding 150 bp paired-end reads. Identification of differentially expressed circRNAs and mRNAs First, we estimated circRNA expression levels in the constructed muscle tissue libraries from the Illumina sequencing data using FPKM/read-count values. To ensure the accuracy of the results, two software programs, find_circ and CIRI2, were used for circRNA identification, and the intersection was taken as the final result. To identify differentially expressed mRNAs, we used the DESeq R package (version 4.2.1), with a q value less than 0.05 and a |log2FoldChange| greater than 1 as the significance thresholds. Next, we used the DEGseq R package to analyze differentially expressed circRNAs (DECs) on the basis of normalized transcripts per million (TPM) values. During the analysis, we adjusted the q value and set the thresholds for significant DECs as a q value less than 0.05 and a |log2FoldChange| greater than 1.
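The significance filter above (q < 0.05 and |log2 fold change| > 1) can be sketched as a simple list filter. The first record reuses the circRNA ID highlighted in this study; the numeric values and the other IDs are hypothetical placeholders.

```python
# Sketch of the differential-expression filter described above: a transcript
# is called differentially expressed when q < 0.05 and |log2FC| > 1.
results = [
    {"id": "novel_circ_0011822", "log2fc": 1.8,  "qvalue": 0.003},
    {"id": "novel_circ_0000451", "log2fc": 0.6,  "qvalue": 0.001},  # effect too small
    {"id": "novel_circ_0007213", "log2fc": -2.1, "qvalue": 0.21},   # not significant
]

de = [r for r in results if r["qvalue"] < 0.05 and abs(r["log2fc"]) > 1]
print([r["id"] for r in de])  # → ['novel_circ_0011822']
```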
This series of analyses provided us with information about the differential expression of circRNAs in muscle tissues as well as the related genes, which provided an important basis for further studies. For a complete list of circRNAs, see Supplementary Table . Comprehensive functional enrichment analysis In the present study, we used functional annotation to analyze the host genes of DE circRNAs on the basis of GO and KEGG annotations. First, for the source genes, we performed GO annotation on the basis of the corresponding genes and their GO annotations in NCBI. This information was obtained from the following database: https://ftp.ncbi.nlm.nih.gov/gene/DATA/gene2go.gz . Next, we used KOBAS software to test the statistical enrichment of host genes associated with DE circRNAs in KEGG pathways . A threshold of P < 0.05 was used to determine significance in the enrichment analysis. This series of functional annotation analyses helped us to gain a deeper understanding of the functions of DE circRNAs in organisms and the roles of their related genes in specific pathways. ceRNA and PPI networks We first constructed a ceRNA network based on circRNA, miRNA and mRNA binding sites predicted from whole-transcriptome sequencing data and visualized the circRNA‒miRNA‒mRNA interaction network with Cytoscape software. This step helped us understand the functions of circRNAs in organisms and their interactions with other genes. Next, we constructed protein‒protein interaction (PPI) networks of the differentially expressed genes (DEGs) using the STRING database (version 11.5, https://string-db.org/ ; species: Ovis aries ). To construct the PPI network, we collected target genes from the database and retained protein pairs with combined scores greater than 700. Finally, we used Cytoscape software to visualize the protein pairs.
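The PPI-filtering step above can be sketched as a score cutoff followed by neighbor collection. The 700 cutoff comes from the text; the protein pairs and scores below are hypothetical placeholders, and the real analysis used the STRING database and Cytoscape rather than this toy structure.

```python
# Sketch of the PPI-network step described above: keep only protein pairs
# with combined score > 700, then collect each protein's neighbors.
from collections import defaultdict

pairs = [
    ("MAPK15", "MAPK1", 820),
    ("MAPK15", "PAX7",  310),   # below the 700 cutoff, dropped
    ("MYOD1",  "MYOG",  940),
]

network = defaultdict(set)
for p1, p2, score in pairs:
    if score > 700:
        network[p1].add(p2)
        network[p2].add(p1)

print(sorted(network))  # → ['MAPK1', 'MAPK15', 'MYOD1', 'MYOG']
```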
This series of analyses helped us gain insight into the interactions between DEGs, providing strong support for studying the physiological functions of organisms and the mechanisms of disease occurrence. Immunofluorescence (IF) We first inoculated sheep primary myoblasts into 6-well plates at a density of 1 × 10⁶ cells per well, with three replicates per group. Polylysine-treated glass coverslips were placed in the 6-well plates and removed after 16 h. To fix the cells, we treated them with 4% formaldehyde for 15 min and then washed them with PBS 3 times for 3 min each. Next, the samples were incubated with 10% goat serum (China) for 30 min. For immunofluorescence staining, the samples were incubated with Desmin (1:500) and MYOD1 (1:500) antibodies (Proteintech, USA) overnight at 4 °C. After the incubation was complete, the cells were washed with PBS five times for 5 min each. Next, the samples were incubated with fluorescent IgG (1:2000) (Saixin, China) for 1 h at 37 °C. Finally, DAPI (Beyotime, China) was added to stain the nuclei for 5 min, after which the cells were washed 5 times for 5 min each. Through this series of experiments, we successfully stained sheep primary myoblasts, which provided the basis for subsequent studies. Ribonuclease R (RNase R) One microgram of sheep muscle tissue RNA was incubated with RNase R (1 U/μg) at 37 °C for 10 min. cDNA was reverse transcribed from the RNase R-treated RNA and from mock-treated control RNA, and RT‒qPCR was used to measure the expression of circRNAs and the corresponding linear transcripts. Fluorescence in situ hybridization (FISH) A FISH kit SA-Biotin System (JiMa, Shanghai, China) and a circFAM171A1 probe mixture (Cy3-labeled) were used (Table ). FISH was conducted according to the manufacturer's instructions to assess the localization of circFAM171A1 in sheep myoblasts.
The procedure was as follows: the cells were cultured in 6-well plates overnight and fixed with 4% paraformaldehyde for 15 min at room temperature; probe solution (1 μL of 1 μM biotin probe + 1 μL of 1 μM SA-Cy3 + 8 μL of PBS) was added to the medium for 30 min at 37 °C, and the plates were kept in an incubator at 37 °C overnight (12–16 h) in the dark to allow hybridization. The cells were stained with DAPI solution (2 μg/mL) for 15 min at room temperature in the dark, and an anti-fluorescence quenching blocker was then added. Images were obtained with a laser scanning confocal microscope. Nucleoplasmic separation Sheep myoblasts were seeded at a density of ≤ 3 × 10⁶ in 6-cm culture dishes. After 24 h of culture, the cells were washed twice with PBS, and the PBS was discarded. Two hundred microlitres of prechilled buffer J was added to the dish to cover the cell surface and left for 5 min; the lysate was collected, transferred to an RNase-free tube, and centrifuged at 14,000 × g for 10 min at 4 °C. The supernatant (cytoplasmic RNA) was pipetted into another centrifuge tube. To the precipitate (nuclear RNA), 200 μL of buffer SK was added, followed by another 400 μL of buffer SK, and the mixture was vortexed for 10 s; 200 μL of anhydrous ethanol was then added, and the mixture was vortexed for 10 s. The mixture was transferred to a centrifuge column and centrifuged at 6000 rpm for 1 min at 4 °C; the flow-through was discarded, and the column was returned to the collection tube. A total of 400 μL of wash solution A was added, and the column was centrifuged at 14,000 × g and 4 °C for 1 min. The flow-through was discarded, the column was washed once more, placed back into the collection tube, and centrifuged at 14,000 × g and 4 °C for 2 min.
Then, 50 μL of elution buffer E was added, and the column was centrifuged at 6000 rpm and 4 °C for 2 min, followed by centrifugation at 14,000 × g and 4 °C for 1 min. The RNA concentration was then measured, and the samples were stored at −80 °C. Expression validation by RT‒qPCR Reverse transcription of mRNA and miRNA was performed using the HiScript® III All-in-one RT SuperMix kit (Vazyme, Nanjing, China) and the miRNA first-strand cDNA synthesis kit (Vazyme, Nanjing, China). RT-qPCR was performed on a Roche LightCycler® 480 II system (Roche Applied Science, Mannheim, Germany) with Taq Pro Universal SYBR qPCR Master Mix (Vazyme). The RT-qPCR procedure was as follows: initial denaturation at 95 °C for 5 min, followed by 35 cycles of denaturation at 95 °C for 5 s and annealing/extension at 60 °C for 30 s. The data were analyzed via the 2^−ΔΔCt method, with the sheep β-actin and U6 genes used as internal controls. Relative expression was analyzed by t tests, and the significance of differences was assessed with SPSS 20.0. The primers for RT‒qPCR were designed with Primer 5 software and synthesized by Sangon Biotech Co. (Shanghai). The primer sequences are listed in Table . Cell culture The longissimus dorsi muscle tissues from both sides of the fetal spine of 90-day-old small-tailed Han sheep fetuses were isolated under aseptic conditions; connective tissue and blood vessels were removed, and the samples were washed with PBS (2% penicillin/streptomycin). The muscle samples were minced and digested with 0.25% trypsin (Solarbio, Beijing, China) for 18 h at 4 °C and then cultured in an incubator (37 °C, 5% CO₂) for approximately 2 h.
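The 2^−ΔΔCt calculation described above can be made concrete with a short worked sketch; the Ct values below are hypothetical placeholders, and the function name is illustrative.

```python
# Worked sketch of the 2^-ΔΔCt method used above: Ct values are normalized to
# the internal control (β-actin/U6), then to the control group.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref              # ΔCt of the treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt of the control sample
    dd_ct = d_ct_sample - d_ct_control            # ΔΔCt
    return 2 ** -dd_ct

# One cycle earlier than control after normalization → ~2-fold expression.
print(relative_expression(24.0, 16.0, 25.0, 16.0))  # → 2.0
```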
The isolated cells were inoculated into 100 mm culture dishes and cultured in complete medium (DMEM-F12, 10% FBS and 1% penicillin/streptomycin). After the cells reached more than 90% confluence, they were transferred to 6-well plates for subsequent experiments. The HEK293T cell line was cultivated under the same conditions. Plasmid construction and transfection The pcDNA3.1-circFAM171A1 overexpression vector and interfering siRNAs were designed and synthesized on the basis of the sequence of circFAM171A1. The mimics and inhibitors of oar-miR-485-5p were designed and synthesized on the basis of the sequence of oar-miR-485-5p. The MAPK15 overexpression vector pIRES2-EGFP-MAPK15 and the interfering si-MAPK15 were designed and synthesized on the basis of the MAPK15 sequence provided by NCBI. The overexpression and interference vectors, mimics and inhibitors were synthesized by GenePharma (Shanghai). All the vectors were sequence verified, and the sheep myoblasts and HEK293T cells were transfected with Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. Cell growth and gene expression were assessed 48 h posttransfection. Western blot Proteins were extracted from the cell samples with radioimmunoprecipitation assay (RIPA) buffer (Solarbio, Beijing, China) containing 1% PMSF. The protein concentration was measured with a BCA assay kit (Solarbio, Beijing, China). Proteins were separated on a 10% SDS-polyacrylamide gel (Bio-Rad, Hercules, CA, USA) and then transferred to a polyvinylidene fluoride (PVDF) membrane. The membranes were incubated with specific primary antibodies against PCNA, CDK2 and Pax7 and the corresponding secondary antibodies, developed with an ultrasensitive ECL chemiluminescent reagent (Beyotime, Beijing, China), and imaged and archived with an Odyssey CLx imaging system (LI-COR).
The relative expression level of the target protein was calculated as the ratio of the gray value of the target protein band to that of GAPDH/β-tubulin.

Cell proliferation assay

The proliferation of sheep myoblasts was detected with a Cell Counting Kit-8 (CCK-8) (Beyotime, Beijing, China) according to the manufacturer's instructions. After transfection of the relevant plasmids, 10 μL of CCK-8 reagent was added to each well at 0, 6, 12, 24, and 48 h. After 2 h of incubation, the proliferation of the myoblasts was assessed by measuring the absorbance at 450 nm with a microplate reader. Proliferation was also detected with an EdU Cell Proliferation Detection Kit (Beyotime, Beijing, China). After plasmid transfection, the cells were cultured for 48 h, EdU working solution prewarmed to 37 °C was added, and the cells were incubated for 2 h. The cells were visualized and photographed under a fluorescence microscope (Leica, Germany).

Dual-luciferase reporter assay

Sheep myoblasts were inoculated evenly into 24-well cell culture plates. When the desired cell density was reached, the cells were cotransfected with psiCHECK2-circFAM171A1-WT or psiCHECK2-circFAM171A1-MUT, or with psiCHECK2-MAPK15-WT or psiCHECK2-MAPK15-MUT, together with oar-miR-485-5p mimics or NC mimics. Luciferase detection was performed according to the manufacturer's directions using a Dual-Luciferase Detection Kit (Vazyme, Nanjing, China). Luciferase activity was recorded 48 h after transfection and measured with a multimode microplate reader (EnSpire, PerkinElmer, USA).

RNA pull-down assay

A pull-down assay with biotinylated miRNA was performed. Briefly, a 3′-end biotinylated oar-miR-485-5p mimic or control RNA (50 nM, GenePharma, Shanghai, China) was transfected into sheep myoblasts. After 24 h, the cells were washed with phosphate-buffered saline (PBS) and harvested in lysis buffer supplemented with 50 U of RNaseOUT.
The biotin-coupled RNA complex was collected with a Dynabeads MyOne Streptavidin C1 kit (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. The bead-bound RNA (pulled-down RNA) was isolated using TRIzol reagent. The abundance of circFAM171A1 in the isolated fractions was evaluated by RT-qPCR analysis. Each experiment was performed three times.

Estradiol assay

17β-Estradiol (10 mM in DMSO) was purchased from MedChemExpress (New Jersey, USA). To establish the optimal concentration for the experiment, estradiol was serially diluted (0 nM, 1 nM, 10 nM, and 100 nM) and added to the myoblast culture medium together with the suspended cells. The cells were cultured in the estradiol-containing medium for 48 h, after which RNA and proteins were extracted.

Statistical analysis

All analyses were performed with at least three technical replicates. The data are expressed as the mean ± standard error of the mean (SEM) and were plotted with GraphPad Prism software. Statistical data were analyzed using SPSS 20 (SPSS Inc., Chicago, IL, USA), with independent-samples t tests for comparisons between two groups and one-way ANOVA for comparisons among more than two groups. Statistical significance is expressed as **P < 0.01, *P < 0.05.

The study was approved by the IAS-CAAS Animal Ethics Committee under approval number IAS2019-63. In strict compliance with relevant regulations, we are committed to promoting animal science research to contribute to the development of agriculture in China. In this study, we selected 10 small-tailed Han sheep ewes aged 2 months from Wulat Zhongqi Farm, Bayannur City, Inner Mongolia Autonomous Region, China. For comparative observations, the sheep were randomized into two groups: the ovariectomized group (n = 5, OR-STH) and the sham surgery group (n = 5, STH). There were no significant differences between the two groups in height, weight, or age.
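The independent-samples t test named in the statistical analysis above (the study used SPSS) can be sketched in pure Python; the body weights below are illustrative values only, not the study's raw data.

```python
# Pooled-variance two-sample t statistic, a minimal stand-in for the
# independent-samples t test run in SPSS; the group values are made up.
from statistics import mean, variance
from math import sqrt

def independent_t(a, b):
    """Return the pooled-variance t statistic for two independent samples."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

sth    = [86.1, 90.2, 88.4, 85.9, 91.4]   # hypothetical body weights (kg)
or_sth = [70.3, 74.1, 72.4, 71.0, 74.2]
t = independent_t(sth, or_sth)
print(round(t, 2))
```

The resulting t statistic would be compared against the t distribution with n1 + n2 − 2 degrees of freedom; in practice a statistics package (SPSS, SciPy) reports the P value directly.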
After surgery, both groups of sheep were kept in the same feeding environment. Over a 6-month period, sheep weights were measured, and tissue samples were collected from the longissimus dorsi muscle. The mean body weights were 72.4 ± 1.86 kg and 88.4 ± 3.97 kg in the OR-STH and STH groups, respectively (P < 0.05). The serum estrogen levels were 28.71 ± 2.73 pg/mL and 12.23 ± 0.82 pg/mL in the STH and OR-STH groups, respectively (P < 0.05). All the tissue samples were immediately frozen in liquid nitrogen to ensure their stability and subsequently stored at −80 °C until further analysis.

In this study, we first extracted total RNA from the 10 muscle tissue samples using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. A total of 2 μg of RNA per sample (concentration ≥ 300 ng/μL, OD260/280 between 1.8 and 2.2) was used as input for constructing the miRNA and cDNA libraries. To remove ribosomal RNA (rRNA), we used the Epicenter Ribo-Zero™ rRNA Removal Kit (Epicenter, Madison, WI, USA). After rRNA removal, we constructed sequencing libraries using the NEBNext® Ultra™ Directional RNA Library Prep Kit for Illumina® (NEB, Ipswich, MA, USA) following the manufacturer's instructions. Throughout the process, we purified the products with the AMPure XP system and assessed library quality via gel electrophoresis and the NanoDrop 2000, Qubit 2.0, and Agilent Bioanalyzer 2100 systems. Finally, the libraries were sequenced on an Illumina HiSeq 2500 platform, yielding 150 bp paired-end reads.

First, we estimated circRNA expression levels in the constructed muscle tissue libraries from the Illumina sequencing data using FPKM/read-count values. To ensure the accuracy of the results, two software programs, find_circ and CIRI2, were used for circRNA identification, and the intersection of their calls was taken as the final result.
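Taking the intersection of the find_circ and CIRI2 calls, as described above, amounts to keeping only circRNAs reported by both tools at the same back-splice coordinates. A minimal sketch, with made-up coordinates:

```python
# CircRNA calls keyed on (chromosome, back-splice start, end, strand);
# the coordinates below are illustrative, not real find_circ/CIRI2 output.
find_circ = {("chr1", 1200, 4800, "+"), ("chr2", 500, 900, "-"), ("chr3", 10, 80, "+")}
ciri2     = {("chr1", 1200, 4800, "+"), ("chr3", 10, 80, "+"), ("chr4", 7, 70, "-")}

# Consensus set: circRNAs supported by both detection tools
consensus = sorted(find_circ & ciri2)
print(len(consensus))  # 2
```

Requiring agreement between two independent callers is a common way to reduce false-positive back-splice junctions before downstream quantification.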
To identify differentially expressed mRNAs, we used the DESeq R package (R version 4.2.1), with thresholds of a q value less than 0.05 and a |log2FoldChange| greater than 1. Next, we used the DEGseq R package to analyze differentially expressed circRNAs (DECs) on the basis of normalized TPM values. The q values were adjusted, and the thresholds for significant DECs were set as a q value less than 0.05 and a |log2FoldChange| greater than 1. This series of analyses provided information about the differential expression of circRNAs in muscle tissues as well as the related genes, which provided an important basis for further studies. For a complete list of circRNAs, see Supplementary Table .

In the present study, we used functional annotation to analyze the host genes of DE circRNAs on the basis of GO and KEGG annotations. First, we performed GO annotation of the host genes using the corresponding genes and their GO annotations in NCBI, stored in the following database: https://ftp.ncbi.nlm.nih.gov/gene/DATA/gene2go.gz . Next, we used KOBAS software to test the statistical enrichment of the host genes of DE circRNAs in KEGG pathways . The significance threshold for the enrichment analysis was set at P < 0.05. These functional annotation analyses helped us gain a deeper understanding of the functions of DE circRNAs and the roles of their host genes in specific pathways.

We first constructed a ceRNA network on the basis of circRNA, miRNA and mRNA binding sites predicted from the whole-transcriptome sequencing data and then visualized the circRNA‒miRNA‒mRNA interaction network with Cytoscape software. This step helped us understand the functions of circRNAs in organisms and their interactions with other genes.
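The ceRNA network assembly described above joins circRNA–miRNA pairs to miRNA–mRNA pairs through the shared miRNA. A minimal sketch (the miRNA–mRNA pairings here are illustrative placeholders, not the study's predicted pairs):

```python
# Joining predicted binding pairs into circRNA -> miRNA -> mRNA axes,
# as would be exported to Cytoscape. Pairings are illustrative only.
circ_mirna = [("circFAM171A1", "oar-miR-485-5p")]
mirna_mrna = [("oar-miR-485-5p", "MAPK15"), ("oar-miR-485-5p", "GIPC1")]

# A circRNA links to an mRNA only through a miRNA present in both pair lists
axes = [(c, mi, m)
        for c, mi in circ_mirna
        for mi2, m in mirna_mrna
        if mi == mi2]
for axis in axes:
    print(" -> ".join(axis))
```

Each resulting triple is one candidate sponge axis; in the study these edges were then drawn as a network in Cytoscape.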
Next, we constructed protein‒protein interaction (PPI) networks of the differentially expressed genes (DEGs) using the STRING database (version 11.5, https://string-db.org/ ; species: Ovis aries ). To construct the PPI network, we collected the target genes and selected protein pairs with combined scores greater than 700 from the STRING database. Finally, we used Cytoscape software to visualize the protein pairs. This analysis helped us examine the interactions between DEGs, providing support for studying the physiological functions of organisms and the mechanisms of disease occurrence.

We first inoculated sheep primary myoblasts into 6-well plates at a density of 1 × 10^6 cells per well, with three replicates per group. Polylysine-treated glass coverslips were placed in the 6-well plates and removed after 16 h. To fix the cells, we treated them with 4% formaldehyde for 15 min and then washed them with PBS 3 times for 3 min each. Next, the samples were blocked with 10% goat serum (China) for 30 min. For immunofluorescence staining, the samples were incubated with Desmin (1:500) and MYOD1 (1:500) antibodies (Proteintech, USA) overnight at 4 °C. After the incubation, the cells were washed with PBS five times for 5 min each and then incubated with fluorescent IgG secondary antibody (1:2000) (Saixin, China) for 1 h at 37 °C. Finally, DAPI (Beyotime, China) was added to stain the nuclei for 5 min, after which the cells were washed 5 times for 5 min each. Through this series of experiments, we identified sheep primary myoblasts, which provided the basis for subsequent studies.

One microgram of sheep muscle tissue RNA was added to RNase R reagent (1 U/μg) and incubated at 37 °C for 10 min.
The cDNA was reverse transcribed from the RNase R-treated RNA and the mock-treated control RNA, and RT‒qPCR was used to measure the expression of the circRNAs and the corresponding linear transcripts.

A FISH kit (SA-Biotin System; JiMa, Shanghai, China) and a circFAM171A1 probe mixture (Cy3 labeled) were used (Table ). FISH was conducted according to the manufacturer's instructions to assess the localization of circFAM171A1 in sheep myoblasts. The procedure was as follows: the cells were cultured in 6-well plates overnight, fixed with 4% paraformaldehyde for 15 min at room temperature, incubated with probe solution (1 μL of 1 μM biotin probe + 1 μL of 1 μM SA-Cy3 + 8 μL of PBS) added to the medium for 30 min at 37 °C, and placed in an incubator at 37 °C overnight (12–16 h) in the dark to allow hybridization. The cells were stained with DAPI solution (2 μg/mL) for 15 min at room temperature in the dark, and an anti-fluorescence quenching mounting agent was then added. Images were obtained with a laser scanning confocal microscope.

Sheep myoblasts were inoculated at a density of ≤ 3 × 10^6 cells in 6-cm culture dishes. After 24 h of culture, the cells were washed twice with PBS, and the PBS was discarded. Two hundred microlitres of prechilled buffer J was added to the culture dish to cover the cell surface and left for 5 min; the lysate was collected, transferred to an RNase-free tube, and centrifuged at 14,000 × g for 10 min at 4 °C. The supernatant (cytoplasmic RNA) was pipetted into another centrifuge tube. To the pellet (nuclear RNA), 200 μL of buffer SK was added, followed by another 400 μL of buffer SK, and the mixture was vortexed for 10 s; 200 μL of anhydrous ethanol was then added, and the mixture was vortexed for 10 s.
The mixture was transferred to a centrifuge column and centrifuged at 6000 rpm for 1 min at 4 °C; the flow-through was discarded, and the column was returned to the collection tube. A total of 400 μL of wash solution A was added, and the column was centrifuged at 14,000 × g and 4 °C for 1 min. The flow-through was discarded, the column was washed once more and placed back into the collection tube, and the column was centrifuged at 14,000 × g and 4 °C for 2 min. Then, 50 μL of elution buffer E was added, and the column was centrifuged at 6000 rpm and 4 °C for 2 min, followed by centrifugation at 14,000 × g and 4 °C for 1 min. The RNA concentration was then measured, and the samples were stored at −80 °C.
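The 2^−ΔΔCt method used for the RT‒qPCR analyses throughout the study can be written out in a few lines; the Ct values below are made up for illustration, with β-actin/U6 serving as the reference gene.

```python
# Minimal sketch of the 2^-ΔΔCt relative-quantification calculation;
# all Ct values are hypothetical.

def ddct_relative_expression(ct_target_treated, ct_ref_treated,
                             ct_target_control, ct_ref_control):
    """Return the fold change of the target gene via the 2^-ΔΔCt method."""
    dct_treated = ct_target_treated - ct_ref_treated   # ΔCt of the treated sample
    dct_control = ct_target_control - ct_ref_control   # ΔCt of the control sample
    ddct = dct_treated - dct_control                   # ΔΔCt
    return 2 ** (-ddct)

# Example: target amplifies 2 cycles earlier relative to the reference
# in the treated sample than in the control.
fold = ddct_relative_expression(22.0, 18.0, 24.0, 18.0)
print(fold)  # 4.0 — a 4-fold higher relative expression in the treated sample
```

The method assumes near-100% amplification efficiency for both target and reference; fold changes from biological replicates are then compared with a t test, as described above.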
CircRNA expression profile in the longissimus dorsi of ovariectomized and sham-operated small-tailed Han sheep

A total of 1,026,294,838 raw reads were derived from the 10 muscle sequencing libraries of the OR-STH and STH groups, and 932,636,656 clean reads, accounting for 90.87% of the raw data, were obtained after quality control (Table ). The Q30 values were greater than 91.14%, indicating good sequencing quality, and the average unique mapping rate to the sheep genome was 83.81% (range 78.61–85.69%). Following the removal of ribosomal RNA (rRNA), 8,721 potential candidate circRNAs were identified (Supplementary Table ). To identify circRNAs with potential functions in sheep muscle development, we counted the identified circRNAs: the OR-STH group library contained 2,042 circRNAs, and the STH group library contained 1,972 circRNAs (Fig. A). The major circRNA types identified were exonic (OR_STH: 93.70%; STH: 93.75%), intronic lariat (OR_STH: 4.26%; STH: 4.16%) and intergenic (OR_STH: 2.03%; STH: 2.09%) (Fig. B, ). The circRNAs were generated primarily from exon splicing, with smaller shares of intronic and intergenic origin (Fig. D). The statistics of circRNA density per chromosome indicated that circRNAs were located on chromosomes 1 to 9, with the proportions on chromosomes 1, 2, 3, and 4 being the highest (approximately 42%) (Fig. E). The distribution of genes on the chromosomes is shown in supplementary materials Figure A and S1B.

Differential expression analysis of circRNAs

The number of circRNAs expressed by each individual was calculated and normalized to SRPBM. Differential expression was assessed on the basis of the normalized expression values, with |log2(fold change)| > 1 and P value < 0.05. A total of 118 DE circRNAs (71 upregulated and 47 downregulated) were identified when OR-STH was compared with STH (Fig. A).
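The differential-expression filter described above (|log2FC| > 1 with P < 0.05, then splitting into up- and downregulated sets) can be sketched as follows; the three records are made up for illustration.

```python
# Illustrative DE filter mirroring the thresholds used for circRNAs;
# the log2FC and P values below are invented, not study results.
results = [
    {"id": "circFAM171A1",       "log2fc": -1.8, "p": 0.003},
    {"id": "novel_circ_0002443", "log2fc":  2.1, "p": 0.010},
    {"id": "novel_circ_0009805", "log2fc":  0.4, "p": 0.200},  # fails the FC cut
]

de   = [r for r in results if abs(r["log2fc"]) > 1 and r["p"] < 0.05]
up   = [r["id"] for r in de if r["log2fc"] > 0]   # upregulated in the comparison
down = [r["id"] for r in de if r["log2fc"] < 0]   # downregulated
print(len(up), len(down))  # 1 1
```

Applied to the full normalized expression table, this kind of filter yields the 71 up- and 47 downregulated circRNAs reported above.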
The overall distribution of the differentially expressed circRNAs is shown in the scatter plot in Fig. B, the cluster heatmap in Fig. C, and the box plot and violin plot in supplementary materials Figure C and S1D. To verify the reliability of the RNA-seq results, we randomly selected eight differentially expressed circRNAs and designed specific RT‒qPCR primers spanning the back-splice junction region of each circRNA. The DE circRNA expression levels measured by RT‒qPCR and RNA‒seq showed the same trends (Fig. D), indicating that the RNA-seq data acquisition and processing in this study are reliable. In addition, to provide more information about the functions of circRNAs in muscle growth and development in sheep, we performed GO and KEGG analyses of the host genes (supplementary materials Figure E and S1F).

Analysis of ceRNA regulatory networks (circRNA‒miRNA‒mRNA networks)

Differentially expressed circRNAs (DECs), differentially expressed miRNAs (DEMs), and differentially expressed mRNAs (DEGs) were identified from the sheep longissimus dorsi muscle by analyzing the whole-transcriptome data to generate a ceRNA regulatory network. A total of 41 circRNA‒miRNA pairs and 3,499 miRNA‒mRNA pairs were obtained by filtering for negatively correlated circRNA‒miRNA pairs and miRNA‒mRNA pairs in the OR-STH and STH data ( P < 0.05). Four randomly selected differentially expressed circRNAs with three or more binding sites, together with 14 miRNAs and 90 mRNAs, all of which were differentially expressed, were used to construct the circRNA‒miRNA‒mRNA interaction network (Fig. A). The screened differential genes were analyzed against the host genes using the STRING database. We selected protein pairs with scores greater than 700 and constructed protein–protein interaction (PPI) networks of the host genes using Cytoscape (Fig. B).
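Selecting STRING pairs with a combined score above 700, as described above, is a simple threshold filter on STRING's 0–1000 confidence scale. A minimal sketch with invented scores:

```python
# Filtering STRING interaction pairs by combined score > 700 before
# visualization in Cytoscape; the pairs and scores are illustrative.
string_pairs = [
    ("MAPK15", "MAPKBP1", 812),
    ("GIPC1",  "PRPSAP1", 455),   # below the high-confidence cutoff
    ("BRPF1",  "MAPK15",  733),
]

ppi_edges = [(a, b) for a, b, score in string_pairs if score > 700]
print(len(ppi_edges))  # 2
```

A cutoff of 700 corresponds to STRING's "high confidence" tier, which keeps the network to well-supported interactions at the cost of recall.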
Six sets of differential circRNA‒miRNA‒mRNA pairs were subsequently chosen for RT‒qPCR validation: circFAM171A1-oar-miR-485-5p-MAPK15, novel_circ_0002443-novel_329-GIPC1, novel_circ_0009805-novel_141-PRPSAP1, novel_circ_0006391-novel_210-BRPF1, novel_circ_0015927-novel_563-MAPKBP1. RT‒qPCR confirmed that the changes in circRNA, miRNA and mRNA expression levels in sheep muscle tissues were consistent with the RNA-seq data and negatively correlated ( P < 0.05) (Fig. C).

Estrogen induced circFAM171A1 production in sheep myoblasts

RNA-seq and RT‒qPCR revealed that the abundance of circFAM171A1 in the STH group was clearly greater than that in the OR-STH group, suggesting that the difference in circFAM171A1 expression may result from estrogen induction. To test this, we added different concentrations of estrogen to isolated myoblasts in vitro and measured circFAM171A1 expression. The level of circFAM171A1 increased significantly in all groups after the addition of estrogen, and at a concentration of 10 nM, the expression level of circFAM171A1 was significantly greater than that in the other groups (Fig. ). This result suggests that circFAM171A1 expression in sheep myoblasts is indeed regulated by estrogen.

Identification of circFAM171A1 as a candidate circRNA

Previous research has demonstrated that circRNAs can relieve the negative effect of miRNAs on the expression of their target genes . CircFAM171A1 is 159 bp long and originates from exon 5 of the gene encoding the protein FAM171A1. The back-splice junction of circFAM171A1 was verified via Sanger sequencing (Fig. A). The expression level of circFAM171A1 was not markedly reduced in sheep myoblasts after RNase R treatment, whereas the levels of linear FAM171A1 and GAPDH mRNA were reduced (Fig. B).
Next, we examined the expression of circFAM171A1 in muscle tissues from ovariectomized and sham-operated small-tailed Han sheep via RT‒qPCR. circFAM171A1 was expressed in the muscle tissues of both the OR-STH and STH groups, and its expression level was significantly greater in the STH group than in the OR-STH group, consistent with our RNA-seq data (Fig. C). Subsequent RNA nucleoplasmic separation assays revealed that circFAM171A1 was localized predominantly in the cytoplasm (more than 80%) and to a lesser extent in the nucleus (Fig. D). RNA fluorescence in situ hybridization experiments further confirmed these findings (Figure ). Immunofluorescence staining revealed that the myoblast marker MYOD1 was expressed predominantly in the nucleus, whereas Desmin was expressed predominantly in the cytoplasm (Fig. E). These results indicate that the circularized form of circFAM171A1 is more stable than the linear transcript and that circFAM171A1 functions primarily in the cytoplasm.

Effects of circFAM171A1 on the proliferation of sheep myoblasts

To assess the influence of circFAM171A1 on muscle development in sheep, we constructed overexpression and interference plasmids for circFAM171A1 and transfected them into sheep primary myoblasts for 48 h. Figure A and 6C show that the circFAM171A1 overexpression and interference plasmids were effective. The RT‒qPCR results revealed that the expression levels of CDK2, Pax7 and PCNA, markers of cell proliferation, were significantly increased in sheep myoblasts after overexpression of circFAM171A1, whereas the opposite was observed after inhibition of its expression (Fig. B, D). The Western blot results were consistent with the RT‒qPCR results (Fig. E, F).
EdU staining also revealed that circFAM171A1 overexpression significantly increased the number of EdU-positive cells, whereas the opposite effect was observed after circFAM171A1 expression was inhibited (Fig. G, H). The CCK-8 results revealed that overexpression of circFAM171A1 significantly increased the viability of sheep myoblasts, whereas inhibition of circFAM171A1 expression had the opposite effect (Fig. I, J). These findings indicate that circFAM171A1 promotes the proliferation of sheep myoblasts.

CircFAM171A1 acts as a sponge for oar-miR-485-5p to regulate myoblast proliferation

Cellular localization of circFAM171A1 by nucleoplasmic separation and FISH indicated that circFAM171A1 is located mainly in the cytoplasm. Cytoplasmic circRNAs function mainly by interacting with miRNAs . Therefore, we hypothesized that circFAM171A1 might act as a miRNA sponge. To verify that circFAM171A1 is a ceRNA targeting miRNAs, we selected miRNAs involved in the development of sheep myoblasts from the transcriptomic data and examined the potential binding relationship between circFAM171A1 and these miRNAs via RNA hybridization analysis. We found that circFAM171A1 might bind oar-miR-485-5p, with the binding information shown in Fig. A. RT‒qPCR revealed that overexpression of circFAM171A1 decreased the expression of oar-miR-485-5p, whereas knockdown of circFAM171A1 increased the expression of oar-miR-485-5p (Fig. B, ). Next, we constructed plasmids for luciferase reporter assays to validate the binding of circFAM171A1 to oar-miR-485-5p. Dual-luciferase reporter assays showed that oar-miR-485-5p significantly inhibited the Rluc expression of pCK-circFAM171A1-WT in HEK293T cells but had no influence on pCK-circFAM171A1-MUT (Fig. D, ). We then performed RNA pull-down experiments, and the results revealed that circFAM171A1 was significantly enriched compared with the negative control (NC) (Fig. F).
We further confirmed that circFAM171A1 can bind oar-miR-485-5p to regulate myoblast proliferation. RT‒qPCR revealed that the mRNA levels of CDK2, Pax7, and PCNA were significantly decreased after transfection of the oar-miR-485-5p mimics, whereas the opposite was observed after transfection of the inhibitors (Fig. G–J). The Western blotting results were consistent with the RT‒qPCR results (Fig. K, ). In addition, CCK-8 and EdU assays revealed similar changes (Fig. M–P). These findings demonstrate that circFAM171A1 can act as a sponge for oar-miR-485-5p and confirm that oar-miR-485-5p inhibits the proliferation of sheep myoblasts.

CircFAM171A1 impairs the inhibitory effect of oar-miR-485-5p on MAPK15 expression

To clarify the circRNA‒miRNA‒mRNA ceRNA mechanism, we investigated the oar-miR-485-5p target gene MAPK15. We examined the expression level of MAPK15 in sheep muscle. Western blotting revealed a statistically significant difference in MAPK15 protein expression between the OR-STH and STH groups (Fig. A). RT‒qPCR showed that the expression of MAPK15 increased significantly after overexpression of circFAM171A1 and decreased after circFAM171A1 inhibition (Fig. B). The expression of MAPK15 was significantly reduced after oar-miR-485-5p overexpression, whereas it increased after treatment with an oar-miR-485-5p inhibitor (Fig. C). To verify the binding of the miRNA to its target gene, we constructed wild-type (WT) and mutant (MUT) psiCHECK2 plasmids containing the 3′UTR of MAPK15. A dual-luciferase activity assay demonstrated that oar-miR-485-5p significantly inhibited the luciferase activity of the wild-type MAPK15 3′UTR plasmid but not that of the mutant MAPK15 3′UTR plasmid in HEK293T cells (Fig. D). We subsequently constructed the MAPK15 overexpression plasmid pIRES2-EGFP-MAPK15 and the interference plasmid si-MAPK15, which were transfected into sheep myoblasts for subsequent validation.
We detected a significant increase in MAPK15 expression after transfection of pIRES2-EGFP-MAPK15 (Fig. E) and a significant decrease in MAPK15 expression after transfection of si-MAPK15. Overexpression or interference of MAPK15 significantly increased or decreased the phosphorylation level of MAPK15, respectively (Fig. F). RT‒qPCR and Western blot analyses revealed that overexpression or inhibition of MAPK15 significantly increased or decreased, respectively, the expression of CDK2, Pax7 and PCNA at both the mRNA and protein levels (Fig. G, H). EdU and CCK-8 assays revealed that overexpression of MAPK15 increased the proliferation rate of myoblasts, whereas interference with MAPK15 inhibited it (Fig. I, J). These findings indicate that MAPK15 acts in concert with circFAM171A1 in sheep myoblasts and that circFAM171A1 functions as a ceRNA. In summary, circFAM171A1 acts as a sponge for oar-miR-485-5p, weakens its inhibitory effect on MAPK15, and promotes the proliferation of sheep myoblasts.

Estrogen regulates sheep myoblast proliferation through the circFAM171A1/oar-miR-485-5p/MAPK15 pathway

To investigate the influence of estrogen on the proliferation of sheep myoblasts, we examined myoblast proliferation after the addition of 10 nM estradiol. RT‒qPCR and Western blotting revealed that the phosphorylation of MAPK15 and the expression of PCNA and CDK2 were significantly increased in the estradiol-treated group (Fig. A–D), and the proliferation rate of sheep myoblasts treated with 10 nM estradiol was significantly increased, as indicated by the CCK-8 and EdU assays (Fig. E–G). To verify the pathway by which estrogen regulates myoblast proliferation, we analyzed the expression levels of oar-miR-485-5p and MAPK15 in sheep myoblasts after the addition of estrogen.
The findings indicated that the expression level of oar-miR-485-5p significantly decreased, whereas the protein and mRNA levels of MAPK15 significantly increased (P < 0.05) (Fig. A–C). These findings indicate that estrogen can facilitate the proliferation of sheep myoblasts through the circFAM171A1/oar-miR-485-5p/MAPK15 pathway (Fig. ).

longissimus dorsi of ovariectomized and sham-operated small-tailed Han sheep

A total of 1,026,294,838 raw reads were derived from 10 muscle sequencing libraries from the OR-STH and STH groups, and 932,636,656 clean reads, accounting for 90.87% of the raw data, were obtained after quality control (Table ). In addition, the Q30 values were greater than 91.14%, indicating good sequencing quality, and the average rate of reads mapping uniquely to the sheep genome was 83.81% (range 78.61–85.69%). Following the removal of ribosomal RNA (rRNA), 8,721 potential candidate circRNAs were identified (Supplementary Table ). To identify circRNAs with potential functions in sheep muscle development, we counted the identified circRNAs: the OR-STH group library contained 2,042 circRNAs and the STH group library contained 1,972 circRNAs (Fig. A). The major circRNA types identified in the study were exonic (OR-STH: 93.70%; STH: 93.75%), lariat/intronic (OR-STH: 4.26%; STH: 4.16%) and intergenic (OR-STH: 2.03%; STH: 2.09%) (Fig. B, ). The circRNAs were generated primarily by back-splicing of exons, with smaller shares arising from intronic and intergenic regions (Fig. D). The per-chromosome circRNA density statistics indicated that circRNAs were located on chromosomes 1 to 9, with the proportions on chromosomes 1, 2, 3, and 4 being the highest (approximately 42% in total) (Fig. E). The distribution of genes on chromosomes is shown in supplementary materials Figures S1A and S1B. The number of circRNAs expressed by each individual was calculated and standardized to SRPBM.
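The SRPBM normalization just mentioned (assumed here to mean spliced reads per billion mapped reads, the usual circRNA quantification) and the |log2(fold change)| > 1, P < 0.05 cutoffs used for the differential-expression calls can be sketched as follows, with illustrative numbers rather than values from the study:

```python
import math

def srpbm(junction_reads: int, mapped_reads: int) -> float:
    """Spliced reads per billion mapped reads: scales back-spliced
    junction counts by sequencing depth."""
    return junction_reads / mapped_reads * 1e9

def is_de(srpbm_a: float, srpbm_b: float, p_value: float) -> bool:
    """Differential-expression call with |log2 fold change| > 1 and P < 0.05."""
    log2_fc = math.log2(srpbm_a / srpbm_b)
    return abs(log2_fc) > 1 and p_value < 0.05

# 25 back-spliced junction reads in a 93-million-read library
print(round(srpbm(25, 93_000_000), 1))  # 268.8 SRPBM
print(is_de(300.0, 120.0, 0.01))        # log2(2.5) ≈ 1.32 -> True
```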
CircRNA expression was normalized to SRPBM, and differentially expressed circRNAs were defined on the basis of the normalized expression as those with |log2(fold change)| > 1 and P < 0.05. A total of 118 DE circRNAs (71 upregulated and 47 downregulated) were identified when OR-STH was compared with STH (Fig. A). The overall distribution of the differentially expressed circRNAs is shown in the scatter plot in Fig. B, the cluster heatmap in Fig. C, and the box plot and violin plot in supplementary materials Figures S1C and S1D. To verify the reliability of the RNA-seq results, we randomly selected eight differentially expressed circRNAs and designed specific RT‒qPCR primers spanning each circRNA's back-splice junction. The expression levels of the DE circRNAs measured by RT‒qPCR and RNA-seq showed the same trend (Fig. D), indicating that the RNA-seq data acquisition and processing in this study are reliable. In addition, to provide more information about the functions of circRNAs in muscle growth and development in sheep, we performed GO and KEGG analyses of the host genes (supplementary materials Figures S1E and S1F). Differentially expressed circRNAs (DECs), differentially expressed miRNAs (DEMs), and differentially expressed mRNAs (DEGs) identified from sheep longissimus dorsi muscle whole-transcriptome data were combined to generate a ceRNA regulatory network. A total of 41 circRNA‒miRNA pairs and 3,499 miRNA‒mRNA pairs were obtained by filtering for negatively correlated circRNA‒miRNA and miRNA‒mRNA pairs in the OR-STH and STH data (P < 0.05). Four randomly selected differentially expressed circRNAs with three or more miRNA binding sites, together with 14 differentially expressed miRNAs and 90 differentially expressed mRNAs, were used to construct a circRNA‒miRNA‒mRNA interaction network (Fig. A). The host genes of the differentially expressed circRNAs were then analyzed using the STRING database.
We selected protein pairs with STRING interaction scores > 700 and constructed protein‒protein interaction (PPI) networks of the host genes using Cytoscape (Fig. B). Six sets of differential circRNA‒miRNA‒mRNA pairs were subsequently chosen for RT‒qPCR validation: circFAM171A1-oar-miR-485-5p-MAPK15, novel_circ_0002443-novel_329-GIPC1, novel_circ_0009805-novel_141-PRPSAP1, novel_circ_0006391-novel_210-BRPF1, and novel_circ_0015927-novel_563-MAPKBP1. RT‒qPCR confirmed that the changes in circRNA, miRNA and mRNA expression levels in sheep muscle tissues were consistent with the RNA-seq data and that circRNA and mRNA expression was negatively correlated with miRNA expression (P < 0.05) (Fig. C). RNA-seq and RT‒qPCR revealed that the abundance of circFAM171A1 in the STH group was clearly greater than that in the OR-STH group, suggesting that the difference in circFAM171A1 expression may be a result of estrogen induction. To test this, we added different concentrations of estrogen to isolated myoblasts in vitro and measured circFAM171A1 expression. The level of circFAM171A1 increased significantly in all groups after the addition of estrogen, and at a concentration of 10 nM the expression level of circFAM171A1 was significantly greater than that in the other groups (Fig. ). This result suggests that circFAM171A1 expression in sheep myoblasts is indeed affected by estrogen. Previous research has demonstrated that circRNAs can relieve the negative regulation of target genes by miRNAs. CircFAM171A1 is 159 bp long and originates from exon 5 of the gene encoding the protein FAM171A1. The back-splice junction of circFAM171A1 was verified via Sanger sequencing (Fig. A). The level of circFAM171A1 expression was not markedly reduced in sheep myoblasts after RNase R treatment, whereas the levels of linear FAM171A1 and GAPDH mRNA were reduced (Fig. B).
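The relative expression levels reported from RT‒qPCR throughout are conventionally computed with the 2^-ΔΔCt method against a reference gene such as GAPDH. The study does not spell out its formula, so the sketch below is the standard calculation with made-up Ct values:

```python
def fold_change_ddct(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-ddCt method: normalize the target Ct
    to the reference gene in each condition, then compare conditions."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Target amplifies one cycle earlier in treated cells (reference unchanged):
print(fold_change_ddct(24.0, 18.0, 25.0, 18.0))  # 2.0 -> twofold upregulation
```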
Next, we examined the expression of circFAM171A1 via RT‒qPCR in muscle tissues from ovariectomized and sham-operated small-tailed Han sheep. circFAM171A1 was expressed in muscle tissues from both the OR-STH and STH groups, and its expression level was significantly greater in the STH group than in the OR-STH group, consistent with our RNA-seq data (Fig. C). Subsequent RNA nucleoplasmic separation assays revealed that circFAM171A1 was localized predominantly in the cytoplasm (more than 80%) and to a lesser extent in the nucleus (Fig. D). RNA fluorescence in situ hybridization experiments further confirmed these findings (Figure ). Immunofluorescence staining revealed that the myoblast marker MYOD1 was expressed predominantly in the nucleus, whereas Desmin was expressed predominantly in the cytoplasm (Fig. E). These results indicate that the circularized form of circFAM171A1 is more stable than the linear transcript and that circFAM171A1 functions primarily in the cytoplasm. To validate the influence of circFAM171A1 on muscle development in sheep, we constructed overexpression and interference plasmids for circFAM171A1 and transfected them into sheep primary myoblasts for 48 h. Figure A and C show that the circFAM171A1 overexpression and interference plasmids were effective. The RT‒qPCR results revealed that the expression levels of the cell proliferation markers CDK2, Pax7 and PCNA were significantly increased in sheep myoblasts after overexpression of circFAM171A1, whereas the opposite was observed after inhibition of circFAM171A1 (Fig. B, D). The Western blot results were consistent with the RT‒qPCR results (Fig. E, F). EdU staining also revealed that circFAM171A1 overexpression significantly increased the number of EdU-positive cells, whereas the opposite effect was observed after circFAM171A1 expression was inhibited (Fig. G, H).
The CCK-8 results revealed that overexpression of circFAM171A1 significantly increased the viability of sheep myoblasts, whereas inhibition of circFAM171A1 expression had the opposite effect (Fig. I, J). These findings indicate that circFAM171A1 promotes the proliferation of sheep myoblasts. Cellular localization of circFAM171A1 via nucleoplasmic separation and FISH probes indicated that circFAM171A1 is located mainly in the cytoplasm. When a circRNA is located in the cytoplasm, it functions mainly by interacting with miRNAs. Therefore, we hypothesized that circFAM171A1 might act as a miRNA sponge. To verify that circFAM171A1 is a ceRNA targeting miRNAs, we selected miRNAs that participate in the development of sheep myoblasts from the transcriptomic data and examined the binding relationship between circFAM171A1 and these miRNAs. We discovered that circFAM171A1 might bind to oar-miR-485-5p, with the binding information shown in Fig. A. RT‒qPCR revealed that overexpression of circFAM171A1 decreased the expression of oar-miR-485-5p, whereas disruption of circFAM171A1 increased the expression of oar-miR-485-5p (Fig. B, ). Next, we constructed plasmids for luciferase reporter assays to validate the binding of circFAM171A1 to oar-miR-485-5p. Dual-luciferase reporter assays showed that oar-miR-485-5p significantly inhibited the Rluc expression of pCK-circFAM171A1-WT in HEK293T cells but had no influence on pCK-circFAM171A1-MUT (Fig. D, ). We then performed RNA pull-down experiments, and the results revealed that circFAM171A1 was significantly enriched compared with the negative control (NC) (Fig. F). The important role of estrogen in female mammalian reproduction is well known.
However, its functions are not limited to the reproductive system; estrogen also plays important, and often overlooked, roles in other physiological systems, including the cardiovascular system, skeletal muscle and neural networks. In living organisms, the estrogen receptors ERα and ERβ and the G protein-coupled estrogen receptor (GPER) play key roles; the coordinated action of these three receptors ensures the effective action of estrogens on target tissues. Estrogen plays an important role in regulating the growth of bone tissue; however, in women, estrogen deficiency during menopause or after bilateral oophorectomy may lead to the loss of cancellous and cortical bone, which in turn can lead to osteoporosis [ – ]. The mechanisms by which estrogen regulates muscle development remain unclear, which poses a challenge for further research. In this work, we collected longissimus dorsi muscle tissue from small-tailed Han sheep in a sham surgery group (STH) and an ovariectomy group (OR-STH) and extracted RNA for RNA-seq and molecular biology analyses. We identified 11,297 circRNAs in the comparison between the STH and OR-STH groups, 118 of which were differentially expressed. In a follow-up analysis, the host genes of the differentially expressed circRNAs were assessed for GO term and KEGG pathway enrichment. This analysis revealed multiple important pathways involved in muscle growth, development and degradation, such as the AMPK signaling pathway, ECM‒receptor interactions, the ErbB signaling pathway, ubiquitin-mediated proteolysis and the mTOR signaling pathway. These pathways play key roles in muscle development and are important for muscle growth and functional maintenance [ – ]. As key regulators of gene expression, circRNAs play important roles in animal growth and development. Our study revealed that changes in circRNA abundance may be associated with ovariectomy.
Further studies on the role of circRNAs in skeletal muscle growth and development will help to elucidate the relevant physiological mechanisms and provide useful references for clinical practice. In recent years, with the completion of the assembly and annotation of sheep genomes, a wealth of reference data has become available for sheep transcriptome analysis. As functionally conserved molecules, circRNAs have been shown to be important for growth and development in mammals such as humans and mice [ – ]. However, few studies on functional circRNAs in sheep have been reported. In this study, circRNA sequencing revealed that estrogen induced the production of a high abundance of circFAM171A1, and we subsequently confirmed this result by adding different concentrations of estradiol in vitro. We then used cell transfection to investigate the role of circFAM171A1 in the proliferation of sheep myoblasts and found that circFAM171A1 significantly promoted myoblast proliferation. These findings reveal the important role of circFAM171A1 in sheep muscle development and provide a basis for further research on the function of circRNAs in mammalian muscle biology. Increasingly, circRNAs have been found to function as miRNA sponges in the cytoplasm. This is exemplified by circRNA-UBE2G1, a circRNA found mainly in the cytoplasm that acts as a sponge for miR-373 to modulate chondrocyte damage after lipopolysaccharide (LPS) treatment. Similarly, circFGFR2 is located in the cytoplasm and can serve as a molecular sponge for miR-133a-5p and miR-29b-1-5p to promote myoblast proliferation and differentiation. In this study, we confirmed through nucleoplasmic separation and fluorescence in situ hybridization (FISH) experiments that circFAM171A1 is distributed mostly in the cytoplasm.
Transcriptome integration analysis revealed that circFAM171A1 functions as a ceRNA and mediates the expression of oar-miR-485-5p and MAPK15. Notably, miR-485-5p has been shown to be closely associated with the prevention and treatment of kidney and ovarian cancer. Furthermore, there is substantial evidence that miR-485-5p can function by binding to circRNAs. For example, circRUNX1 elevates SLC38A1 by sponging miR-485-5p to promote colorectal cancer cell growth, metastasis, and glutamine metabolism. CircFOXK2 can bind to miR-485-5p and activate PD-L1, thereby accelerating the development of non-small cell lung cancer (NSCLC). Circ_0008529 modulates high glucose (HG)-induced apoptosis and inflammatory injury in human kidney cells (HK-2) by targeting the miR-485-5p/WNT2B pathway, indicating that circ_0008529 plays a key role in the development of diabetic nephropathy (DN). In our study, a dual-luciferase reporter assay confirmed that circFAM171A1 is able to bind oar-miR-485-5p. A functional study further revealed that oar-miR-485-5p could inhibit the proliferation of sheep primary myoblasts, whereas overexpression of circFAM171A1 attenuated or even reversed this inhibitory effect. These results indicate that the effect of oar-miR-485-5p is opposite to that of circFAM171A1, suggesting that circFAM171A1 acts as a sponge for oar-miR-485-5p to regulate the proliferation of sheep myoblasts. These findings provide an important basis for further exploration of the functions of circRNAs in biology and medicine. The miRNA sponge effect works best when circRNA and miRNA concentrations are comparable. However, its effectiveness does not depend solely on the fold change in circRNA expression but on the relative concentrations of, and binding affinity between, the circRNA and the miRNA. Even if circRNA expression changes little, the sponge effect may still occur if the miRNA concentration is correspondingly low.
To more accurately evaluate the interaction between circFAM171A1 and oar-miR-485-5p, we validated this interaction through RNA pull-down experiments, an approach that provides direct evidence of whether binding between a circRNA and a miRNA occurs and of the strength of that binding. In summary, although the change in circFAM171A1 expression is modest, this does not rule out a miRNA sponge effect. Taking into account the concentrations, affinity and biological context of the circRNA and miRNA, we experimentally confirmed the role of circFAM171A1 as an oar-miR-485-5p sponge. This interaction may have important effects on the regulation of miRNA activity and gene expression in cells and is worthy of further study. The discovery of atypical members of the MAP kinase family, such as MAPK15 (also known as ERK7/ERK8), has provided a new perspective on this family of serine/threonine kinases, which are widely found in eukaryotes and play key roles in cell growth, differentiation, apoptosis and other biological processes. MAPK8 and MAPK11 are also important members of the MAP kinase family. circACTA1 can act as a sponge for miR-199a-5p and miR-433, thereby relieving their inhibition of the target genes MAP3K11 and MAPK8; circACTA1 further affects the biological behavior of bovine primary myoblasts by activating the MAP3K11/MAP2K7/JNK signaling pathway and plays an important regulatory role in cell proliferation, apoptosis and differentiation. Moreover, miR-138 prevents anoxia-induced apoptosis in cardiomyocytes through the MAP3K11/JNK/c-Jun pathway. Recent studies have shown that MAPK15 affects BCR-ABL1-induced autophagy and modulates oncogene-dependent cell proliferation and tumor formation. Building on these findings, we hypothesized that circFAM171A1 modulates cell growth through the oar-miR-485-5p/MAPK15 signaling pathway.
In this study, overexpression of circFAM171A1 significantly increased the expression of MAPK15, whereas interference with circFAM171A1 had the opposite effect. Transfection of oar-miR-485-5p mimics significantly decreased MAPK15 expression, whereas treatment with oar-miR-485-5p inhibitors had the opposite effect. These results suggest that circFAM171A1 can regulate sheep myoblast proliferation via the oar-miR-485-5p/MAPK15 pathway. Estrogen deficiency has been found to induce hormonal and metabolic disruption in women after menopause, resulting in osteoporosis, metabolic syndrome, and loss of muscle force and mass. Studies have shown that estrogen therapy reduces hepatic lipid accumulation by increasing hepatic aquaporin 7 (AQP7) expression in an ovariectomized (OR) mouse model and a steatosis cell model. Seko et al. confirmed that estrogen modulates muscle growth and regeneration in young adult female mice via ERβ in skeletal muscle-specific stem cells. We found that the addition of estradiol to sheep myoblasts promoted their proliferation and that the expression level of oar-miR-485-5p decreased, whereas that of MAPK15 increased. These results suggest that appropriate levels of estrogen stimulate the growth and proliferation of sheep myoblasts through the circFAM171A1/oar-miR-485-5p/MAPK15 pathway. In this study, we found that estrogen induced circFAM171A1 expression in sheep myoblasts. Transcriptome integration analysis revealed that circFAM171A1 can act as a ceRNA to regulate the expression of oar-miR-485-5p and MAPK15 in sheep myoblasts, and the in vitro addition of estrogen promoted myoblast proliferation through the circFAM171A1/oar-miR-485-5p/MAPK15 pathway. These findings offer new perspectives for understanding estrogen regulation of myoblast proliferation in female animals and provide insights into the molecular processes by which estrogen affects muscle growth and development.
Below is the link to the electronic supplementary material. Supplementary file1 (JPG 1888 KB) Supplementary file2 (JPG 968 KB) Supplementary file3 (XLS 587 KB) Supplementary file4 (DOCX 630 KB) Supplementary file5 (XLSX 1320 KB)
Description of Opioid-involved Hospital Deaths that Do Not Have a Subsequent Autopsy
The ongoing opioid epidemic has caused immeasurable loss of human life and has adversely impacted families and communities in the United States. Opioid-use disorders are associated with diminished health status, high economic and medical costs, increased disability, and early death. In 2019, an estimated 0.8% of the US population met the criteria for opioid-use disorders, with 2%–3% of the population meeting the criteria during their lifetime. Over the past two decades among persons 12 years or older in the United States, the percent misusing and meeting the criteria for opioid-use disorders relating to prescription opioids has slightly declined, while lifetime and recent use of heroin has slightly increased. Despite these modest changes in misuse and prevalence of opioid-use disorders, the rate of opioid-involved deaths has increased sevenfold since 2000 from 2.8 to 21 per 100,000 persons in 2020. While there are multiple reasons for the increase in mortality, it has principally been attributed to the introduction of fentanyl, its analogs, and other adulterants in illegally obtained opioids. Furthermore, while only an estimated 8% of persons misusing opioids use heroin and fentanyl analogs, studies show that fentanyl is found in 50%–70% of fatal opioid overdoses. – The opioid overdose toxidrome is characterized by a constellation of cardiovascular, respiratory, immunologic, and gastrointestinal signs. – Comprehensive surveillance data of nonfatal and fatal opioid-involved overdoses is crucial for designing interventions, informing public health planning, and guiding research outcomes. , In the United States, surveillance data primarily relies on death records to monitor the opioid epidemic despite research showing that death records are likely undercounting opioid-involved deaths by 20% to over 40%. – However, past studies only provide an estimate of the overall undercount, and do not provide detailed information about who is more likely to be undercounted. 
Autopsies are essential for determining and documenting detailed cause of death. However, US Centers for Disease Control and Prevention (CDC) data demonstrates that no state requires an autopsy following a death related to drug use, abuse, and overdose. The majority of persons who die from an acute injury (all S and T ICD-10 codes) do not have an autopsy, and national data indicates that autopsy rates have remained relatively steady since 2005. Furthermore, prior studies demonstrate that the completeness of documenting multiple causes of death on death certificates, particularly identifying the drugs involved in a death, is lower among coroners compared to medical examiners or pathologists and varies across jurisdictions. , , , Given the large proportion of deaths that do not have an autopsy and the varying resources across jurisdictions responsible for completing death certificates, it is possible that a large number of opioid-involved deaths are missed entirely by death certificates. Since the current US surveillance system relies on death records to monitor the most severe outcome of opioid-use disorders, we need to ensure that opioid-involved deaths are captured as fully as possible and provide detailed information about those who die and the specific drugs contributing to their deaths. However, there continues to be a lack of reported findings that have linked other data systems with death records to determine how many overdose deaths are missed entirely. The current study links hospital data with medical examiner data to (1) describe characteristics of decedents from opioid-involved overdoses in Cook County hospitals that do not have an autopsy, and (2) evaluate the proportion of opioid-involved deaths not captured by the county medical examiner that may contribute to a cumulative undercount reported by the CDC.
Design and study participants

We conducted a data linkage of hospital and medical examiner data of all opioid-involved deaths occurring among residents of Cook County who died in Cook County (county location for the city of Chicago) from 2016 to 2019. A claim of exemption was approved for this project by the BLINDED IRB (#2020-0753).

Outpatient and Inpatient Hospital Data

The data covers the period of January 1, 2016, through December 31, 2019 (2020 data were not available at the time of initiating this analysis). The hospital data are based on billing records compiled by the Illinois Hospital Association. The outpatient database includes all patients treated in emergency departments (ED) for less than 24 hours who were not admitted as an inpatient to the hospital. The outpatient data only includes patients seeking emergent care in the ED or those with referrals to clinics that require registration in the ED (e.g., orthopedic ambulatory surgery and gastrointestinal ambulatory procedures). The inpatient database includes all patients treated for 24 hours or more in Illinois hospitals for any medical reason. Both datasets include information on patient demographics (age, race, gender), clinical outcomes (diagnoses, hospital procedures, and discharge status), and economic outcomes (hospital charges and payer source). Based on the annual state audit of hospitals, 97% of all inpatient admissions statewide are captured by the participating hospitals included in the dataset. ,

Cook County Medical Examiner Data

We obtained Cook County Medical Examiner (CCME) data for the period January 1, 2016, through December 31, 2019 through a public online open data portal. A forensic pathologist conducts each investigation of deaths occurring under the jurisdiction of Cook County that includes suspected drug overdoses.
The CCME dataset includes information on the date and time of death, demographics of the decedent including race and ethnicity, decedent residence, cause and manner of death, contributing causes, and incident location including geolocation. In suspected drug overdoses, a toxicologic analysis is conducted that typically uses peripheral blood; in a very small proportion of cases, tissue from the liver, kidney and muscle is analyzed. Toxicologic analysis for biologically active agents involves initial screening followed by confirmatory and quantification analyses. Initial screening is conducted using enzyme-linked immunosorbent assay (ELISA) techniques, which provide an initial rapid screen for drugs. Positive ELISA toxicologic analyses are then verified through (1) gas chromatography–mass spectrometry (GC-MS) or (2) liquid chromatography–mass spectrometry (LC-MS). These secondary tests are used to identify the specific agents and their respective concentrations. When multiple agents are identified in the bioassay, all the agents are individually identified. Limits of detection vary for each agent analyzed.

Inclusion Criteria

Based on the Council of State and Territorial Epidemiologists (CSTE) nonfatal opioid overdose standardized surveillance case definition, we only included hospital deaths with an initial encounter ICD-10 diagnosis code indicating intoxication (F11.12, F11.22, F11.92) or poisoning (T40.0-T40.4, T40.6; excluding adverse effects and underdosing of prescription opioids, eAppendix A http://links.lww.com/EDE/B964 ). In addition, among the hospital cases that met the modified CSTE case definition, we only included cases that also had clinical signs associated with the constellation of opioid overdose prior to death – and/or had an autopsy confirming that the death was opioid-related. Of the deaths that met the initial CSTE case definition, 51 did not have any clinical signs associated with opioid overdose and were excluded.
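The code-based screen described above can be sketched as a small classifier. This is a simplified illustration, not the authors' implementation: it assumes dotted ICD-10-CM codes and uses the ICD-10-CM convention that a fifth/sixth character of 5 (adverse effect) or 6 (underdosing) in T40 codes is excluded:

```python
import re

# Intoxication codes named in the case definition
INTOXICATION = {"F11.12", "F11.22", "F11.92"}
# T40.0-T40.4 and T40.6 poisoning categories
POISONING = re.compile(r"^T40\.[0-46]")
# Adverse-effect (5) or underdosing (6) character excludes the code
ADVERSE_OR_UNDERDOSING = re.compile(r"^T40\.\d[X\d]?[56]")

def meets_case_definition(icd10_code: str) -> bool:
    code = icd10_code.upper()
    if code in INTOXICATION:
        return True
    return bool(POISONING.match(code)) and not ADVERSE_OR_UNDERDOSING.match(code)

print(meets_case_definition("T40.1X1A"))  # heroin poisoning, accidental -> True
print(meets_case_definition("T40.2X5A"))  # adverse effect of opioid -> False
print(meets_case_definition("F11.22"))    # opioid dependence with intoxication -> True
```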
To identify related deaths in the CCME data, we only included deaths in which opioids were identified as the primary cause of death by the pathologist (n = 34 deaths testing positive for opioids were excluded because opioid exposure was listed as a secondary cause).

Probabilistic Linkage

Unique identifiers were not available in either dataset. We used probabilistic data linkage to link suspected opioid-involved deaths from the hospital data to the ME data, which was accomplished in multiple passes. The initial pass identified records that matched exactly on the following variables: residential ZIP code, race, Hispanic/Latino ethnicity, birth year, gender, date of admission, date of death, and concurrent exposure to opioids and other agents (heroin/fentanyl, methadone, all other/unspecified opioids, ethanol, cocaine, amphetamine, and benzodiazepines). In subsequent passes, we used different combinations of the linkage variables (eAppendix B http://links.lww.com/EDE/B964 ).

Outcome

The main outcome evaluated was whether decedents dying from opioid-involved overdoses in a hospital setting were sent to the medical examiner for an autopsy; these were the hospital deaths that linked to the CCME data.

Statistical Analysis

For the descriptive analysis, we present data on diagnoses associated with the opioid toxidrome and on sociodemographic, clinical, and temporal factors of decedents related to occurrence of an autopsy. In addition, our analysis utilized the Elixhauser comorbidity index to assess comorbidities in hospital decedents. We used multivariable logistic regression to assess selected predictors of not having an autopsy among those who died during hospital care.
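The multi-pass linkage strategy described under Probabilistic Linkage above can be sketched as repeated exact matching on progressively relaxed keys. Field names and pass definitions here are illustrative stand-ins; the actual variable combinations are in eAppendix B:

```python
# Each pass builds an index on a key of linkage variables and matches the
# remaining hospital deaths exactly; later passes relax the key.
PASSES = [
    ("zip", "race", "birth_year", "gender", "death_date"),
    ("race", "birth_year", "gender", "death_date"),
    ("birth_year", "gender", "death_date"),
]

def link(hospital_deaths, me_deaths):
    matched, unmatched = [], list(hospital_deaths)
    for key_fields in PASSES:
        index = {tuple(r[f] for f in key_fields): r for r in me_deaths}
        remaining = []
        for rec in unmatched:
            key = tuple(rec[f] for f in key_fields)
            if key in index:
                matched.append((rec, index[key]))
            else:
                remaining.append(rec)
        unmatched = remaining
    return matched, unmatched

# Toy records: the ZIP codes differ, so the pair links on the second pass.
hosp = [{"id": "H1", "zip": "60601", "race": "B", "birth_year": 1980,
         "gender": "M", "death_date": "2017-05-02"}]
me = [{"case": "ME9", "zip": "60602", "race": "B", "birth_year": 1980,
       "gender": "M", "death_date": "2017-05-02"}]
matched, unmatched = link(hosp, me)
print(len(matched), len(unmatched))  # 1 0
```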
We used a priori knowledge informed by existing literature to determine inclusion of covariates in the multivariable analysis and present a model that includes the following predictors: age, race-ethnicity, year of death, whether decedent was admitted to the hospital, length of hospitalization, exposure to heroin and synthetic opioids, exposure to stimulants, and whether the hospital had a level I or II trauma center. We also evaluated individual comorbidities as measured by the Elixhauser comorbidity index and constructed a variable identifying patients with lymphoma, metastatic cancer, and solid tumor without metastasis. Odds ratios in the adjusted models are presented, including the 95% confidence intervals (95% CI). No evidence of multicollinearity was indicated among predictors based on evaluation of change in standard errors, variance of inflation (variance of inflation >10 suggests evidence of multicollinearity) and tolerance tests (tolerance value <0.1 suggests evidence of multicollinearity). We used SAS software for all statistical analyses (SAS v.9.4; Cary, NC). Estimate of Undercount To estimate the potential undercount of opioid-involved deaths, we compared the number of opioid-involved deaths as reported by the CDC in the Multiple Cause of Death (MCOD) data with total unique opioid-involved deaths captured in both the Cook County ME data and the hospital data for Cook County residents. Following the CDC case definition, we included deaths among Cook County residents with the following ICD-10 codes: T40.0-.4 and T40.6. However, we also included F11 ICD codes to be consistent with the CSTE case definition used in our analysis (n = 67 MCOD cases had F11 codes exclusively). We also present CDC data on autopsy rates, place of death, and age distribution. Sensitivity Analysis As a sensitivity analysis, we excluded decedents dying within 3 days of receiving anesthesia to account for deaths associated with medical complications from surgical anesthesia. 
Date of surgery identified through procedure codes was used to identify deaths occurring within 3 days of surgery. We assumed all surgical cases to have been exposed to anesthesia. In addition, even though the hospital deaths met the CSTE case definition, and all presented with clinical signs of the opioid toxidrome, it was plausible that not all cases in the medical records with a mention of opioids should be considered an opioid-involved death. Therefore, an additional sensitivity analysis was conducted, which excluded opioid-involved deaths with codes associated with therapeutic adverse effects, underdosing of prescribed narcotics, or remission (T40.2X5, T40.2X6, T40.4X5, T40.4X6, T40.605, F11.21; this only applies to patients that had more than one related opioid exposure code), patients diagnosed with cancer who only had a diagnosis of opioid-use disorder (F11) but no T40 codes, and patients with only one clinical sign of opioid overdose. – We conducted a data linkage of hospital and medical examiner data of all opioid-involved deaths occurring among residents of Cook County who died in Cook County (county location for the city of Chicago) from 2016 to 2019. A claim of exemption was approved for this project by the BLINDED IRB (#2020-0753). The data covers the period of January 1, 2016, through December 31, 2019 (2020 data were not available at the time of initiating this analysis). The hospital data are based on billing records compiled by the Illinois Hospital Association. The outpatient database includes all patients treated in emergency departments (ED) for less than 24 hours who were not admitted as an inpatient to the hospital. The outpatient data only includes patients seeking emergent care in the ED or those with referrals to clinics that require registration in the ED (e.g., orthopedic ambulatory surgery and gastrointestinal ambulatory procedures). 
The inpatient database includes all patients treated for 24 hours or more in Illinois hospitals for any medical reason. Both datasets include information on patient demographics (age, race, gender), clinical outcomes (diagnoses, hospital procedures, and discharge status), and economic outcomes (hospital charges and payer source). Based on the annual state audit of hospitals, 97% of all inpatient admissions statewide are captured by the participating hospitals included in the dataset. We obtained Cook County Medical Examiner (CCME) data for the period January 1, 2016, through December 31, 2019, through a public online open data portal. A forensic pathologist conducts each investigation of deaths occurring under the jurisdiction of Cook County, which includes suspected drug overdoses. The CCME dataset includes information on the date and time of death; demographics of the decedent, including race and ethnicity; decedent residence; cause and manner of death; contributing causes; and incident location, including geolocation. In suspected drug overdoses, a toxicologic analysis is conducted that typically uses peripheral blood; in a very small proportion of cases, tissue from the liver, kidney, and muscle is analyzed. Toxicologic analysis for biologically active agents involves initial screening followed by confirmatory and quantification analyses. Initial screening is conducted using enzyme-linked immunosorbent assay (ELISA) techniques, which provide an initial rapid screen for drugs. Positive ELISA results are then verified through (1) gas chromatography–mass spectrometry (GC-MS) or (2) liquid chromatography–mass spectrometry (LC-MS). These secondary tests identify the specific agents and their respective concentrations. When multiple agents are identified in the bioassay, each agent is individually identified. Limits of detection vary for each agent analyzed.
Based on the Council of State and Territorial Epidemiologists (CSTE) nonfatal opioid overdose standardized surveillance case definition, we only included hospital deaths with an initial encounter ICD-10 diagnosis code indicating intoxication (F11.12, F11.22, F11.92) or poisoning (T40.0-T40.4, T40.6; excluding adverse effects and underdosing of prescription opioids, eAppendix A; http://links.lww.com/EDE/B964). In addition, among the hospital cases that met the modified CSTE case definition, we only included cases that also had clinical signs associated with the constellation of opioid overdose prior to death and/or had an autopsy confirming that the death was opioid-related. Of the deaths that met the initial CSTE case definition, 51 did not have any clinical signs associated with opioid overdose and were excluded. To identify related deaths in the CCME data, we only included deaths in which opioids were identified as the primary cause of death by the pathologist (n = 34 deaths testing positive for opioids were excluded because opioid exposure was listed as a secondary cause). Unique identifiers were not available in either dataset. We used probabilistic data linkage to link suspected opioid-involved deaths from the hospital data to the ME data, which was accomplished in multiple passes. The initial pass identified records that matched exactly on the following variables: residential ZIP code, race, Hispanic/Latino ethnicity, birth year, gender, date of admission, date of death, and concurrent exposure to opioids and other agents (heroin/fentanyl, methadone, all other/unspecified opioids, ethanol, cocaine, amphetamine, and benzodiazepines). In subsequent passes, we used different combinations of the linkage variables (eAppendix B; http://links.lww.com/EDE/B964).
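The multi-pass matching described above can be sketched as a small deterministic linkage routine. This is a minimal illustration, not the authors' implementation: the actual pass definitions are in eAppendix B, and the field names and relaxed key combinations here are hypothetical.

```python
# Minimal sketch of multi-pass exact-match linkage. Each record is a dict of
# linkage fields; field names and pass definitions are illustrative only.
FULL_KEY = ("zip", "race", "hispanic", "birth_year", "gender",
            "admit_date", "death_date", "agents")

# Later passes relax the key by dropping fields (hypothetical combinations;
# the study's actual combinations are listed in eAppendix B).
PASSES = [
    FULL_KEY,
    ("zip", "race", "birth_year", "gender", "death_date", "agents"),
    ("race", "birth_year", "gender", "death_date"),
]

def link(hospital, medical_examiner):
    """Link hospital deaths to ME records; each record links at most once."""
    matched = {}      # hospital index -> ME index
    used_me = set()
    for key_fields in PASSES:
        # Index the still-unmatched ME records by the current pass key.
        index = {}
        for j, me in enumerate(medical_examiner):
            if j not in used_me:
                index.setdefault(tuple(me[f] for f in key_fields), []).append(j)
        for i, hosp in enumerate(hospital):
            if i in matched:
                continue
            candidates = index.get(tuple(hosp[f] for f in key_fields), [])
            # Accept only unambiguous single-candidate matches in this sketch.
            free = [j for j in candidates if j not in used_me]
            if len(free) == 1:
                matched[i] = free[0]
                used_me.add(free[0])
    return matched
```

Records that fail the strict first pass can still link on a later, looser pass, which mirrors how subsequent passes used different combinations of the linkage variables.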
The main outcome evaluated was whether decedents dying from opioid-involved overdoses in a hospital setting were sent to the medical examiner for an autopsy; these were the hospital deaths that linked to the CCME data. For the descriptive analysis, we present data on diagnoses associated with the opioid toxidrome and on sociodemographic, clinical, and temporal factors of decedents related to occurrence of an autopsy. In addition, our analysis utilized the Elixhauser comorbidity index to assess comorbidities in hospital decedents. We used multivariable logistic regression to assess selected predictors of not having an autopsy among those who died during hospital care. We used a priori knowledge informed by existing literature to determine inclusion of covariates in the multivariable analysis and present a model that includes the following predictors: age, race-ethnicity, year of death, whether the decedent was admitted to the hospital, length of hospitalization, exposure to heroin and synthetic opioids, exposure to stimulants, and whether the hospital had a level I or II trauma center. We also evaluated individual comorbidities as measured by the Elixhauser comorbidity index and constructed a variable identifying patients with lymphoma, metastatic cancer, and solid tumor without metastasis. Odds ratios in the adjusted models are presented with 95% confidence intervals (95% CIs). No evidence of multicollinearity was indicated among predictors based on evaluation of change in standard errors, variance inflation factors (a factor >10 suggests multicollinearity), and tolerance tests (a tolerance value <0.1 suggests multicollinearity). We used SAS software for all statistical analyses (SAS v9.4; Cary, NC).
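The collinearity screen described above can be illustrated for the simplest case of two predictors, where regressing one predictor on the other gives R² equal to the squared Pearson correlation, so VIF = 1/(1 − R²) and tolerance = 1 − R². This is a pure-Python sketch for illustration only; the study used SAS, which reports these diagnostics directly for the general multi-predictor case.

```python
# Two-predictor variance inflation factor and tolerance, using the identity
# R^2 = corr(x1, x2)^2 when one predictor is regressed on the other.
from statistics import mean

def vif_two_predictors(x1, x2):
    mx1, mx2 = mean(x1), mean(x2)
    cov = sum((a - mx1) * (b - mx2) for a, b in zip(x1, x2))
    ss1 = sum((a - mx1) ** 2 for a in x1)
    ss2 = sum((b - mx2) ** 2 for b in x2)
    r2 = cov * cov / (ss1 * ss2)       # squared Pearson correlation
    tolerance = 1.0 - r2               # tolerance < 0.1 flags multicollinearity
    return 1.0 / tolerance, tolerance  # VIF > 10 flags multicollinearity
```

Orthogonal predictors give VIF = 1 and tolerance = 1; nearly collinear predictors push VIF past the 10 threshold and tolerance below 0.1.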
To estimate the potential undercount of opioid-involved deaths, we compared the number of opioid-involved deaths as reported by the CDC in the Multiple Cause of Death (MCOD) data with the total unique opioid-involved deaths captured in both the Cook County ME data and the hospital data for Cook County residents. Following the CDC case definition, we included deaths among Cook County residents with the following ICD-10 codes: T40.0-T40.4 and T40.6. However, we also included F11 ICD codes to be consistent with the CSTE case definition used in our analysis (n = 67 MCOD cases had F11 codes exclusively). We also present CDC data on autopsy rates, place of death, and age distribution. As a sensitivity analysis, we excluded decedents dying within 3 days of receiving anesthesia to account for deaths associated with medical complications from surgical anesthesia. Date of surgery, identified through procedure codes, was used to identify deaths occurring within 3 days of surgery. We assumed all surgical cases to have been exposed to anesthesia. In addition, even though the hospital deaths met the CSTE case definition and all presented with clinical signs of the opioid toxidrome, it was plausible that not all cases in the medical records with a mention of opioids should be considered an opioid-involved death. Therefore, an additional sensitivity analysis was conducted that excluded opioid-involved deaths with codes associated with therapeutic adverse effects, underdosing of prescribed narcotics, or remission (T40.2X5, T40.2X6, T40.4X5, T40.4X6, T40.605, F11.21; this only applies to patients who had more than one related opioid exposure code), patients diagnosed with cancer who only had a diagnosis of opioid-use disorder (F11) but no T40 codes, and patients with only one clinical sign of opioid overdose. From 2016 to 2019, the hospital and CCME datasets identified 4,936 unique opioid-involved deaths among residents of Cook County, IL.
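The 3-day post-anesthesia exclusion used in the first sensitivity analysis above amounts to a simple date filter. The sketch below is hypothetical (record field names are invented for illustration) and assumes, as the analysis did, that every coded surgical procedure involved anesthesia.

```python
# Hedged sketch of the sensitivity exclusion: drop decedents who died within
# 3 days of a surgical procedure. Field names are hypothetical.
from datetime import date

def exclude_postoperative(deaths, window_days=3):
    kept = []
    for rec in deaths:
        surgery = rec.get("surgery_date")  # None when no procedure was coded
        if surgery is not None and 0 <= (rec["death_date"] - surgery).days <= window_days:
            continue  # death within the post-anesthesia window: exclude
        kept.append(rec)
    return kept
```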
During the 4-year period, 1,174 patients died from opioid-involved overdoses in Cook County hospitals. However, only 239 (20%) of these hospital decedents were identified as having an autopsy. Of the hospital cases, heroin was reported in 252 cases along with the following nonopioids commonly associated with substance use disorders: stimulants (n = 233); sedatives, hypnotics, or anxiolytics (n = 64); and ethanol (n = 192). We identified an additional 3,762 unique deaths from opioid-involved overdoses in the CCME data. The constellation of clinical signs and procedures related to opioid overdose is presented in Table ; as a comparison group, the prevalence of each sign or procedure among all deaths occurring in the general hospital population is also presented (i.e., not limited to opioid-involved deaths). Among the decedents without an autopsy, 93% had two or more clinical signs of opioid overdose compared to 79% of decedents with an autopsy. The opioid cases identified using the CSTE case definition had a substantially higher proportion of all clinical signs of opioid overdose compared to all deaths occurring in the general hospital population. Persons without an autopsy had a mean of 4.0 (SD = 1.8) clinical diagnoses related to the opioid overdose constellation compared to 4.1 (SD = 2.4) among those with an autopsy, while all hospital deaths had an average of 2.0 (SD = 1.7) clinical signs. When restricting the clinical signs to only those strongly associated with in-hospital mortality (e.g., respiratory signs, hepatitis, HIV, sepsis, neurologic conditions, and rhabdomyolysis), 71% of those without an autopsy and 71% of those with an autopsy had two or more of these diagnoses, compared to only 20% of all hospital deaths (a group that includes deaths not involving opioids). Characteristics of decedents are presented in Table . Of the 1,174 hospital deaths, 1,104 (94%) were inpatient cases.
Among those who died while in a hospital, persons who did not have an autopsy were disproportionately female, 55 years and older, and Black/African American or of other race-ethnicity (Table ). Furthermore, the proportion of cases receiving an autopsy was lower among homeless individuals (with autopsy vs without autopsy: 1% vs 2%), decedents with Medicare insurance (17% vs 30%), decedents admitted as inpatient cases (83% vs 97%), persons undergoing a surgical intervention (38% vs 49%), and cases treated in a hospital with a level I or II trauma unit (39% vs 52%). Additionally, decedents who did not receive an autopsy had longer lengths of hospitalization (mean ± SD: 8.8 ± 12.3 days vs 3.7 ± 6.4 days). However, those who did not have an autopsy had lower identified exposures to heroin (16% vs 42%) and lower concurrent exposures to stimulants (17% vs 32%); sedatives, hypnotics, and anxiolytics (5% vs 9%); and ethanol (16% vs 19%). The type and number of comorbidities differed between those with and without autopsies after dying in a hospital (Table ). A lower proportion of hospital decedents who subsequently had an autopsy performed were identified as having congestive heart failure, complicated hypertension, complicated diabetes, malignant cancer, renal failure, and coagulopathy (Table ). Table presents the results of the multivariable logistic regression for the main and sensitivity models evaluating selected factors associated with not having an autopsy performed by the medical examiner after dying in a hospital following exposure to opioids.
The main model confirms that decedents with the following characteristics have a higher odds of not receiving an autopsy: over the age of 50 years (50–64 years, aOR = 2.0; 95% CI = 1.4, 2.9; 65+ years, aOR = 3.2; 95% CI = 1.9, 5.5), identifying as other race-ethnicity (aOR = 2.3; 95% CI = 1.3, 4.3), those who died in more recent years (aOR = 1.2; 95% CI = 1.0, 1.4), inpatient cases (aOR = 3.7; 95% CI = 2.1, 6.5), decedents hospitalized for 4+ days (aOR = 2.2; 95% CI = 1.5, 3.1), those treated in hospitals with a level I or II trauma unit (aOR = 1.6; 95% CI = 1.1, 2.2), and decedents with malignant cancer (aOR = 4.3; 95% CI = 1.8, 10.1). However, the model demonstrates that decedents exposed to heroin and the group of synthetic opioids that includes fentanyl (T40.4; aOR = 0.39; 95% CI = 0.28, 0.55), as well as concurrent exposure to stimulants (aOR = 0.44; 95% CI = 0.31, 0.64), had substantially lower odds of not receiving an autopsy (i.e., were more likely to have an autopsy). In the first sensitivity model that excluded patients who died within 3 days of receiving anesthesia, predictors of not having an autopsy were similar to the main model. In the second sensitivity model, the parameter estimates remained nearly identical to the main model with the exception of inpatient cases and decedents with cancer comorbidities. These changes were driven by the near elimination of outpatient and cancer cases from the model. Based on the reported number of opioid deaths by the CDC MCOD, there were 4,303 residents of Cook County with opioid-involved deaths between 2016 and 2019. In our data linkage analysis, we identified 633 more cases than reported by the CDC, representing an undercount of 15%. However, it appears that the CDC is primarily missing cases that die after admission to a hospital. The number of inpatient deaths (place of death “Medical Facility—Inpatient”) reported by the CDC was 413 compared to 1,104 in our dataset. 
Our analysis identified 2.7 times more inpatient cases, which accounts for almost precisely the difference between the CDC count and ours. In addition, the difference in counts among inpatient cases (413 vs 1,104; the CDC count represents 37.4% of the inpatient deaths identified in our analysis) is close to the reported autopsy rate for all injury cases (ICD-10 S and T codes) admitted to a hospital among Cook County residents between 2016 and 2019 (CDC-reported autopsy rate of 25%). Furthermore, when comparing the age distribution of the cases identified in our analysis with those reported by the CDC, the CDC data are primarily missing decedents over the age of 45 years (Figure), which is consistent with our multivariable analysis. Even after excluding opioid-involved deaths with codes associated with therapeutic adverse effects, underdosing, or remission, patients diagnosed with cancer who only had a diagnosis of opioid-use disorder (F11) but no T40 codes, and those with fewer than two clinical signs of opioid overdose, there are 267 more cases than reported by the CDC (6% undercount). This linkage project of hospital and medical examiner data evaluating opioid-involved deaths in Cook County, Illinois, demonstrates (1) overall low autopsy rates for opioid-involved deaths occurring in a hospital setting, (2) that the odds of an autopsy vary by age, clinical characteristics, and type of drug exposure, and (3) a resulting undercount of opioid-involved deaths occurring in a hospital setting. Our undercount estimate, ranging between 6% and 15%, is smaller than the low-end estimate in prior studies, which have only used death certificate data and focused primarily on misclassification of unspecified drug overdoses (ICD-10: T50.9). As our approach of linking hospital data to medical examiner data is unique relative to these prior studies, it is likely that the undercount identified in our analysis exists in addition to the estimated 20%–40% undercount identified within death records.
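As a check, the headline undercount figures can be reproduced with simple arithmetic; every count below is taken from the text above.

```python
# Worked arithmetic behind the reported undercount figures.
cdc_total = 4303          # CDC MCOD opioid-involved deaths, Cook County residents
linked_total = 4936       # unique deaths identified by the data linkage
extra = linked_total - cdc_total              # cases missed by the CDC
undercount_pct = 100 * extra / cdc_total      # ~15% undercount

cdc_inpatient = 413       # CDC place of death "Medical Facility - Inpatient"
linked_inpatient = 1104   # inpatient deaths identified in the hospital data
inpatient_ratio = linked_inpatient / cdc_inpatient  # ~2.7x more cases

sensitivity_extra = 267   # excess cases under the strictest sensitivity model
sensitivity_pct = 100 * sensitivity_extra / cdc_total  # ~6% undercount
```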
Because this analysis uses a novel approach to identify opioid-involved deaths, we assessed the inclusion of hospital opioid-involved deaths through a series of steps to improve the validity of the inclusion criteria and reduce misclassification. First, we based the inclusion criteria for hospitalized cases on national criteria developed by CSTE rather than arbitrary criteria. All of the persons included in this study had an ICD-10 code indicating opioid-use disorder with intoxication or acute opioid poisoning (T40). Second, all hospital cases presented clinical signs consistent with the recognized opioid toxidrome, and the proportions of these signs were substantially higher than in the general hospital population of deaths (i.e., in-hospital mortality related to any cause). Third, as with death records, the focus is on contributing causes, not solely on the primary underlying cause. Opioids directly increase mortality by causing respiratory depression, anoxia, hypoxia, and related brain damage, but also by increasing the risk of infection and other adverse systemic effects that lead to medical complications and increased in-hospital mortality. For this reason, the CDC uses multiple contributing causes to estimate total opioid-involved deaths, not just the underlying cause of death. Fourth, the distribution of the constellation of clinical signs associated with opioid overdose was similar for decedents without an autopsy and for those confirmed by the pathologist at the medical examiner's office. Fifth, the cases that were identified by hospital staff as T40 opioid overdoses and had clinical signs of overdose were not being systematically autopsied. Only 126 (24%) of the 523 persons with a T40 code had an autopsy, despite all cases presenting with other clinical signs of the opioid toxidrome.
Past studies have shown that women, white non-Hispanics, those dying from suicide, residents of nonurban lower-income counties, and people in specific age groups are more likely to be coded as "T50.9: unspecified drug overdose" on death records. Some studies have also identified lower general autopsy rates among white non-Hispanics, older individuals, and individuals with chronic diseases, but these studies were not specific to opioid-involved deaths. However, our study connects the evidence between death record misclassification and autopsy rates, showing that a substantial number of opioid-involved deaths not involving heroin and fentanyl (i.e., prescription opioids), particularly among older patients with chronic health conditions, do not have autopsies and are missed entirely within death records. ICD-10-CM coding of opioids has only six categories. In our analysis, we focus on ICD-10 codes that had the greatest specificity in identifying a single agent or that correspond to the opioids associated with "illicit street drugs" (heroin and fentanyl). Since there were very few patients testing positive for opium and methadone alone (n = 14), the inverse of the heroin and synthetic opioid category almost exclusively represents prescription opioids. If the findings are representative of a national pattern, the contribution of prescription opioids to mortality is being diminished in national death data, particularly underestimating the lethality of prescription opioids in the absence of concurrent use of heroin or fentanyl. Furthermore, with data demonstrating increased opioid misuse among the elderly, the low autopsy rates may indicate that we are also substantially undercounting opioid-involved deaths among the elderly. There are several potential limitations to this analysis. First, our analysis utilized medical examiner data, which do not include all death certificates.
It is possible that the death records completed by a physician at the hospital accurately recorded opioids as a contributing cause of death. However, cumulative deaths reported exclusively by the ME coincide with the count reported by CDC vital records, indicating that very few opioid-involved deaths without a subsequent autopsy had opioids included as a contributing cause on the death certificate. Second, hospital ICD-10 coding may not accurately or completely capture cases of opioid intoxication/overdose. However, two large analyses of medical record coding noted high positive predictive values of coding opioid poisoning in electronic health records, with 96% of opioid intoxication/overdose cases coded correctly. Third, we did not include opioid-involved cases discharged to hospice because we do not have information on their dates of death (an additional 840 deaths). It is possible that many of these hospice patients are also missed on death records, given the concordance between the numbers reported by the CCME and CDC MCOD vital records. There is little doubt that the increase of fentanyl congeners in the illicit drug supply chain has contributed to the precipitous rise in opioid-involved deaths over the last 10 years. However, in our analysis, opioid-involved deaths involving prescription opioids were less likely to have an autopsy and be captured in death certificates. It is important to characterize decedents who do and do not have autopsies in order to develop better estimates of opioid-involved deaths and, more importantly, enhanced surveillance systems and overdose prevention programs to avoid undercounting individuals at risk of experiencing an opioid-involved death.
Immunohistochemical comparison of three programmed death-ligand 1 (PD-L1) assays in triple-negative breast cancer
2727bda5-1400-41e3-8de9-c09071c26571
8462691
Anatomy[mh]
Triple-negative breast cancer (TNBC), characterized by the absence of estrogen and progesterone receptors and human epidermal growth factor receptor 2 (HER2), accounts for 12%–17% of breast cancers. It is well known that the rates of recurrence, distant metastasis, and mortality are significantly higher in TNBC than in other breast cancer subtypes. One of the reasons for the high mortality rate is the limited therapeutic options. However, immune checkpoint inhibitors, such as anti-programmed death ligand 1 (PD-L1) and anti-programmed death protein 1 (PD-1) agents, have been breakthroughs in the treatment of patients with TNBC. Some studies have reported that 20%–58% of TNBC patients express PD-L1, and higher expression of PD-L1 has been observed in TNBC patients than in non-TNBC individuals. Moreover, several studies have demonstrated the effectiveness of immune checkpoint inhibitors in patients with TNBC. For example, the IMpassion130 trial (NCT02425891) showed that, as first-line treatment, the anti-PD-L1 agent atezolizumab plus nab-paclitaxel was superior to placebo plus nab-paclitaxel for advanced or metastatic TNBC patients showing ≥ 1% PD-L1 expression on immune cells (ICs). Therefore, the identification of TNBC patients who may benefit from immune checkpoint inhibitors is a critical issue. Immunohistochemical assays are used to evaluate PD-L1 expression. Currently, several primary antibodies for PD-L1 and several immunohistochemical protocols and platforms are available for commercial use. Each assay is linked to a specific therapeutic agent. For example, in non-small cell lung cancer, the 22C3 assay has been approved as a companion diagnostic for pembrolizumab and the SP263 assay for durvalumab.
In TNBC, the SP142 assay is the companion diagnostic for atezolizumab, the 73–10 assay is the companion diagnostic for avelumab (JAVELIN Solid Tumor study; NCT01772004), and the E1L3N assay is used as a laboratory-developed test; these assays have different cut-off values for PD-L1 immunoreactivity and use different types of positive cells (tumor cells (TCs) vs. ICs). Moreover, the differences in positive immunoreactivity among primary PD-L1 antibodies are well known. In lung cancer, some studies, including the Blueprint PD-L1 immunohistochemical assay comparison study, evaluated the differences in the properties of PD-L1 primary antibodies. Although a few studies have analyzed PD-L1 immunoreactivity using the 28-8, 22C3, SP142, SP263, and E1L3N assays in TNBC patients, the immunoreactivity of PD-L1 using the 73–10 assay has not been compared with that of the SP142 assay. Thus, we aimed to evaluate PD-L1 immunoreactivity using the SP142, 73–10, and E1L3N assays in TNBC tissues. We selected 165 consecutive patients with TNBC who underwent surgical resection at the Department of Surgery of the Kansai Medical University Hospital between January 2006 and December 2018. Patients who received neoadjuvant chemotherapy were excluded from the study because neoadjuvant chemotherapy may influence PD-L1 expression. Patients who were diagnosed with invasive breast carcinoma of no special type according to the recent World Health Organization Classification of Breast Tumors were selected. Patients with a special type of invasive carcinoma were excluded from the study because each special type of carcinoma has unique clinicopathological features. In total, 62 patients with TNBC were included in the study cohort. This study cohort was fundamentally the same as that used in our previous studies.
In a previous study, we analyzed the relationship between adipophilin expression, a lipid droplet-associated protein, and the clinicopathological features of patients with TNBC. In our previous studies, we examined the significance of PD-L1 expression in cancer-associated fibroblasts, and the relationship between CD155, an immune checkpoint protein, and PD-L1 expression in TNBC tissues. Thus, the contents of the present study do not overlap with those of our previous studies. This retrospective single-institution study was conducted in accordance with the principles of the Declaration of Helsinki, and the study protocol was approved by the Institutional Review Board of the Kansai Medical University Hospital (Approval #2019041). All data were fully anonymized. The Institutional Review Board waived the requirement for informed consent because of the retrospective design of the study; medical records and archival samples were used with no risk to the participants. Moreover, the present study did not include minors. Information regarding this study, such as the inclusion criteria and the opportunity to opt out, was provided through the institutional website. Surgically resected specimens were fixed with formalin, sectioned, and stained with hematoxylin and eosin. All histopathological diagnoses were independently evaluated by at least two experienced diagnostic pathologists. We used the TNM Classification of Malignant Tumors, Eighth Edition. Histopathological grading was based on the Nottingham histological grade. According to a meta-analysis of patients with TNBC, a Ki-67 labeling index (LI) ≥ 40% was considered high in operative specimens. Stromal tumor-infiltrating lymphocytes (TILs) were identified using hematoxylin and eosin staining, and tumors were considered lymphocyte-predominant breast cancer (LPBC) at ≥ 60% and non-LPBC at < 60%, according to the TIL Working Group recommendation.
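The two histopathological cut-offs above are simple threshold rules; a hypothetical sketch (function names are invented for illustration, not from the study) makes them explicit.

```python
# Hypothetical helpers encoding the cut-offs described above:
# Ki-67 labeling index >= 40% counts as high, and stromal TILs >= 60%
# classify a tumor as lymphocyte-predominant breast cancer (LPBC).

def classify_ki67(labeling_index_pct):
    return "high" if labeling_index_pct >= 40 else "low"

def classify_tils(stromal_til_pct):
    return "LPBC" if stromal_til_pct >= 60 else "non-LPBC"
```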
Hematoxylin and eosin-stained slides were used to select the regions that were morphologically most representative of carcinoma, and three tissue cores with a diameter of 2 mm were punched out from the paraffin-embedded blocks for each patient. The tissue cores were arrayed in the recipient paraffin blocks. Immunohistochemical analyses were performed using an autostainer (the SP142 and E1L3N assays on the Discovery ULTRA System; Roche Diagnostics, Basel, Switzerland; and the 73–10 assay on the Leica Bond-III; Leica Biosystems, Bannockburn, IL, USA) according to the manufacturers' instructions. Three different primary monoclonal antibodies were used to detect PD-L1: SP142 (Roche Diagnostics, Basel, Switzerland), E1L3N (Cell Signaling Technology, Danvers, MA, USA), and 73–10 (Leica Biosystems, Newcastle, UK). A minimum of two researchers independently evaluated the immunohistochemical staining results. PD-L1 expression on the ICs (lymphocytes, macrophages, dendritic cells, and granulocytes) of all samples was evaluated. PD-L1 expression on ICs was assessed as the proportion of tumor area occupied by PD-L1-positive ICs of any intensity, using the same method as previously reported. Tumor area was defined as the area containing viable tumor cells, associated intratumoral stroma, and contiguous peritumoral stroma. PD-L1 positivity was assessed by the percentage of PD-L1-positive ICs relative to the total number of ICs and defined as positive when PD-L1-expressing ICs were ≥ 1% in the tumor area. PD-L1 expression on TCs was assessed as the proportion of viable invasive carcinoma cells showing membranous staining of any intensity divided by the total number of viable invasive carcinoma cells. PD-L1 expression on ≥ 1% of TCs was defined as positive. All analyses were performed using Statistical Package for the Social Sciences (SPSS) Statistics software (version 27.0, IBM, Armonk, NY, USA).
The differences in the PD-L1 expression levels of identical specimens detected by the SP142, 73–10, and E1L3N assays were analyzed using the Wilcoxon matched-pairs signed-rank test. Correlations between two groups were determined using Fisher's exact test for categorical variables. Agreement between two groups was analyzed using the kappa test. Statistical significance was set at p < 0.05. We selected 165 consecutive patients with TNBC who underwent surgical resection at the Department of Surgery of Kansai Medical University Hospital between January 2006 and December 2018. Patients who received neoadjuvant chemotherapy were excluded from the study because neoadjuvant chemotherapy may influence PD-L1 expression. Patients who were diagnosed with invasive breast carcinoma of no special type according to the recent World Health Organization Classification of Breast Tumors were selected; patients with a special type of invasive carcinoma were excluded because each special type of carcinoma has unique clinicopathological features. In total, 62 patients with TNBC were included in the study cohort. This study cohort was fundamentally the same as that used in our previous studies [ – ].
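The analyses themselves were run in SPSS; purely as an illustration of the two-sided Fisher's exact test applied to the categorical comparisons, here is a self-contained sketch that enumerates the hypergeometric distribution of a 2×2 table (our own helper, not the study's code):

```python
from math import comb

def fisher_exact_2x2(table):
    """Two-sided Fisher's exact test p-value for a 2x2 table [[a, b], [c, d]]:
    sum the probabilities of every table with the same margins whose
    probability does not exceed that of the observed table."""
    (a, b), (c, d) = table
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x):  # hypergeometric probability of the table with top-left cell = x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

print(round(fisher_exact_2x2([[3, 1], [1, 3]]), 4))  # 0.4857
```

The enumeration approach is exact for the small per-subgroup counts reported in this cohort (19–31 patients per stratum).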
Patients' characteristics

This study included 62 female patients, and their clinical and pathological characteristics are summarized in . The median age at the time of initial diagnosis was 68 years (range, 31–93 years). Based on the biopsy results, all patients had TNBC (invasive carcinomas of no special type).
PD-L1 expression status using different assays

The prevalence of PD-L1 expression on ICs was 79.0% (49 patients), 67.7% (42 patients), and 46.8% (29 patients) as determined using the 73–10, SP142, and E1L3N assays, respectively ( ), while the prevalence of PD-L1 expression on TCs was 17.7% (11 patients), 6.5% (4 patients), and 12.9% (8 patients) using the 73–10, SP142, and E1L3N assays, respectively ( ). Representative expression patterns of PD-L1 on ICs and TCs are shown for each assay (Figs – ).

Comparison of PD-L1 expression on ICs among the 73–10, SP142, and E1L3N assays

The expression levels of PD-L1 on ICs analyzed by the 73–10, SP142, and E1L3N assays are illustrated in . Higher PD-L1 expression was noted using the 73–10 assay than using the SP142 assay (median [range], 8% [0%–80%] (73–10 assay) vs. 1% [0%–50%] (SP142 assay), p < 0.001). Fifty patients (80.6%) were positive for PD-L1 expression on their ICs using either the 73–10 or the SP142 assay, and the remaining 12 patients (19.4%) tested negative for PD-L1 based on the results of both assays ( ). The concordance rate between the 73–10 and SP142 assays was 85.5%, and Cohen's kappa coefficient was 0.634 (substantial agreement, p < 0.001). Higher PD-L1 expression was also noted using the 73–10 assay than the E1L3N assay (median [range], 8% [0%–80%] (73–10 assay) vs. 0% [0%–40%] (E1L3N assay), p < 0.001). Forty-eight patients (79.0%) tested positive for PD-L1 expression on their ICs as determined using either the 73–10 or the E1L3N assay, and the remaining 13 (21.0%) patients tested negative according to the results of both assays ( ); the concordance rate was 67.7%, and Cohen's kappa coefficient was 0.378 (fair agreement, p < 0.001). Higher PD-L1 expression was also noted using the SP142 assay than the E1L3N assay (median [range], 1% [0%–50%] (SP142 assay) vs. 0% [0%–40%] (E1L3N assay), p = 0.002).
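As a consistency check on these figures, the 2×2 agreement table for the 73–10 and SP142 assays on ICs can be reconstructed from the reported marginals (49 and 42 positives, 12 double-negatives, 62 patients), and the concordance rate and Cohen's kappa recomputed. The cell counts below are our reconstruction, not values taken from the paper:

```python
# 50 patients were positive by at least one assay, so the number
# positive by both assays is 49 + 42 - 50 = 41.
table = [[41, 8],   # 73-10 positive: SP142 positive / SP142 negative
         [1, 12]]   # 73-10 negative: SP142 positive / SP142 negative

def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

n = sum(map(sum, table))
print(round((table[0][0] + table[1][1]) / n * 100, 1))  # 85.5 (% concordance)
print(round(cohens_kappa(table), 3))                    # 0.634 (substantial)
```

Both values match the concordance rate and kappa coefficient reported in the text.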
Forty-two patients (67.7%) were positive for PD-L1 expression on their ICs using either the SP142 or the E1L3N assay, and the remaining 20 (32.3%) patients were negative according to both assays ( ); the concordance rate was 79.0%, and Cohen's kappa coefficient was 0.590 (moderate agreement, p < 0.001).

Comparison of PD-L1 expression levels on TCs using the 73–10, SP142, and E1L3N assays

Higher PD-L1 expression was noted using the 73–10 assay than the SP142 assay. Eleven patients (17.7%) tested positive for PD-L1 expression on their TCs using either the 73–10 or the SP142 assay, and the remaining 51 patients (82.3%) tested negative for PD-L1 according to both assays ( ). The concordance rate between the 73–10 and SP142 assays was 88.7%, and Cohen's kappa coefficient was 0.485 (moderate agreement, p < 0.001). Higher PD-L1 expression was noted using the 73–10 assay than the E1L3N assay. Eleven patients (17.7%) tested positive for PD-L1 expression on their TCs using either the 73–10 or the E1L3N assay, and the remaining 51 (82.3%) patients tested negative according to both assays ( ); the concordance rate was 95.2%, and Cohen's kappa coefficient was 0.814 (almost perfect agreement, p < 0.001). Higher PD-L1 expression was also noted using the E1L3N assay than the SP142 assay. Eight patients (12.9%) tested positive for PD-L1 expression on their TCs using either the E1L3N or the SP142 assay, and the remaining 54 (87.1%) patients tested negative according to both assays ( ); the concordance rate was 93.5%, and Cohen's kappa coefficient was 0.635 (substantial agreement, p < 0.001).

PD-L1 expression status on ICs based on sample age using the 73–10, SP142, and E1L3N assays

The rates of PD-L1 expression in samples of different ages as determined using the 73–10, SP142, and E1L3N assays are illustrated in .
The positivity rates of PD-L1 expression using the 73–10, SP142, and E1L3N assays were 84.2%, 84.2%, and 52.6% in samples aged < 5 years; 79.2%, 58.3%, and 45.8% in samples aged ≥ 5 and < 10 years; and 73.7%, 63.2%, and 42.1% in samples aged > 10 years, respectively. The concordance rates of the SP142 and E1L3N assays with the 73–10 assay in samples aged < 5 years were 89.5% and 68.4%, and the Cohen's kappa coefficients were 0.604 (substantial agreement, p = 0.008) and 0.345 (fair agreement, p = 0.047), respectively ( ). The concordance rate between the SP142 and E1L3N assays in samples aged < 5 years was 68.4%, and the Cohen's kappa coefficient was 0.345 (fair agreement, p = 0.047) ( ). The concordance rates of the SP142 and E1L3N assays with the 73–10 assay in samples aged ≥ 5 and < 10 years were 79.2% and 66.7%, and the Cohen's kappa coefficients were 0.538 (moderate agreement, p = 0.003) and 0.364 (fair agreement, p = 0.021), respectively ( ). The concordance rate between the SP142 and E1L3N assays in samples aged ≥ 5 and < 10 years was 87.5%, and the Cohen's kappa coefficient was 0.753 (substantial agreement, p < 0.001) ( ). The concordance rates of the SP142 and E1L3N assays with the 73–10 assay in samples aged > 10 years were 89.5% and 68.4%, and the Cohen's kappa coefficients were 0.759 (substantial agreement, p = 0.001) and 0.412 (moderate agreement, p = 0.026), respectively ( ). The concordance rate between the SP142 and E1L3N assays in samples aged > 10 years was 78.9%, and the Cohen's kappa coefficient was 0.596 (moderate agreement, p = 0.005) ( ).

PD-L1 expression status on TCs based on sample age using the 73–10, SP142, and E1L3N assays

PD-L1 expression rates on TCs based on different sample ages using the 73–10, SP142, and E1L3N assays are illustrated in .
The positivity rates of PD-L1 expression using the 73–10, SP142, and E1L3N assays were 5.3%, 0%, and 5.3% in samples aged < 5 years; 20.8%, 8.3%, and 12.5% in samples aged ≥ 5 and < 10 years; and 26.3%, 10.5%, and 20.1% in samples aged > 10 years, respectively. The concordance rates of the SP142 and E1L3N assays with the 73–10 assay in samples aged < 5 years were 94.7% and 100.0%, and the Cohen's kappa coefficients were noncalculable and 1.000 (perfect agreement, p < 0.001), respectively ( ). The concordance rate between the SP142 and E1L3N assays in samples aged < 5 years was 94.7%, and the Cohen's kappa coefficient was noncalculable ( ). The concordance rates of the SP142 and E1L3N assays with the 73–10 assay in samples aged ≥ 5 and < 10 years were 87.5% and 91.7%, and the Cohen's kappa coefficients were 0.514 (moderate agreement, p = 0.004) and 0.704 (substantial agreement, p < 0.001), respectively ( ). The concordance rate between the SP142 and E1L3N assays in samples aged ≥ 5 and < 10 years was 95.8%, and the Cohen's kappa coefficient was 0.778 (substantial agreement, p < 0.001) ( ). The concordance rates of the SP142 and E1L3N assays with the 73–10 assay in samples aged > 10 years were 84.2% and 94.7%, and the Cohen's kappa coefficients were 0.496 (moderate agreement, p = 0.012) and 0.855 (almost perfect agreement, p < 0.001), respectively ( ). The concordance rate between the SP142 and E1L3N assays in samples aged > 10 years was 89.5%, and the Cohen's kappa coefficient was 0.612 (substantial agreement, p = 0.004) ( ).

PD-L1 expression status on ICs according to tumor diameter using the 73–10, SP142, and E1L3N assays

Positivity rates of PD-L1 expression for different tumor diameters according to the 73–10, SP142, and E1L3N assays are illustrated in .
According to tumor diameter, the positivity rates of PD-L1 expression using the 73–10, SP142, and E1L3N assays were 87.1%, 77.4%, and 54.8% for tumors with a diameter ≤ 20 mm, and 71.0%, 58.1%, and 38.7% for tumors with a diameter > 20 mm, respectively. The concordance rates of the SP142 and E1L3N assays with the 73–10 assay for tumors with a diameter ≤ 20 mm were 90.3% and 67.7%, and the Cohen's kappa coefficients were 0.674 (substantial agreement, p < 0.001) and 0.305 (fair agreement, p = 0.018), respectively ( ). The concordance rate between the SP142 and E1L3N assays for tumors with a diameter ≤ 20 mm was 77.4%, and the Cohen's kappa coefficient was 0.523 (moderate agreement, p = 0.001) ( ). The concordance rates of the SP142 and E1L3N assays with the 73–10 assay for tumors with a diameter > 20 mm were 80.6% and 67.7%, and the Cohen's kappa coefficients were 0.585 (moderate agreement, p = 0.001) and 0.411 (moderate agreement, p = 0.005), respectively ( ). The concordance rate between the SP142 and E1L3N assays for tumors with a diameter > 20 mm was 80.6%, and the Cohen's kappa coefficient was 0.627 (substantial agreement, p < 0.001) ( ).

PD-L1 expression status on TCs based on tumor diameter using the 73–10, SP142, and E1L3N assays

Positivity rates of PD-L1 on TCs for different tumor diameters according to the 73–10, SP142, and E1L3N assays are illustrated in . According to tumor diameter, the positivity rates of PD-L1 expression using the 73–10, SP142, and E1L3N assays were 16.1%, 3.2%, and 9.7% for tumors with a diameter ≤ 20 mm, and 19.4%, 9.7%, and 16.1% for tumors with a diameter > 20 mm, respectively. The concordance rates of the SP142 and E1L3N assays with the 73–10 assay for tumors with a diameter ≤ 20 mm were 87.1% and 93.5%, and the Cohen's kappa coefficients were 0.295 (fair agreement, p = 0.02) and 0.716 (substantial agreement, p < 0.001), respectively ( ).
The concordance rate between the SP142 and E1L3N assays for tumors with a diameter ≤ 20 mm was 93.5%, and the Cohen's kappa coefficient was 0.475 (moderate agreement, p = 0.002) ( ). The concordance rates of the SP142 and E1L3N assays with the 73–10 assay for tumors with a diameter > 20 mm were 90.3% and 96.8%, and the Cohen's kappa coefficients were 0.617 (substantial agreement, p < 0.001) and 0.890 (almost perfect agreement, p < 0.001), respectively ( ). The concordance rate between the SP142 and E1L3N assays for tumors with a diameter > 20 mm was 93.5%, and the Cohen's kappa coefficient was 0.716 (substantial agreement, p < 0.001) ( ).
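The verbal labels attached to the kappa coefficients throughout these results ("fair", "moderate", "substantial", "almost perfect") match the widely used Landis–Koch bands. The mapping below is our sketch of that scale, which the text does not name explicitly:

```python
# Landis-Koch interpretation bands for Cohen's kappa (assumed scale,
# not stated in the paper).
def interpret_kappa(kappa):
    if kappa < 0:
        return "poor"
    for upper, label in [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
                         (0.80, "substantial"), (1.00, "almost perfect")]:
        if kappa <= upper:
            return label
    return "almost perfect"

for k in (0.378, 0.590, 0.634, 0.814):
    print(k, interpret_kappa(k))  # fair, moderate, substantial, almost perfect
```

Every kappa label reported above is consistent with this mapping (with kappa = 1.000 additionally described as perfect agreement).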
In the present study, for the first time, we analyzed the PD-L1 expression status on ICs and TCs in TNBC tissues using the 73–10 assay and compared it with the expression status according to the SP142 and E1L3N assays. The highest positivity rate on ICs was observed using the 73–10 assay, followed by the SP142 and E1L3N assays, and the highest positivity rate on TCs was observed using the 73–10 assay, followed by the E1L3N and SP142 assays. For ICs, substantial agreement was observed in the concordance rate between the 73–10 and SP142 assays, and fair agreement between the 73–10 and E1L3N assays. For TCs, the concordance rate between the 73–10 and SP142 assays was in moderate agreement, and the rate between the 73–10 and E1L3N assays was in almost perfect agreement. The SP142 assay is used as a companion diagnostic for atezolizumab in patients with TNBC. The IMpassion130 trial clearly demonstrated that atezolizumab plus nab-paclitaxel significantly prolonged progression-free survival in patients with advanced or metastatic PD-L1-positive TNBC; the trial defined PD-L1 positivity as PD-L1-expressing ICs ≥ 1% in the tumor area, which is the same definition used in this study.
However, various immunohistochemical platforms have been developed to evaluate PD-L1 expression, and few studies have compared the differences among all the PD-L1 immunohistochemical assays in TNBC tissues [ – ]. For example, the 73–10 assay, which is the companion diagnostic test for avelumab, has not yet been analyzed in TNBC tissues. summarizes the comparisons of PD-L1 expression on ICs using different antibodies according to the results of previous studies and the present one. The rates of PD-L1 expression were relatively different among these studies [ – ]. The PD-L1 positivity rates using the SP142 assay ranged from 19.3% to 67.7%, and those using the 22C3 assay ranged from 32.6% to 81% (the 22C3 assay was analyzed by a combined positive score). Although the PD-L1 positivity rate on ICs was defined as more than one PD-L1-positive IC in one study, the remaining studies, including the present one, used the same definition (PD-L1-expressing ICs ≥ 1%). Our cohort showed the highest positivity rate using the SP142 assay (67.7%). summarizes the comparisons of PD-L1 expression status on TCs using different antibodies according to the results of previous studies and the present one. In contrast to the results obtained for ICs, the positivity rates of PD-L1 expression on TCs were relatively consistent among all previous studies [ – ]. The PD-L1 positivity rates based on the SP142 assay ranged from 5.1% to 16.8%. All studies, including the present one, defined PD-L1-expressing TCs ≥ 1% as positive [ – ]. The sample size, population, and interobserver variation may have influenced these results. Except for the post-hoc immunohistochemical analysis of the IMpassion130 trial and another study, four studies, including the present one, used the tissue microarray (TMA) technique to evaluate PD-L1 expression.
Selection bias of the tumor sample may influence the positivity rate of PD-L1 expression because PD-L1 expression can show heterogeneity within the same tumor tissue. Moreover, the patient population may also influence the difference in PD-L1 expression. The patients in the IMpassion130 trial had metastatic or unresectable advanced TNBC. In contrast, most of our patients had no metastasis, and the present study included TNBC patients in various stages with or without metastasis. Moreover, the ratio of LPBCs in the cohort might have influenced the PD-L1 positivity rate. This cohort comprised 30.6% LPBC cases, and this type of information is available in only one other study (28.9%). Thus, additional studies are needed to clarify the PD-L1 expression status in a larger patient population, which should also record the percentage of LPBC cases. Although the positivity rates of PD-L1 on ICs were relatively different among the studies, the concordance among primary antibodies for PD-L1 was relatively high. Previous reports demonstrated 91.2% concordance between the SP263 and SP142 assays, 86.2% between the 28–8 and E1L3N assays, 78.0% between the E1L3N and SP142 assays, 95% between the 28–8 and 22C3 assays, 84% between the 28–8 and SP142 assays, and 85% between the 22C3 and SP142 assays. The present study showed 85.5% concordance between the 73–10 and SP142 assays, and 67.7% between the 73–10 and E1L3N assays. Moreover, the differences in the positivity rates of PD-L1 expression on TCs among studies were smaller, and the concordance rate among primary antibodies for PD-L1 was also high. Previous reports demonstrated 70.0% concordance between the SP263 and SP142 assays, 92.9% between the 28–8 and 22C3 assays, 88.8% between the 28–8 and SP142 assays, and 89.8% between the 22C3 and SP142 assays; the kappa value between the 28–8 and E1L3N assays was 0.752, and that between the SP142 and E1L3N assays was 0.537.
The present study showed 88.7% concordance between the 73–10 and SP142 assays (kappa coefficient: 0.485), and 95.2% between the 73–10 and E1L3N assays (kappa coefficient: 0.814). This study also demonstrated substantial agreement between the 73–10 and SP142 assays on ICs. However, the present study provided no information to assess the predictive value of the efficacy of anti-PD-L1 immunotherapy because none of the patients in this cohort were treated with this therapy. Moreover, the present study showed that sample age and tumor diameter did not influence the PD-L1 expression rates on either ICs or TCs among the three primary antibodies for PD-L1; this is the first time such an observation has been made. Consistent with our results, a previous study showed that sample age did not influence the PD-L1-positive ratio (28–8 and 22C3 assays) in non-small cell lung cancer. There were some limitations to the present study. First, this study used a small sample size (approximately 50% of patients had Nottingham histological grade 3) from a single institution, which could have led to selection bias. Second, TMA was used to evaluate PD-L1 expression; this may have led to selection bias, although we selected regions that were the most representative of carcinoma tissue. In TNBC tissues, it is recommended that a whole section be used for the evaluation of PD-L1 expression; however, in this study, we did not aim to assess the prognostic or diagnostic significance of PD-L1 expression but rather compared PD-L1 expression in the same samples using three different assays; thus, the use of TMA may be acceptable. Third, the present study provided no information to assess the predictive value of the efficacy of anti-PD-L1 immunotherapy. Thus, further studies are needed to clarify the PD-L1 expression status among various primary antibodies in a larger population of patients treated with anti-PD-L1 immunotherapy.
In conclusion, the present study demonstrated that the positivity rates of PD-L1 expression on ICs were highest with the 73–10 assay, followed by the SP142 and E1L3N assays, and that there was substantial agreement between the 73–10 and SP142 assays. However, further studies are needed to clarify the PD-L1 expression status among various primary antibodies in a larger patient population treated with anti-PD-L1 immunotherapy. This would be a prerequisite for establishing an effective evaluation method to assess the predictive value of anti-PD-L1 immunotherapies.

S1 File. Clinicopathological characteristics of patients with triple-negative breast cancer. (PDF)
From Deutsche Zeitschrift to International Journal of Legal Medicine—100 years of legal medicine through the lens of journal articles
The International Journal of Legal Medicine can trace its roots right back to the middle of the nineteenth century. The journal which eventually developed into the International Journal of Legal Medicine was founded by Johann Ludwig Casper, the great reformer of Prussian legal medicine, who embodies the beginnings of the journal’s long tradition. He inaugurated and edited the Vierteljahrsschrift für gerichtliche und öffentliche Medicin (Quarterly Journal of Legal and Public Health Medicine), first published in early 1852. By the time of his death in 1864, a total of 25 volumes had been published. While a number of similar periodicals came and went, his Vierteljahrsschrift endured. New editor Wilhelm Horn praised the previous volumes as “an archive for science” in which “many of the most significant questions have been debated and brought nearer to a conclusion”. The subsequent “Neue Folge” (New Series) commenced in 1864 with Volume 1 and was published under the existing title until Volume 15 in 1871, and thereafter as the Vierteljahrsschrift für gerichtliche Medicin und öffentliches Sanitätswesen (Quarterly Journal of Legal Medicine and Public Health) until Volume 53 in 1890. Following Horn’s death in 1871, from Volume 14 the Vierteljahrsschrift was edited by Hermann Eulenburg. When Albrecht Wernich became editor in 1891, the “Dritte Folge” (Third Series) again restarted from Volume 1. From Volume 12 in 1896, the appointment of Fritz Strassmann, associate professor and director of the Berlin institute, meant that, for the first time in decades, the journal was once more under the editorship of a representative of the legal medicine profession. He co-edited the journal with a senior government medical officer until it ceased publication with Volume 62 in 1921.
The collaboration between two editors from different specialities helped to establish the Vierteljahrsschrift as a periodical for the old Staatsarzneikunde (combination of Legal Medicine and Public Health). The Dritte Folge was directly followed from 1922 by the Deutsche Zeitschrift für die gesamte gerichtliche Medizin (which, for brevity, we hereafter refer to as the Deutsche Zeitschrift). The continuity between the two periodicals was made explicit by its subtitle, Fortsetzung der Vierteljahrsschrift für gerichtliche Medizin und öffentliches Sanitätswesen (Continuation of the Vierteljahrsschrift). The title page also identified the new journal as Organ der deutschen Gesellschaft für gerichtliche und soziale Medizin (Official Journal of the German Association for Legal and Social Medicine), so that the scientific association, founded in 1904, now had its own journal. The expanded editorial board consisted of Fritz Strassmann, two of his postgraduate students, Paul Fraenckel based in Berlin and Georg Puppe based in Breslau, and forensic psychiatrist Ernst Schultze based in Göttingen. After Puppe’s death in 1925, Ernst Ziemke, a Professor at Kiel and also an adherent of the Strassmann school, joined the committee starting with Volume 7. Following the Nazi seizure of power, two further editors, Friedrich Pietrusky (a professor from Bonn) and Eduard Schütt (a legal medical officer from Wuppertal-Barmen), joined the editorial board commencing with Volume 21. Both had distinguished themselves through their exceptionally zealous support for the new ruling party. In 1935, the worsening political situation in Germany resulted in a reorganisation of the editorial board. The implementation of the Reich Citizenship Law, the Nazi law depriving Jews of their German citizenship, meant that Fraenckel from Volume 25 and shortly afterwards Strassmann from Volume 26 were removed from their positions, and not just at the journal.
At the same time, Ziemke’s death in 1935 reduced the number of editors further, with Hermann Merkel from Munich appointed only in 1938 from Volume 29. Schultze died that same year, but the editorial board did not appoint another forensic psychiatrist to replace him. The Deutsche Zeitschrift continued to be edited by Merkel, Pietrusky and Schütt until Volume 35 in 1942. Pietrusky was then replaced by Kurt Walcher from Würzburg for Volumes 36 to 38 until 1944. The last issue of the Deutsche Zeitschrift to be published before the war’s end was Issue 5 of Volume 38 in spring 1944. As in the previous quarterly journal, after 1922 the contents continued to span original papers, abstracts and reports. As the official journal of the scientific association, the Deutsche Zeitschrift was used to publish conference proceedings. Our analysis focuses on original papers, because our aim is to present the academic development of legal medicine and related fields through the lens of the Deutsche Zeitschrift. We have therefore considered both original papers and presentations from the proceedings of the scientific association prepared for publication. In addition, for our exploration of the history and development of the subject, we have also considered opening speeches from conferences.

Key areas of academic enquiry in the period from 1922 to 1944

In keeping with the broad-based concept underpinning the Deutsche Zeitschrift, which extends well beyond the narrow confines of legal medicine, in addition to core legal medicine topics, our analysis of the content also covers topics from related forensic science fields. Other topics covered include historical and epistemological matters, plus laudations, obituaries, etc. The classification on which the analysis is based and an overview of the number of publications on individual topics over the period under review is shown in Table .

History and evolution of legal medicine

Legal medicine is one of the oldest medical specialities.
Its position on the boundary of medicine and law means that it has historical links to a wide range of other disciplines (24/1). The literature on the history of legal medicine does not reflect this variety. This deficiency, as he saw it, prompted Ferdinand von Neureiter to call for collaboration on a “History of German Legal Medicine as a Subject for Research and Teaching” (28/60). The interdisciplinary nature of legal medicine means that, over the years, the evolution of the subject has often been closely scrutinised. In 1924, Willy Vorkastner spoke at length on “The Status and Role of Legal Medicine” (5/89). In two key articles from 1928, both entitled “Old and New Directions in Legal Medicine” (11/1, 11/14), Richard Kockel and Fritz Reuter explored the different directions in which the discipline was evolving. The proceedings of the scientific association, which were printed in Volume 1 and later at irregular intervals in the Deutsche Zeitschrift, offer a useful overview of topical issues in legal medicine. Between 1933 and 1940, the journal published edited versions of talks at the 21st and 24th to 29th congresses of the scientific association. On many occasions, these expert gatherings were accompanied by political declarations reflecting the zeitgeist. As chairman of the German Society for Legal and Social Medicine, Berthold Mueller, at the time a Professor in Göttingen, concluded his opening speech at the 26th congress in spring 1937 (29/133) with the words, “As with all collective endeavours, today we would like to turn our thoughts to the man who, despite differences of opinion large and small, beyond parties and hatred, has brought the German people together to form a unified whole. We are conscious that, in our academic work too, we are answerable to him and to the German people. Heil Hitler!”. The opening speech at the 28th congress in spring 1939 (32/191) was given by Arthur Gütt, Undersecretary at the Reich Ministry of the Interior.
He gave delegates a detailed elucidation of the Health Service Standardisation Law of July 3, 1934. The Law replaced district medical officers with Health Authorities which, under Sect. 3(1)(III) of the Law, were made responsible for all legal medicine activities. As one advantage of this reform, Gütt emphasised the close relationship between the role performed by legal medicine and “Erb- und Rassenpflege” (“genetic and racial hygiene”). The welcome message delivered by Gerhard Buhtz, Chairman of the Society and a Professor at Breslau, at the 29th congress in spring 1940 (34/1), was also driven by the prevailing zeitgeist. In conjunction with a declaration of loyalty to the Nazi leadership and the war goals, he discussed endeavours to link together legal medicine and forensic science. In keeping with this trend, papers on requirements for university teaching focus particularly on teaching for detectives. Endeavours to achieve effective collaboration between legal medicine specialists, forensic scientists and lawyers are a key topic in Volume 18. Articles range from the role of legal medicine specialists at crime scenes, to issues relating to training, to the practice of law enforcement.

Personalia (People)

From the beginning, the Deutsche Zeitschrift served as the official journal of the scientific association. Leading personalities of the discipline were therefore celebrated on notable birthdays or in obituaries. In addition, a number of issues took the form of commemorative issues dedicated to Albin Haberda (1/581), Richard Kockel (5/3), Carl Ipsen (7/137), Ernst Ziemke (10/147), Fritz Strassmann (12/1), Emil Ungar (14/3) and Hermann Merkel (21/57).

Legal issues, expert witness work

Within the category legal issues, the most popular subject by far is criminal law and criminal procedure.
The largest group is made up of 13 papers from the 1920s, in which medics and criminal lawyers discuss the oft-revised draft of the Allgemeines Deutsches Strafgesetzbuch (German General Criminal Code). With respect to the General Part of the Criminal Code in force at the time, most notable are articles on guilt and criminal responsibility. The majority of discussions relating to the Special Part of the Criminal Code deal with clauses on violent and sexual offences. Of the papers concerned with criminal procedure, 3 of 7 papers deal with police interrogation. The period after 1933 sees the publication of 10 works explicitly referring to “Nazi criminal legislation”. There are two reviews exploring the realignment of criminal law principles. In one, a senior public prosecutor from Stendal states, “We absolutely may view the law as representing binding principles and at the same time an order from the government as to how things should be arranged. The legal system thus becomes a system of formal orders from the Führer. Criminal legislation in particular acquires meaning and purpose in the great work of the new concept of the ethnic body politic, which began with detoxification, purification and healing and progresses to defence and education” (24/8). The second review was penned by Berthold Mueller, who analysed some of the individual provisions of the future criminal law and criminal procedure. 
Medico-legal facets of his review include such problems as the principle of strict liability, whereby the harmful consequences of a criminal act are of less significance than the criminal will of the offender; Rasseverrat (“racial betrayal”), which asks forensic medical specialists to decide whether the identification of certain non-German characteristics in a child can be used to conclude with sufficient certainty that sexual intercourse with someone from “another race” has taken place; and, his final example, euthanasia: “The proposals set out in the Prussian memorandum [“Nazi criminal law”] would also make possible the destruction of lebensunwertes Leben (“life unworthy of life”). I do not believe any objections to this can be raised from either a racial or medical point of view” (24/114). Other articles on Nazi criminal law deal with other far-reaching new legislation, such as the Gesetz zur Verhütung erbkranken Nachwuchses (Law for the Prevention of Offspring with Hereditary Diseases) or the Verordnung gegen Volksschädlinge (roughly translatable as Regulation against Parasites on the Body of the People). Articles on other areas of law are primarily concerned with family law and medical law. A review entitled “Legal Issues in Road Traffic Accidents” (28/30) deals with current legislation and case law. There are also isolated essays on the Code of Civil Procedure, the Prison Act and social legislation. Papers on civil law include an essay by Heinrich Többen on “Farmer Eligibility under the Reich Farm Inheritance Act” (29/317), a characteristic piece of Nazi legislation. Papers on expert witness work can be divided into two groups. There is a general group comprising 10 articles on methodological problems in appraisals and on fundamental questions such as causality or expert witness professional confidentiality. There is also a more specific group of 39 articles concerned with the significance of medical opinions in specific questions.
Most of the published expert witness reports deal with an assessment of the causal relationship between physical injuries and preceding trauma. Thirteen of these relate to insurance medicine, and there are a further 4 case reports on simulation. Three of the case studies on simulated symptoms of illness include descriptions of experiences from the First World War, which in some cases attained renewed significance in the context of insurance assessments.

General legal medicine

The majority of papers on post-mortem inspection and forensic autopsy are concerned with the performance of these two basic post-mortem diagnostic techniques. Seven articles deal with domestic and foreign regulations on autopsy and the purpose and objectives of court-ordered forensic autopsies. A further 7 articles concern specific autopsy techniques. Additional investigations performed after autopsies span a wide spectrum of techniques and are accordingly found in a number of volumes. Most papers are concerned with either toxicology or histological examination. The 14 publications in this area include both microscopy techniques and forensically significant microscopy findings from histological examination of organ material. There are also 3 papers on bacteriological and 3 on biochemical analysis, 2 on X-ray examinations and 3 on a range of other techniques. Twelve papers discuss expert evaluation of autopsy results. Problems of competing causes of death and the capacity to act of the victim following various traumas are considered in a number of papers. Eight papers deal with exhumation, with repeated emphasis on its diagnostic value. Finally, in 1922 and 1936, several authors called for the introduction of coroner’s post-mortem examinations. Twenty-seven papers, almost half of all papers on thanatology, are concerned with early post-mortem changes.
Papers on the phenomenon of supravital reactions include studies on early post-mortem pupil changes (6/22, 6/32, 20/144), on tissue changes as a result of autolysis (3/359, 16/61), and individual studies on the pattern (11/317) and colour (13/261) of livor mortis. There are 17 articles on problems relating to rigor mortis, forming the largest group of articles relating to thanatology. The main topic considered is the physiology of rigor mortis and the lactic acid theory (2/1). There are also earnest discussions of the special case of catalepsy in both a detailed literature review (2/647) and a collection of case studies (3/349). In addition, there are 3 reports on cases of cataleptic spasm (3/357, 3/562, 13/13). There are a total of 28 papers on late post-mortem changes. Dictated by practical concerns, the dominant theme is decomposition of the body, with 7 papers primarily concerned with differentiating between putrefactive gases and air embolism. A series of publications (4/562, 6/650, 9/459) on this diagnostic difficulty was published by Berlin forensic pathologist Felix Dyrenfurth, whose earlier findings were nonetheless overlooked in a later study (6/379). There is also a paper on gas analysis in putrefying lungs (16/459). Three papers explore the aetiology of coffin birth, which at the time had not been definitively explained. There are 9 papers on preservative post-mortem changes and 5 papers on animal bites on bodies. In the field of thanatochemistry, there are 6 papers on the production and detection of putrefaction products such as sulphhaemoglobin and biogenic amines. The utility of individual post-mortem changes and the diagnostic value of stomach contents and stomach mucosa are discussed as methodologies for time of death estimation. A review summarising the current state of knowledge was penned by Hermann Merkel in 1930 (15/285).
The first papers on correlations between post-mortem rectal temperature and time of death appeared as early as the 1930s (28/172, 29/158, 31/256). With respect to the phenomenon of vital reactions, by far the largest number of papers are on general reactions. In particular, there are 19 papers on embolism, with the largest number of papers on air embolism, followed by fat embolism, and finally tissue embolism. There is one review devoted to the medico-legal significance of these types of embolism when presenting as a paradoxical embolism (23/338). The general vital reactions described include individual studies on haemorrhage, shock, aspiration and swallowing. There is relatively little on local vital reactions. In addition to 2 papers on thrombosis (15/330, 21/147), there are a variety of post-mortem biochemical analyses intended to identify vital changes in post-mortem blood, muscle tissue, bronchi, nerve cells and cerebrospinal fluid.

Forensic traumatology and pathology

Injuries from blunt force trauma are as varied as the mechanisms by which they are produced. It is not uncommon for external findings to shed light on the event that caused them. This is true for the shape of epithelial abrasions (37/33), the morphology of the wound margin (22/299) and “ischaemic impact patterns” (29/408). Scalp wounds also have practical diagnostic significance, and specific features of these wounds are presented in 3 papers. If we look at the distribution of violence by body region, records of head injuries are by far the most common. Of the 11 papers which focus specifically on the shape of skull fractures, one is concerned with the sequence of skull fractures (8/430) and one with the fracture mechanism in babies (32/461). The 30 papers on the wide variety of brain injuries range from concussion and intracranial haemorrhages to delayed consequences of brain injuries.
The mechanism of coup-contrecoup injuries is the most frequent subject, discussed in 7 papers, with some debate over Lenggenhager’s centrifugal theory as a potential mechanism (31/61, 31/278, 31/280). There are also several papers on the consequences of blunt force to the trunk and extremities. The most common subject, with 12 papers, is blunt abdominal trauma and resulting injuries to the parenchymal organs and gastrointestinal tract. There are detailed discussions of the difficulties of reconstructing events in the context of a fall in the mountains (28/90) and a fall from height (30/334). Boxing-related deaths form a special category, with 6 case studies published. The causes of death were intracranial haemorrhage (12/392, 16/341, 25/41), one case of “death from shock”, in the sense of a sudden cardiac event (1/481, 19/415), and suffocation due to acute laryngeal oedema (1/695). Finally, a further example of death from blunt force is the legendary case report “A new method of insurance fraud: the Tetzner case” (21/112) by Richard Kockel. Of publications on the consequences of sharp force, by far the largest number are concerned with stab wounds. Almost all of the 21 papers on the subject deal with knife wounds and the possibility of using the morphology of the wound to draw conclusions on the implement used or the events of the crime. Far less common are cuts. Five of 6 case reports are concerned with the question of whether a death has occurred as a result of murder or suicide in cases where the throat has been slit. A further small group of cases consists of 5 papers on wounds inflicted with slashing weapons, in which once again the focus is on the question of whether the injuries were inflicted by the victim or a third party. In addition to 2 collections of case studies (2/412, 22/407), there are case reports on 3 suicides involving a blow from an axe (20/64, 27/308, 28/422). A special form of penetrating trauma is bite wounds. 
Two of 3 papers on this subject are concerned with the value of bite wounds for identification purposes (16/89, 29/453). There are 4 case studies on criminal corpse dismemberment with cutting tools, 2 of which deal with medico-legal aspects of the serial murders by Karl Großmann (3/147) and Karl Denke (8/703). Other reports on high-profile murders committed using penetrating force include the widely reported sexually motivated murder of high school graduate Helmut Daube (14/158) and the serial murders by sadist Peter Kürten (17/247). There are 13 works on asphyxiation, including a number of unusual case studies and several experimental studies discussing morphological features diagnostic of asphyxiation. Gerhard Schrader penned a review (28/134) in which he attempted to produce a systematic overview of the diagnostic criteria for violent asphyxiation known at the time (the mid-1930s). Six of the 7 reports on homicides through manual strangulation concern crimes where the perpetrator had attempted to disguise the crime. One further paper deals with the significance of the carotid sinus reflex (Hering reflex) in strangulation. Following a comparative analysis of human corpses and experimental animal corpses, the author expresses the opinion “that a powerful stranglehold around the neck can lead to unconsciousness as a result of the carotid sinus reflex, without leaving localised signs of strangulation on the victim” (20/361). There are only 5 articles on ligature strangulation, with 3 case reports highlighting the possibility of self-strangulation. A report on a homicide triggered an expert debate on the carotid sinus reflex during ligature strangulation (15/419, 15/572). There are 25 case reports on hanging. The most significant from a pathophysiological perspective are 3 extensive studies (1/686, 4/165, 11/145) on the mechanism of death by hanging. Further insights into the dying process are provided by “Observations on Death in Executions by Hanging” (22/192) from 1917/18.
The vast majority of papers on death in water are concerned with drowning. These include 8 papers on the pathophysiology of drowning and 3 on morphological findings. Erich Fritz describes a new sign of death by drowning in the form of tears in the mucous membrane of the stomach (18/285), and Gyula Balázs’ collection of case studies represents the first report on “ischaemic impact patterns” after falling into water (21/515). Nine papers describe microscopic and biochemical techniques for diagnosing death by drowning. There are 11 papers concerned with post-mortem changes in bodies recovered from water and the phenomenon by which a body will float back up to the surface some time after death. There are individual treatises on the causes of atypical drowning and death when diving. An instructive overview of death by drowning (18/557) was produced by Gerhard Buhtz. Investigation of gunshot wounds necessitates close cooperation with forensic investigators, with the result that the section on ballistics includes a number of such publications from the forensic science field. From a medico-legal perspective, the focus is on the effects of and expert assessment of gunshots in humans. A key diagnostic objective is recognising and distinguishing entry and exit wounds. There are 17 papers on this topic. With respect to determining the range of fire, there are 13 articles on signs indicative of gunshot from close range, including Anton Werkgartner’s first paper in the Deutsche Zeitschrift on muzzle imprint in contact wounds (11/154). There is just one paper (23/375) on determining the direction of fire from the body. In terms of body region, the vast majority, 18 papers, deal with specific features of gunshot wounds to the skull, including one case of a Krönlein shot (1/141). There are just 5 articles on gunshots passing through the chest, causing fatal injuries to the heart and lungs.
There are 10 case reports on homicides involving firearms, including an interpretation of findings relating to the shots that killed Communist Party of Germany co-founder Karl Liebknecht (5/247). From 1940, there are 3 extensive reports on large-scale investigations into multiple gunshot wounds. Two legal medicine specialists working separately detail their findings in relation to the victims of Bloody Sunday in Bromberg (34/7) and “from the hostage processions in the Warthegau” (34/54). A forensic science approach is taken in reporting the results of an “Investigation into Polish Atrocities against Ethnic Germans” (34/90). Seven papers describe rarely used weapon systems (animal stunning equipment, shotguns, terzeroles, Karabiner 98k) and the characteristic wounds they produce. Finally, 3 reports on explosion injuries can also be considered to belong to this group of topics. These include just a single study on bomb and shrapnel injuries from hostilities during the Second World War (35/173). There are 16 papers on death from electricity, encompassing accidents, suicides and homicides. There are also articles on non-fatal electrical injuries and on animal experiments to investigate electricity-specific effects. There are 5 papers on the appearance and identification of electrical burn marks from the electrothermal effect on the skin. The state of knowledge at the time is summarised in a review by leading expert Stefan Jellinek, “On Electropathological Semiotics and Causality” (12/104), and by Friedrich Pietrusky on morphological findings after exposure to technical electricity (29/135). There is also a collection of case studies (32/407) concerned specifically with the consequences of lightning strike. Thermal damage caused by heat, with a focus on morphological findings in burns victims, is the subject of 5 publications. Hermann Merkel published a survey of diagnostic options for burnt corpses (18/232).
Of the other publications, there are 3 papers on microscopy findings in human corpses (19/293, 23/19, 23/281) and one study using animal experiments (20/445). There is also one paper on delayed death from burns (35/75) and one on the pathogenesis of scalding (3/401). In addition, there is a case report on the self-scalding of a psychiatric patient in a boiling pan (36/49). A collection of case studies discusses 3 homicides by burning (21/120). The disposal of corpses by burning is the subject of a further 2 papers, and there is also an article entitled “Self-immolation of the Human Body” (18/437) which takes a historical-critical approach. Two papers deal with injuries caused by cold. One reports on the clinical progression of fatal hypothermia (18/270), while the other reports on results from animal studies on general hypothermia (30/199). Physical changes in the corpse after starvation are described in detail in the context of a death in connection with a fasting regime (6/520). Reports on single or multiple deaths involving various types of external violence are not easily assigned to any of the above groups. These 9 papers are therefore recorded separately. In addition, there are two collections of case studies on fatal sport-related accidents occurring in the course of a wide range of different sport activities. Table shows a summary of the number of publications on all types of violent death.

Forensic obstetrics as a speciality is concerned with forensic medical problems relating to pregnancy and childbirth and to the physical condition of neonates. The first group of 14 papers deals with health risks posed by mechanical contraceptive devices, pregnancy testing, complications of pregnancy and indications for terminating pregnancy. Eleven articles on complications of childbirth affecting mother or child are primarily focused on reliably distinguishing spontaneous injury from the results of criminal acts.
Six papers on equivocal findings in neonates concern the same objective. Section 90 of the German Code of Criminal Procedure (StPO), which sets out the regulations on autopsies on neonates, raises several questions for the pathologist performing the autopsy. These are discussed in a total of 38 publications. One recurring theme is determining whether an infant was alive at birth, whereby in addition to a number of papers on the lung float test (2/31, 2/267, 5/47, 6/5, 14/7), a cord vessel test (36/21) is cited as a novel method for answering this question. The then Sect. 218 of the German Criminal Code (StGB), which dealt with termination of pregnancy, meant that from the 1920s to the 1940s abortion was one of the primary issues in forensic obstetrics. Consequently, the 30 publications on the methods and consequences of illegal procedures on pregnant women represent a further major topic for the volumes of the Deutsche Zeitschrift examined by this review. While doctors were motivated to oppose illegal abortion because of the health risks for the pregnant women, the Nazi era saw the focus of opposition shift to population policy objectives. Thus we have a 1940 publication entitled “The Fight against Abortion as a Political Task” (32/226). There were 18 publications on infanticide as defined in Sect. 217 of the German Criminal Code (StGB). These illustrate the difficulties of forensic diagnostics, which consisted primarily of using morphological features to distinguish maternal attempts to facilitate delivery from actions performed with the intention of killing the child. Publications on medical malpractice are concerned with medical interventions in both surgical and non-surgical specialities. The most common topic is reports on adverse drug events, of which there are 10 cases. There are 4 reports on anaesthesia-related adverse events during surgery and an equal number alleging post-operative medical errors.
There are also 3 case reports on radiation injuries, 2 on complications during diagnostic procedures and 2 on vaccine-related injuries. There is also a review tackling the subject of the “History and Concept of Malpractice” (20/161), primarily from a legal perspective. With respect to papers on disease and sudden unexpected natural death, there is a dramatic imbalance in the number of cases concerning the two age groups generally analysed. While there are 65 publications on adults, there are only 13 papers on sudden unexpected natural death in children. The most frequently cited groups of fatal diseases in adults are diseases of the central nervous system, on which there are 15 publications, and cardiovascular disease, with 13 articles. There were far fewer reports on other groups of fatal diseases. There are 3 reports on deceased children, each of which describes malformations or infections as the cause of death. Other publications on childhood consist of individual case reports on children with rare diagnoses and reports on experience gleaned from large numbers of autopsies.

Forensic toxicology

The largest group of publications in this subject area consists of descriptions of the clinical features of various types of poisoning, on which there are 125 papers. In addition to 9 collections of case studies on various poisons and 3 cases of combined poisoning, there are 68 papers on inorganic poisons and 45 on organic poisons. Of the inorganic poisons, by far the most widely discussed are carbon monoxide, on which there are 13 papers, arsenic, on which there are 9, and thallium, on which there are 7. Of the organic poisons, there are notable clusters of 17 papers on plant toxins and 13 reports on pharmaceutical poisoning. With respect to illegal drugs, widespread today, there is just one article on the suicide by cocaine of a male addict (4/40) and one on fatal heroin poisoning in a cocaine-dependent individual who mistook the drug for cocaine (12/112).
The identification of pathological changes in tissue and organ anatomy plays a role in the diagnosis of poisoning, though it does have some natural limitations. Of 28 papers on this subject, primarily on organ toxicity, 13 are concerned with the central nervous system, which is particularly vulnerable to poisoning. There are also 2 articles on characteristic morphological changes in specific types of poisoning. These are Mees’ lines and coma blisters, both first described in the Deutsche Zeitschrift (34/307). The second, smaller group on pathological anatomical changes consists of 8 reports on histological changes in various organs caused by one or more poisons. The gold standard for diagnosing poisoning is toxicological analysis. In the period under review, 68 papers were published on this subject, of which 59 are concerned with the detection of a single poison or group of substances. As might be expected given the prevalence of carbon monoxide poisoning in that era, by far the most common, with 16 examples, are papers on various techniques for detecting CO or CO-Hb. Of groups of substances, alkaloid detection is described in 4 papers. In contrast to analysis of specific substances, there are 9 papers on the technical details of a variety of analytical methods, including spectrophotometry (1/411) and microscopy-based melting point determination (34/353). One area of toxicology which has evolved into its own extensive speciality is the toxicology of alcohol. Of 33 publications on alcohol metabolism, 26 studies are concerned with factors which might affect the absorption, distribution and elimination of alcohol, such as age, trauma and chemicals. Just 5 publications focus on the correlation between blood alcohol concentration and the effects of alcohol. These include a review entitled “On the Terminology and Forensic Assessment of Alcoholic Intoxication” (14/296), which outlines the state of knowledge in 1930.
The only paper on fatal alcohol poisoning (33/44) is primarily concerned with developmental peculiarities of alcohol clearance in children. There are 2 papers specifically on methanol poisoning. Thirty-one publications deal with methods for determining alcohol concentration in body fluids and tissues. In addition to two reviews (10/377, 11/134), there are papers on individual methods, including the Nicloux method (18/638) and the Widmark method (19/513). Between these papers and the end of the war, there are 18 further studies on Widmark’s micromethod. All of these papers deal with subtle technical points for overcoming shortcomings in the method and eliminating sources of error.

Identification of unknown decedents

There are a total of 21 publications on then common medico-legal identification methods. Six publications in the field of osteology discuss ways of differentiating between species, sex diagnosis and age estimation from skeletal remains. One of these papers, which takes a detailed look at the forensic medical significance of the humerus, includes basic data on several identification procedures (22/332). There are also 4 papers specifically on skull identification, 2 of which include detailed descriptions of techniques for the superprojection of an unknown skull onto a photograph of a missing person (20/33, 27/335). Five papers describe odontological features which can be used for identification, and 2 linked articles discuss identifying features visible using radiological techniques. A case study on the particular utility of the frontal sinuses for identification (7/625) was followed by the first paper on X-ray comparison in the German-language literature (10/81). There are 2 papers dealing with methods for specific investigatory queries and combinations of methods required for problem cases.

Forensic genetics

Table shows a summary of the number of publications on forensic genetics.
The papers on the topic of blood groups cover test methods, prevalence, distribution and inheritance. The first paper on blood groups to appear in the Deutsche Zeitschrift was published in 1924 and was entitled “On the Forensic Significance of Isoagglutination of Red Blood Cells in Humans” (3/42). The papers on test methods discuss the production, use and storage of test sera and other reactants. The first publication on the prevalence and distribution of A subgroups in the Deutsche Zeitschrift was by Karl Landsteiner, the discoverer of the main blood groups. Another pioneer of blood group research, Berlin serologist Fritz Schiff, published the first paper on “The Medico-legal Significance of Landsteiner and Levine’s M and N Serological Properties” (18/41). Seventeen further articles on the M, N and P blood group systems followed. The publications on the inheritance of blood groups deal with Bernstein’s theory and the inheritance model devised by von Dungern and Hirszfeld. There is also one experimental study on the secretion of blood group substances (28/234), the results of which confirmed that secretor status is inherited in a dominant manner. The practical use of identificatory blood groups is the subject of medico-legal trace evidence analysis. The contemporary arsenal of methods necessitated a hierarchical sequence of examinations: test for the presence of blood – determine species specificity – determine blood group. Four articles describe peroxidase and crystal tests for blood detection. These tests vary in specificity and, depending on the method, should be regarded as presumptive only. The Uhlenhuth test has been used to determine species specificity since 1901, predominantly to distinguish whether trace material is human or animal. The precipitation principle is the subject of 11 publications, and there are a further 2 papers on alternative serological methods for differentiating between species.
The discovery of human blood groups was soon followed by the development of the first methodical approach to the individuation of bloodstains. Landsteiner and Richter’s agglutination test represents the beginning of the use of inherited blood groups for forensic purposes. The small quantities and ageing of trace material require modifications to test techniques. To achieve optimum results under these conditions, Leone Lattes introduced the coverslip method (9/402) and Franz Josef Holzer further refined Schiff’s agglutinin binding test (16/445). As the publications on this topic show, of the then known blood group systems, forensic trace evidence analysis determined groups of both the AB0 and MN systems. Other publications deal with specific trace evidence-related issues. These include detecting menstrual blood by using its inhibitory effect on lupin seedlings and on yeast fermentation, sex diagnosis of bloodstains using colour reactions, using solubility and enzyme activity to evaluate the age of traces and using colorimetry to estimate the amount of blood. Detection methods based on characteristic constituents of bodily fluids were developed for various bodily fluid traces. Epithelial glycogen content can be used to identify vaginal secretions (4/1). There are 4 papers on sperm detection, including a new microscopy-based detection method using Baecchi’s stain (27/143). There is also a review describing a number of older tests for detecting dried saliva (11/211). At the time, it was possible to determine AB0 groups both from semen and from other bodily fluids (23/186). The majority of papers on hair traces are concerned with using morphological features for species differentiation. Other topics include determining developmental stage and sex, and demonstrating damage to the hair. The papers on traces of faeces discuss serological identification based on coliform bacteria and a presumptive assignment of blood group.
In the period under review, family relationship testing relied on fertility testing, assessment of the period of gestation, a comparison of morphological characteristics and blood typing. There is one paper on fertility testing (26/64) and one on the legal and medical problems involved in assessing the period of gestation (14/11). The papers on hereditary biology deal with various physical characteristics for assessing similarity and with dactyloscopic criteria. By far the largest number of articles are on the results of blood group testing. As well as helping to clarify cases of disputed paternity, blood group analysis also served as evidence in a case of babies switched at birth (25/79). There is also a report by a veterinarian on “Blood Groups and Paternity Determination in Horses” (25/231).

Scientific-technical criminalistics

There are numerous areas of overlap between legal medicine and scientific-technical criminalistics. In the period under review, these went well beyond the kindred forensic biology role performed by the two disciplines. Table shows a summary of the number of publications on scientific-technical criminalistics. In his article “Institutes of Legal Medicine and Criminalistic-Technical Activities Performed within Them” (14/411), Martin Nippe explicitly advocates for legal medical institutions to take over criminalistic tasks. His argument that the courts and police lacked the necessary expertise seems entirely reasonable in light of advances in forensic technology in Germany at the time. The urgency of this need is demonstrated by a collection of case studies from Göttingen College of Legal Medicine (14/26). It features cases which indisputably fall within the purview of forensic technology. One field in which there were particular deficits was theory of forensics, on which only a single paper (34/404) was published.
This paper, by distinguished Berlin criminalist Hans Schneickert, deals with the key insight that the prevalence of an identifying feature determines its value for identification purposes. The picture is very different for the science-focused sub-disciplines of forensic technology. Almost exclusively as a result of research performed at institutes of legal medicine, a considerable body of knowledge had already been amassed in the field of forensic biology. In Germany, this sub-specialism was not an established academic discipline in the period under review, so that in the majority of cases the investigation of biological trace evidence was contracted to institutes of legal medicine. Accordingly, the theory behind many of the methods used in forensic genetics is equally applicable in the field of forensic biology. Twelve publications relate to the key criminalistic task of finding and securing bloodstains at a crime scene and on objects. Two studies on the morphology of bloodstains deal with fundamental insights into blood spatter patterns (16/272) and drip patterns (22/387). A further 2 papers are concerned with non-human material. Of the publications on forensic chemistry, there are 8 papers on chemical analysis for the purpose of determining the cause of fire. In relation to 2 attempted fatal poisonings, there is a report on demonstrating the presence of poison in food. In addition, there are papers on the chemical/physical chemical analysis of glass dust, metal, wax, paint and paper (one paper on each). Of the technical sub-specialisms, by far the largest number of papers in this area are dedicated to ballistics. Of these, 20 papers deal with different methods for determining firing range, making this a major thematic focus in this area. The second large group is made up of 11 publications on firearm identification from firearm trace evidence.
There is just a single publication each on determining bullet trajectory in long-range shots and on evidence of the firing hand. The objectives of forensic writing examination are to determine the authenticity or otherwise of a document and to identify the person responsible for a piece of handwriting or typescript. Nine papers are concerned with identifying an author based on individual handwriting features. Three papers look at criteria for detecting falsified handwriting and typescript. One paper looks at typewriter identification and one at determining the age of a piece of handwriting or typescript. There is also a critical analysis which calls for proof of competence to be required before a court accepts the evidence of a writing expert (27/364). The publications on document examination are concerned with the examination of paper scraps, betting slips and paintings. One of the main areas of application of dactyloscopy is the comparative examination of fingerprints to identify individuals. Five papers describe the conditions required and evaluation criteria for fingerprint identification. There are 3 articles (10/372, 22/54, 27/323) on methods for visualising latent fingerprints. Two articles examine fingerprint identification in dead bodies in an advanced stage of decomposition (13/256, 29/426). The field of shape traces deals with various impressions on a wide range of media. The publications in this field deal with tool, shoe and bite impressions and methods for preserving a trace. In the field of forensic photography, it is essential that images are informative and authentic, as they constitute evidence admissible in court. The publications in this field therefore deal with photographic and technical requirements for producing usable photographs. The sole work on odorology is a case report on the evidentiary value of olfactory traces of specific wood shavings (6/1).
The papers on investigative techniques deal with the application of UV light, spectrography and X-rays.

Clinical legal medicine

There are two papers on age estimation in living subjects, one exploring diagnostic criteria in minors and one exploring criteria in adults. Leningrad author W. A. Nadeshdin produced a detailed discussion of features of the face and dentition useful for age estimation in adults (6/121). In a second work on age estimation in minors, he proposes growth, tooth eruption and signs of sexual maturity as a reliable combination of features for this purpose (8/273). The papers analysed include just 2 early reports on drug addiction. The first, a case study (12/285) on a pharmacist’s multiple drug use with a preference for chloroform, was published in 1928. The second report (35/17) from 1941 discusses the medical history of a patient who, following morphine addiction, had developed pethidine addiction. A detailed discussion of the legal and social significance of child abuse (13/159) is the only publication on this subject. The investigation of self-inflicted injury always aims to reveal the motivation. The 2 published cases are both cases of attempted insurance fraud, in one by severing the left thumb (23/352), in the other through many years of self-harm involving repeated injury to the left arm (38/244).

Forensic psychiatry and psychology

The vast majority of publications on this topic are concerned with psychopathology. Of these, most are concerned with individual forensically significant psychopathological phenomena, discussed in 26 articles. There are 11 papers specifically on the forensic significance of psychiatric disorders. These include various psychoses, paralysis and dementia. Four case studies consider previous trauma in relation to the assessment of psychotic disorders. There are 3 papers on the principles and diagnostic value of graphology.
There are a further 3 case reports on the psychopathology of prominent figures, including the sensational case of railroad killer Szilveszter Matuska (20/53). Of a paper by Roland Freisler, then Secretary of State in the Reich Ministry of Justice, entitled “The Psychology Underlying Polish Atrocities, Illustrated through the Development of the Polish National Character”, only the title was reproduced (34/7). The paper takes a social psychology approach, and the full text was published in a legal journal. A themed group of 13 papers covers various subjects in the field of the psychology of testimony. Topics include the impairment of memory capacity, in particular through hypnosis and suggestion, recognising false testimony and assessing testimony given by children and adolescents. The interdisciplinary field of suicidology is treated here only within the selected context of forensic psychiatry and psychology. The publications classified as belonging to this field are therefore primarily concerned with motives and risk factors for suicide. There are 14 articles on these topics, which, in addition to the sequelae of diseases and head injuries, discuss individual constitution as a causal factor. Epidemiological surveys on suicide in Russia, in Finland and in Berlin are the subject of 3 papers. In a paper entitled “Contribution on the Psychology of Suicide” (12/346), the phenomenon of imitation is discussed in the context of a popular suicide location.

Sexual medicine

Of the many and varied references to human sex life subsumed into the category sexual medicine, one extensive group is the paraphilias (sexual perversions in the nomenclature of the time). Topics covered by the 23 publications in this area range from sexually motivated self-mutilation, to rare potentially criminal forms such as fetishism, to clearly criminal acts such as incest and paedophilia.
There is also one article calling, as part of a pending reform of criminal law, for the abolition of Sect. 175 of the German Criminal Code (StGB), which at the time made consensual sexual acts between men a criminal offence. The first appeals for sterilisation for deviant sexual behaviour were published as early as the 1920s. The earliest work on “Castration and Sterilisation as a Means of Treatment” (3/162) from 1924 is concerned with applying these measures both in the event of a “mental disorder triggered by functions of the sexual organs” and for the elimination of “a criminal activity”. A later article (14/432) from 1930 rejects the idea of surgical castration of sex offenders, but advocates radiation treatment of recidivist sex offenders. These two publications are followed by 9 further papers, all relating to the “Law against Dangerous Habitual Criminals and on Measures of Reform and Prevention” (Gesetz gegen gefährliche Gewohnheitsverbrecher und über Maßregeln der Sicherung und Besserung) of November 24, 1933 and in particular to the newly introduced measure “castration of dangerous sex offenders” in Sect. 42a (5) of the German Criminal Code (StGB). The last of these papers, entitled “Observations on Castrated Sex Offenders” (33/248), concludes with the assertion that, “the success of castration has far exceeded expectations.” Two papers discuss the physical consequences of castration, and a further 2 papers discuss fertility in the context of the Law for the Prevention of Offspring with Hereditary Diseases (Gesetz zur Verhütung erbkranken Nachwuchses). There are 7 papers on hymen examination. These publications illustrate the extraordinary variability of the healthy hymen and, to an even greater extent, of the pathologically altered hymen. Depending on shape and elasticity, where sexual violence is suspected, consideration must be given to the possibility of vaginal penetration without rupture of the hymen (14/265).
Fatal sexual activity without a sexual partner is the common element uniting the phenomenon of autoerotic fatalities. Classification of a death as an autoerotic fatality is solely dependent on the circumstances at the site of discovery and is independent of the cause of death. Of 6 cases reported, 3 were deaths by hanging; among the remainder, the cause of death could be determined in only one case. Other sexual medicine topics are treated in 3 papers on male sexual dysfunction and 2 publications on intersexuality.

Traffic medicine

Almost all publications in this field are concerned with the causes and consequences of injuries incurred in road traffic accidents. These include 7 papers on the biomechanics of road traffic accidents. Injuries to the head and extremities are discussed with respect to their value for investigative purposes. Two papers discuss trace evidence after road traffic accidents and its use in accident reconstruction. A further 7 articles are concerned with the effects of alcohol on driving performance and the resulting accident risk. Two papers are concerned with other causes of accidents. Just one publication, titled accordingly, discusses the important question in rail traffic accidents of whether a victim was “Alive or Dead when Run over by a Train” (25/147).

Social medicine

There are just 11 publications relating to social medicine. Seven articles are primarily concerned with occupational problems relating to diseases with social sequelae. There are also 3 articles concerned with public health and a positively classic social medicine paper entitled “The Effect of Social Situation on the Health Status of Welfare Recipients” (20/535).

Criminology

Of the sub-specialisms historically forming part of the field of criminology, 19 publications on criminal biology, originally known as criminal anthropology, make it the best represented.
Six papers in the then emerging research field of “blood groups and criminality” (9/426) form part of a series with 6 other papers on constitutional biology, most of which are concerned with identifying physical signs of a tendency to criminal activity. A prime example of forensic genealogical research is a case study entitled “A Criminal Family” (25/7). The increased focus on criminal biology, while pre-dating 1933, reaches its climax in the Nazi era. On this subject, Theodor Viernstein, head of the Bayerische kriminalbiologische Sammelstelle (Bavarian Criminal-Biological Collection Centre), wrote, “The Law for the Prevention of Offspring with Hereditary Diseases of July 14, 1933 and the Law on Measures against Dangerous Habitual Criminals of November 23, 1933 are, for our question, two decisive landmarks on this road to the renewal of the living race of our people. What until the seizure of power remained, in the minds of scientists, psychiatrists, hereditary biologists and racial hygienists, a mere hope for the future, was given, at the very moment that the people faced their greatest hardship, practical form and legislative expression” (26/3). There are 12 publications in the field of criminal psychology. In addition to a review on the concept and scope of the discipline from a 1940s perspective (36/119), there are 9 articles dealing with the psychology of specific offences, the most common of these being arson, to which 4 articles are dedicated. Finally, there are 2 papers discussing crimes committed under hypnosis. Criminal phenomenology encompasses publications on the phenomenology of various groups of offences. The only group on which multiple papers are published is youth criminality. There are 5 papers on this subject, in one of which the supposedly modern term “hooliganism” is used in a wide-ranging sense (12/361). Heroin misuse, already widespread in the mid-1920s, is used to illustrate the consequences of drug addiction (8/81).
Other topics include homicide, sexual offences and arson. The 3 papers in the field of victimology look at the physical and psychological sequelae experienced by female minors who have been the victim of a sexual offence. There are 2 publications on crime prevention, both discussing the basic idea that social welfare is an important component of crime prevention. In the criminal aetiology field, there is one article on the effects of war on criminality (1/697) and another on “Reading as a Stimulus to the Commission of Crime” (13/209). Of 20 publications on prisons and imprisonment, 8 concern self-harming by prisoners. The incidents described range from inducing symptoms of illness and swallowing foreign objects, to significant self-mutilation. A number of papers are also dedicated to the topic of illness and its treatment in prisoners. The 6 articles on this issue are primarily concerned with physical illness. Four papers tackle the subject of the role of medical personnel in prisons. Two further articles discuss the particular problems faced by adolescents in prison (4/121) and support for prison leavers in Germany (11/423). In keeping with the broad-based concept underpinning the Deutsche Zeitschrift, which extends well beyond the narrow confines of legal medicine, in addition to core legal medicine topics, our analysis of the content also covers topics from related forensic science fields. Other topics covered include historical and epistemological matters, plus laudations, obituaries, etc. The classification on which the analysis is based and an overview of the number of publications on individual topics over the period under review are shown in Table . Legal medicine is one of the oldest medical specialities. Its position on the boundary of medicine and law means that it has historical links to a wide range of other disciplines (24/1). The literature on the history of legal medicine does not reflect this variety.
This deficiency, as he saw it, prompted Ferdinand von Neureiter to call for collaboration on a “History of German Legal Medicine as a Subject for Research and Teaching” (28/60). The interdisciplinary nature of legal medicine means that, over the years, the evolution of the subject has often been closely scrutinised. In 1924, Willy Vorkastner spoke at length on “The Status and Role of Legal Medicine” (5/89). In two key articles from 1928, both entitled “Old and New Directions in Legal Medicine” (11/1, 11/14), Richard Kockel and Fritz Reuter explored the different directions in which the discipline was evolving. The proceedings of the scientific association, which were printed in Volume 1 and later at irregular intervals in the Deutsche Zeitschrift, offer a useful overview of topical issues in legal medicine. Between 1933 and 1940, the journal published edited versions of talks at the 21st and 24th to 29th congresses of the scientific association. On many occasions, these expert gatherings were accompanied by political declarations reflecting the zeitgeist. As chairman of the German Society for Legal and Social Medicine, Berthold Mueller, at the time a Professor in Göttingen, concluded his opening speech at the 26th congress in spring 1937 (29/133) with the words, “As with all collective endeavours, today we would like to turn our thoughts to the man who, despite differences of opinion large and small, beyond parties and hatred, has brought the German people together to form a unified whole. We are conscious that, in our academic work too, we are answerable to him and to the German people. Heil Hitler!”. The opening speech at the 28th congress in spring 1939 (32/191) was given by Arthur Gütt, Undersecretary at the Reich Ministry of the Interior. He gave delegates a detailed elucidation of the Health Service Standardisation Law of July 3, 1934. The Law replaced district medical officers with Health Authorities which, under Sect.
3(1)(III) of the Law, were made responsible for all legal medicine activities. As one advantage of this reform, Gütt emphasised the close relationship between the role performed by legal medicine and “Erb- und Rassenpflege” (“genetic and racial hygiene”). The welcome message delivered by Gerhard Buhtz, Chairman of the Society and a Professor at Breslau, at the 29th congress in spring 1940 (34/1), was also driven by the prevailing zeitgeist. In conjunction with a declaration of loyalty to the Nazi leadership and the war goals, he discussed endeavours to link together legal medicine and forensic science. In keeping with this trend, papers on requirements for university teaching focus particularly on teaching for detectives. Endeavours to achieve effective collaboration between legal medicine specialists, forensic scientists and lawyers are a key topic in Volume 18. Articles range from the role of legal medicine specialists at crime scenes, to issues relating to training, to the practice of law enforcement. From the beginning, the Deutsche Zeitschrift served as the official journal of the scientific association. Leading personalities of the discipline were therefore celebrated on notable birthdays or in obituaries. In addition, a number of commemorative issues were dedicated to Albin Haberda (1/581), Richard Kockel (5/3), Carl Ipsen (7/137), Ernst Ziemke (10/147), Fritz Strassmann (12/1), Emil Ungar (14/3) and Hermann Merkel (21/57). Within the category legal issues, the most popular subject by far is criminal law and criminal procedure. The largest group is made up of 13 papers from the 1920s, in which medics and criminal lawyers discuss the oft-revised draft of the Allgemeines Deutsches Strafgesetzbuch (German General Criminal Code). With respect to the General Part of the Criminal Code in force at the time, most notable are articles on guilt and criminal responsibility.
The majority of discussions relating to the Special Part of the Criminal Code deal with clauses on violent and sexual offences. Of the papers concerned with criminal procedure, 3 of 7 deal with police interrogation. The period after 1933 sees the publication of 10 works explicitly referring to “Nazi criminal legislation”. There are two reviews exploring the realignment of criminal law principles. In one, a senior public prosecutor from Stendal states, “We absolutely may view the law as representing binding principles and at the same time an order from the government as to how things should be arranged. The legal system thus becomes a system of formal orders from the Führer. Criminal legislation in particular acquires meaning and purpose in the great work of the new concept of the ethnic body politic, which began with detoxification, purification and healing and progresses to defence and education” (24/8). The second review was penned by Berthold Mueller, who analysed some of the individual provisions of the future criminal law and criminal procedure. Medico-legal facets of his review include such problems as the principle of strict liability, whereby the harmful consequences of a criminal act are of less significance than the criminal will of the offender; Rasseverrat (“racial betrayal”), which asks forensic medical specialists to decide whether the identification of certain non-German characteristics in a child can be used to conclude with sufficient certainty that sexual intercourse with someone from “another race” has taken place; and, his final example, euthanasia: “The proposals set out in the Prussian memorandum [“Nazi criminal law”] would also make possible the destruction of lebensunwertes Leben (“life unworthy of life”). I do not believe any objections to this can be raised from either a racial or medical point of view” (24/114).
Other articles on Nazi criminal law deal with other far-reaching new legislation, such as the Gesetz zur Verhütung erbkranken Nachwuchses (Law for the Prevention of Offspring with Hereditary Diseases) or the Verordnung gegen Volksschädlinge (roughly translatable as Regulation against Parasites on the Body of the People). Articles on other areas of law are primarily concerned with family law and medical law. A review entitled “Legal Issues in Road Traffic Accidents” (28/30) deals with current legislation and case law. There are also isolated essays on the Code of Civil Procedure, the Prison Act and social legislation. Papers on civil law include an essay by Heinrich Többen on “Farmer Eligibility under the Reich Farm Inheritance Act” (29/317), a characteristic piece of Nazi legislation. Papers on expert witness work can be divided into two groups. There is a general group comprising 10 articles on methodological problems in appraisals and on fundamental questions such as causality or expert witness professional confidentiality. There is also a more specific group of 39 articles concerned with the significance of medical opinions in specific questions. Most of the published expert witness reports deal with an assessment of the causal relationship between physical injuries and preceding trauma. Thirteen of these relate to insurance medicine, and there are a further 4 case reports on simulation. Three of the case studies on simulated symptoms of illness include descriptions of experiences from the First World War, which in some cases attained renewed significance in the context of insurance assessments. The majority of papers on post-mortem inspection and forensic autopsy are concerned with the performance of these two basic post-mortem diagnostic techniques. Seven articles deal with domestic and foreign regulations on autopsy and the purpose and objectives of court-ordered forensic autopsies. A further 7 articles concern specific autopsy techniques. 
Additional investigations performed after autopsies span a wide spectrum of techniques and are accordingly found in a number of volumes. Most papers are concerned with either toxicology or histological examination. The 14 publications in this area include both microscopy techniques and forensically significant microscopy findings from histological examination of organ material. There are also 3 papers on bacteriological and 3 on biochemical analysis, 2 on X-ray examinations and 3 on a range of other techniques. Twelve papers discuss expert evaluation of autopsy results. Problems of competing causes of death and the capacity to act of the victim following various traumas are considered in a number of papers. Eight papers deal with exhumation, with repeated emphasis on its diagnostic value. Finally, in 1922 and 1936, several authors called for the introduction of coroner’s post-mortem examinations. Twenty-seven papers, almost half of all papers on thanatology, are concerned with early post-mortem changes. These papers cover the phenomenon of supravital reactions, with studies on early post-mortem pupil changes (6/22, 6/32, 20/144), tissue changes as a result of autolysis (3/359, 16/61), and individual studies on the pattern (11/317) and colour (13/261) of livor mortis. There are 17 articles on problems relating to rigor mortis, forming the largest group of articles relating to thanatology. The main topic considered is the physiology of rigor mortis and the lactic acid theory (2/1). There are also earnest discussions of the special case of catalepsy in both a detailed literature review (2/647) and a collection of case studies (3/349). In addition, there are 3 reports on cases of cataleptic spasm (3/357, 3/562, 13/13). There are a total of 28 papers on late post-mortem changes. Dictated by practical concerns, the dominant theme is decomposition of the body, with 7 papers primarily concerned with differentiating between putrefactive gases and air embolism.
A series of publications (4/562, 6/650, 9/459) on this diagnostic difficulty was published by Berlin forensic pathologist Felix Dyrenfurth, whose earlier findings were nonetheless overlooked in a later study (6/379). There is also a paper on gas analysis in putrefying lungs (16/459). Three papers explore the aetiology of coffin birth, which at the time had not been definitively explained. There are 9 papers on preservative post-mortem changes and 5 papers on animal bites on bodies. In the field of thanatochemistry, there are 6 papers on the production and detection of putrefaction products such as sulphhaemoglobin and biogenic amines. The utility of individual post-mortem changes and the diagnostic value of stomach contents and stomach mucosa are discussed as methodologies for time of death estimation. A review summarising the current state of knowledge was penned by Hermann Merkel in 1930 (15/285). The first papers on correlations between post-mortem rectal temperature and time of death appeared as early as the 1930s (28/172, 29/158, 31/256). With respect to the phenomenon of vital reactions, by far the largest number of papers are on general reactions. In particular there are 19 papers on embolism, with the largest number of papers on air embolism, followed by fat embolism, and finally tissue embolism. There is one review devoted to the medico-legal significance of these types of embolism when presenting as a paradoxical embolism (23/338). The general vital reactions described include individual studies on haemorrhage, shock, aspiration and swallowing. There is relatively little on local vital reactions. In addition to 2 papers on thrombosis (15/330, 21/147), there are a variety of post-mortem biochemical analyses intended to identify vital changes in post-mortem blood, muscle tissue, bronchi, nerve cells and cerebrospinal fluid. Injuries from blunt force trauma are as varied as the mechanisms by which they are produced.
External findings are often able to shed light on the event that caused them. This is true for the shape of epithelial abrasions (37/33), the morphology of the wound margin (22/299) and “ischaemic impact patterns” (29/408). Scalp wounds also have practical diagnostic significance, and specific features of these wounds are presented in 3 papers. If we look at the distribution of violence by body region, reports of head injuries are by far the most common. Of the 11 papers which focus specifically on the shape of skull fractures, one is concerned with the sequence of skull fractures (8/430) and one with the fracture mechanism in babies (32/461). The 30 papers on the wide variety of brain injuries range from concussion and intracranial haemorrhages to delayed consequences of brain injuries. The mechanism of coup-contrecoup injuries is the most frequent subject, discussed in 7 papers, with some debate over Lenggenhager’s centrifugal theory as a potential mechanism (31/61, 31/278, 31/280). There are also several papers on the consequences of blunt force to the trunk and extremities. The most common subject, with 12 papers, is blunt abdominal trauma and resulting injuries to the parenchymal organs and gastrointestinal tract. There are detailed discussions of the difficulties of reconstructing events in the context of a fall in the mountains (28/90) and a fall from height (30/334). Boxing-related deaths form a special category, with 6 case studies published. The causes of death were intracranial haemorrhage (12/392, 16/341, 25/41), “death from shock” in the sense of a sudden cardiac event (1/481, 19/415), and suffocation due to acute laryngeal oedema (1/695). Finally, a further example of death from blunt force is the legendary case report “A new method of insurance fraud: the Tetzner case” (21/112) by Richard Kockel. Of publications on the consequences of sharp force, by far the largest number are concerned with stab wounds.
Almost all of the 21 papers on the subject deal with knife wounds and the possibility of using the morphology of the wound to draw conclusions on the implement used or the events of the crime. Far less common are cuts. Five of 6 case reports are concerned with the question of whether a death has occurred as a result of murder or suicide in cases where the throat has been slit. A further small group of cases consists of 5 papers on wounds inflicted with slashing weapons, in which once again the focus is on the question of whether the injuries were inflicted by the victim or a third party. In addition to 2 collections of case studies (2/412, 22/407), there are case reports on 3 suicides involving a blow from an axe (20/64, 27/308, 28/422). A special form of penetrating trauma is bite wounds. Two of 3 papers on this subject are concerned with the value of bite wounds for identification purposes (16/89, 29/453). There are 4 case studies on criminal corpse dismemberment with cutting tools, 2 of which deal with medico-legal aspects of serial murders by Karl Großmann (3/147) and Karl Denke (8/703). Other reports on high-profile murders committed using penetrating force include the widely reported sexually motivated murder of high school graduate Helmut Daube (14/158) and serial murders by sadist Peter Kürten (17/247). There are 13 works on asphyxiation, including a number of unusual case studies and several experimental studies discussing morphological features diagnostic of asphyxiation. Gerhard Schrader penned a review (28/134) in which he attempts a systematic overview of the diagnostic criteria for violent asphyxiation known at the time (the mid-1930s). Six of the 7 reports on homicides through manual strangulation concern crimes where the perpetrator has attempted to disguise the crime. One further paper deals with the significance of the carotid sinus reflex (Hering reflex) in strangulation.
Following a comparative analysis of human corpses and experimental animal corpses, the author expresses the opinion “that a powerful stranglehold around the neck can lead to unconsciousness as a result of the carotid sinus reflex, without leaving localised signs of strangulation on the victim” (20/361). There are only 5 articles on ligature strangulation, with 3 case reports highlighting the possibility of self-strangulation. A report on a homicide triggered an expert debate on the carotid sinus reflex during ligature strangulation (15/419, 15/572). There are 25 case reports on hanging. The most significant from a pathophysiological perspective are 3 extensive studies (1/686, 4/165, 11/145) on the mechanism of death by hanging. Further insights into the dying process are provided by “Observations on Death in Executions by Hanging” (22/192) from 1917/18. The vast majority of papers on death in water are concerned with drowning. These include 8 papers on the pathophysiology of drowning and 3 on morphological findings. Erich Fritz describes a new sign of death by drowning in the form of tears in the mucous membrane of the stomach (18/285), and Gyula Balázs’ collection of case studies represents the first report on “ischaemic impact patterns” after falling into water (21/515). Nine papers describe microscopic and biochemical techniques for diagnosing death by drowning. There are 11 papers concerned with post-mortem changes in bodies recovered from water and the phenomenon by which a body will float back up to the surface some time after death. There are individual treatises on the causes of atypical drowning and death when diving. An instructive overview of death by drowning (18/557) was produced by Gerhard Buhtz. Investigation of gunshot wounds necessitates close cooperation with forensic investigators, with the result that the section on ballistics includes a number of such publications from the forensic science field.
From a medico-legal perspective, the focus is on the effects of and expert assessment of gunshots in humans. A key diagnostic objective is recognising and distinguishing the entry and exit wounds. There are 17 papers on this topic. With respect to determining the range of fire, there are 13 articles on signs indicative of gunshot from close range, including Anton Werkgartner’s first paper in the Deutsche Zeitschrift on muzzle imprint in contact wounds (11/154). There is just one paper (23/375) on determining the direction of fire from the body. In terms of body region, the vast majority, 18 papers, deal with specific features of gunshot wounds to the skull, including one case of a Krönlein shot (1/141). There are just 5 articles on gunshots passing through the chest, causing fatal injuries to the heart and lungs. There are 10 case reports on homicides involving firearms, including an interpretation of findings relating to the shots that killed Communist Party of Germany co-founder Karl Liebknecht (5/247). From 1940, there are 3 extensive reports on large-scale investigations into multiple gunshot wounds. Two legal medicine specialists working separately detail their findings in relation to the victims of Bloody Sunday in Bromberg (34/7) and “from the hostage processions in the Warthegau” (34/54). A forensic science approach is taken in reporting the results of an “Investigation into Polish Atrocities against Ethnic Germans” (34/90). Seven papers describe rarely used weapon systems (animal stunning equipment, shotguns, terzeroles, Karabiner 98 k) and the characteristic wounds they produce. Finally, 3 reports on explosion injuries can also be considered to belong to this group of topics. These include just a single study on bomb and shrapnel injuries from hostilities during the Second World War (35/173). There are 16 papers on death from electricity, encompassing accidents, suicides and homicides.
There are also articles on non-fatal electrical injuries and on animal experiments to investigate electricity-specific effects. There are 5 papers on the appearance and identification of electrical burn marks from the electrothermal effect on the skin. The state of knowledge at the time is summarised in a review by leading expert Stefan Jellinek, “On Electropathological Semiotics and Causality” (12/104), and by Friedrich Pietrusky on morphological findings after exposure to technical electricity (29/135). There is also a collection of case studies (32/407) concerned specifically with the consequences of lightning strike. Thermal damage caused by heat, with a focus on morphological findings in burns victims, is the subject of 5 publications. Hermann Merkel published a survey of diagnostic options for burnt corpses (18/232). Of the other publications, there are 3 papers on microscopy findings in human corpses (19/293, 23/19, 23/281) and one study using animal experiments (20/445). There is also one paper on delayed death from burns (35/75) and one on the pathogenesis of scalding (3/401). In addition, there is a case report on the self-scalding of a psychiatric patient in a boiling pan (36/49). A collection of case studies discusses 3 homicides by burning (21/120). The disposal of corpses by burning is the subject of a further 2 papers, and there is also an article entitled “Self-immolation of the Human Body” (18/437) which takes a historical-critical approach. Two papers deal with injuries caused by cold. One reports on the clinical progression of fatal hypothermia (18/270), while the other reports on results from animal studies on general hypothermia (30/199). Physical changes in the corpse after starvation are described in detail in the context of a death in connection with a fasting regime (6/520). Reports on single or multiple deaths involving various types of external violence are not easily assigned to any of the above groups.
These 9 papers are therefore recorded separately. In addition there are two collections of case studies on fatal sport-related accidents occurring in the course of a wide range of different sport activities. Table shows a summary of the number of publications on all types of violent death. Forensic obstetrics as a speciality is concerned with forensic medical problems relating to pregnancy and childbirth and to the physical condition of neonates. The first group of 14 papers deals with health risks posed by mechanical contraceptive devices, pregnancy testing, complications of pregnancy and indications for terminating pregnancy. Eleven articles on complications of childbirth affecting mother or child are primarily focused on reliably distinguishing spontaneous injury from the results of criminal acts. Six papers on equivocal findings in neonates concern the same objective. Section 90 of the German Code of Criminal Procedure (StPO), which sets out the regulations on autopsies on neonates, raises several questions for the pathologist performing the autopsy. These are discussed in a total of 38 publications. One recurring theme is determining whether an infant was alive at birth, whereby in addition to a number of papers on the lung float test (2/31, 2/267, 5/47, 6/5, 14/7), a cord vessel test (36/21) is cited as a novel method for answering this question. The then Sect. 218 of the German Criminal Code (StGB), which dealt with termination of pregnancy, meant that from the 1920s to the 1940s abortion was one of the primary issues in forensic obstetrics. Consequently, the 30 publications on the methods and consequences of illegal procedures on pregnant women represent a further major topic for the volumes of the Deutsche Zeitschrift examined by this review. While doctors were motivated to oppose illegal abortion because of the health risks for the pregnant women, the Nazi era saw the focus of opposition shift to population policy objectives. 
Thus we have a 1940 publication entitled “The Fight against Abortion as a Political Task” (32/226). There are 18 publications on infanticide as defined in Sect. 217 of the German Criminal Code (StGB). These illustrate the difficulties of forensic diagnostics, which consisted primarily of using morphological features to distinguish maternal attempts to facilitate delivery from actions performed with the intention of killing the child. Publications on medical malpractice are concerned with medical interventions in both surgical and non-surgical specialities. The most common topic is reports on adverse drug events, of which there are 10 cases. There are 4 reports on anaesthesia-related adverse events during surgery and an equal number alleging post-operative medical errors. There are also 3 case reports on radiation injuries, 2 on complications during diagnostic procedures and 2 on vaccine-related injuries. There is also a review tackling the subject of the “History and Concept of Malpractice” (20/161), primarily from a legal perspective. With respect to papers on disease and sudden unexpected natural death, there is a dramatic imbalance in the number of cases concerning the two age groups generally analysed. While there are 65 publications on adults, there are only 13 papers on sudden unexpected natural death in children. The most frequently cited groups of fatal diseases in adults are diseases of the central nervous system, on which there are 15 publications, and cardiovascular disease with 13 articles. There were far fewer reports on other groups of fatal diseases. There are 3 reports on deceased children, each of which describes malformations or infections as the cause of death. Other publications on childhood consist of individual case reports on children with rare diagnoses and reports on experience gleaned from large numbers of autopsies.
The largest group of publications in this subject area consists of descriptions of the clinical features of various types of poisoning, on which there are 125 papers. In addition to 9 collections of case studies on various poisons and 3 cases of combined poisoning, there are 68 papers on inorganic poisons and 45 on organic poisons. Of the inorganic poisons, by far the most widely discussed are carbon monoxide, on which there are 13 papers, arsenic, on which there are 9, and thallium, on which there are 7. Of the organic poisons, there are notable clusters of 17 papers on plant toxins and 13 reports on pharmaceutical poisoning. With respect to illegal drugs, widespread today, there is just one article on the suicide by cocaine of a male addict (4/40) and one on fatal poisoning with heroin after being mistaken for cocaine by a cocaine-dependent individual (12/112). The identification of pathological changes in tissue and organ anatomy plays a role in the diagnosis of poisoning, though it does have some natural limitations. Of 28 papers on this subject, primarily on organ toxicity, 13 are concerned with the central nervous system, which is particularly vulnerable to poisoning. There are also 2 articles on characteristic morphological changes in specific types of poisoning. These are Mees’ lines and coma blisters, both first described in the Deutsche Zeitschrift (34/307). The second, smaller group on pathological anatomical changes consists of 8 reports on histological changes in various organs caused by one or more poisons. The gold standard for diagnosing poisoning is toxicological analysis. In the period under review, 68 papers were published on this subject, of which 59 are concerned with the detection of a single poison or group of substances. As might be expected given the prevalence of carbon monoxide poisoning in that era, by far the most common, with 16 examples, are papers on various techniques for detecting CO or CO-Hb.
Of the groups of substances, alkaloid detection is described in 4 papers. In contrast to analysis of specific substances, there are 9 papers on the technical details of a variety of analytical methods, including spectrophotometry (1/411) and microscopy-based melting point determination (34/353). One area of toxicology which has evolved into its own extensive speciality is the toxicology of alcohol. Of 33 publications on alcohol metabolism, 26 studies are concerned with factors which might affect the absorption, distribution and elimination of alcohol, such as age, trauma and chemicals. Just 5 publications focus on the correlation between blood alcohol concentration and the effects of alcohol. These include a review entitled “On the Terminology and Forensic Assessment of Alcoholic Intoxication” (14/296), which outlines the state of knowledge in 1930. The only paper on fatal alcohol poisoning (33/44) is primarily concerned with developmental peculiarities of alcohol clearance in children. There are 2 papers specifically on methanol poisoning. Thirty-one publications deal with methods for determining alcohol concentration in body fluids and tissues. In addition to two reviews (10/377, 11/134), there are papers on individual methods, including the Nicloux method (18/638) and the Widmark method (19/513). Between the appearance of these papers and the end of the war, 18 further studies on Widmark’s micromethod were published. All of these papers deal with subtle technical points for overcoming shortcomings in the method and eliminating sources of error. There are a total of 21 publications on then common medico-legal identification methods. Six publications in the field of osteology discuss ways of differentiating between species, sex diagnosis and age estimation from skeletal remains. One of these papers, which takes a detailed look at the forensic medical significance of the humerus, includes basic data on several identification procedures (22/332).
There are also 4 papers specifically on skull identification, 2 of which include detailed descriptions of techniques for the superprojection of an unknown skull onto a photograph of a missing person (20/33, 27/335). Five papers describe odontological features which can be used for identification, and 2 linked articles discuss identifying features visible using radiological techniques. A case study on the particular utility of the frontal sinuses for identification (7/625) was followed by the first paper on X-ray comparison in the German-language literature (10/81). There are 2 papers dealing with methods for specific investigatory queries and combinations of methods required for problem cases. Table shows a summary of the number of publications on forensic genetics. The papers on the topic of blood groups cover test methods, prevalence, distribution and inheritance. The first paper on blood groups to appear in the Deutsche Zeitschrift was published in 1924 and was entitled “On the Forensic Significance of Isoagglutination of Red Blood Cells in Humans” (3/42). The papers on test methods discuss the production, use and storage of test sera and other reactants. The first publication on the prevalence and distribution of A subgroups in the Deutsche Zeitschrift was by Karl Landsteiner, the discoverer of the main blood groups. Another pioneer of blood group research, Berlin serologist Fritz Schiff, published the first paper on “The Medico-legal Significance of Landsteiner and Levine’s M and N Serological Properties” (18/41). Seventeen further articles on the M, N and P blood group systems followed. The publications on the inheritance of blood groups deal with Bernstein’s theory and with the inheritance model devised by von Dungern and Hirszfeld. There is also one experimental study on the secretion of blood group substances (28/234), the results of which confirmed that secretor status is inherited in a dominant manner.
The practical use of identificatory blood groups is the subject of medico-legal trace evidence analysis. The contemporary arsenal of methods necessitated a hierarchical sequence of examinations: testing for the presence of blood, then determining species specificity, then determining the blood group. Four articles describe peroxidase and crystal tests for blood detection. The specificity of these tests varies, and depending on the method they should be regarded as presumptive tests only. The Uhlenhuth test has been used to determine species specificity since 1901, predominantly to distinguish whether trace material is human or animal. The precipitation principle is the subject of 11 publications, and there are a further 2 papers on alternative serological methods for differentiating between species. The discovery of human blood groups was soon followed by the development of the first methodical approach to the individuation of bloodstains. Landsteiner and Richter’s agglutination test represents the beginning of the use of inherited blood groups for forensic purposes. The small quantities and ageing of trace material require modifications to test techniques. To achieve optimum results under these conditions, Leone Lattes introduced the coverslip method (9/402) and Franz Josef Holzer further refined Schiff’s agglutinin binding test (16/445). As the publications on this topic show, of the then known blood group systems, forensic trace evidence analysis could determine both AB0 and MN groups. Other publications deal with specific trace evidence-related issues. These include detecting menstrual blood by using its inhibitory effect on lupin seedlings and on yeast fermentation, sex diagnosis of bloodstains using colour reactions, using solubility and enzyme activity to evaluate the age of traces and using colorimetry to estimate the amount of blood. Detection methods based on characteristic constituents of bodily fluids were developed for various bodily fluid traces.
Epithelial glycogen content can be used to identify vaginal secretions (4/1). There are 4 papers on sperm detection, including a new microscopy-based detection method using Baecchi’s stain (27/143). There is also a review describing a number of older tests for detecting dried saliva (11/211). At the time, it was possible to determine AB0 groups both from semen and from other bodily fluids (23/186). The majority of papers on hair traces are concerned with using morphological features for species differentiation. Other topics include determining developmental stage and sex, and demonstrating damage to the hair. The papers on traces of faeces discuss serological identification based on coliform bacteria and a presumptive assignment of blood group. In the period under review, family relationship testing relied on fertility testing, assessment of the period of gestation, a comparison of morphological characteristics and blood typing. There is one paper on fertility testing (26/64) and one on the legal and medical problems involved in assessing the period of gestation (14/11). The papers on hereditary biology deal with various physical characteristics for assessing similarity and with dactyloscopic criteria. By far the largest number of articles are on the results of blood group testing. As well as helping to clarify cases of disputed paternity, blood group analysis also served as evidence in a case of babies switched at birth (25/79). There is also a report by a veterinarian on “Blood Groups and Paternity Determination in Horses” (25/231). There are numerous areas of overlap between legal medicine and scientific-technical criminalistics. In the period under review, this overlap went well beyond the kindred forensic biology work performed by the two disciplines. Table shows a summary of the number of publications on scientific-technical criminalistics.
In his article “Institutes of Legal Medicine and Criminalistic-Technical Activities Performed within Them” (14/411), Martin Nippe explicitly advocates for legal medical institutions to take over criminalistic tasks. His argument that the courts and police lacked the necessary expertise seems entirely reasonable in light of advances in forensic technology in Germany at the time. The urgency of this need is demonstrated by a collection of case studies from Göttingen College of Legal Medicine (14/26). It features cases which indisputably fall within the purview of forensic technology. One field in which there were particular deficits was the theory of forensics, on which only a single paper (34/404) was published. This paper, by distinguished Berlin criminalist Hans Schneickert, deals with the key insight that the prevalence of an identifying feature determines its value for identification purposes. The picture is very different for the science-focused sub-disciplines of forensic technology. Almost exclusively as a result of research performed at institutes of legal medicine, a considerable body of knowledge had already been amassed in the field of forensic biology. In Germany, this sub-specialism was not an established academic discipline in the period under review, so that in the majority of cases the investigation of biological trace evidence was contracted to institutes of legal medicine. Accordingly, the theory behind many of the methods used in forensic genetics is equally applicable in the field of forensic biology. Twelve publications relate to the key criminalistic task of finding and securing bloodstains at a crime scene and on objects. Two studies on the morphology of bloodstains deal with fundamental insights into blood splatter patterns (16/272) and drip patterns (22/387). A further 2 papers are concerned with non-human material.
Of the publications on forensic chemistry, there are 8 papers on chemical analysis for the purpose of determining the cause of fire. In relation to 2 attempted fatal poisonings, there is a report on demonstrating the presence of poison in food. In addition, there are papers on the chemical and physical-chemical analysis of glass dust, metal, wax, paint and paper (one paper on each). Of the technical sub-specialisms, by far the largest number of papers in this area are dedicated to ballistics. Of these, 20 papers deal with different methods for determining firing range, making this a major thematic focus in this area. The second large group is made up of 11 publications on firearm identification from firearm trace evidence. There are just single publications on determining bullet trajectory in long-range shots and on evidence on the firing hand. The objectives of forensic writing examination are to determine the authenticity or otherwise of a document and to identify the person responsible for a piece of handwriting or typescript. Nine papers are concerned with identifying an author based on individual handwriting features. Three papers look at criteria by which falsified handwriting and typescript can be recognised. One paper looks at typewriter identification and one at determining the age of a piece of handwriting or typescript. There is also a critical analysis which calls for proof of competence to be required before a court accepts the evidence of a writing expert (27/364). The publications on document examination are concerned with the examination of paper scraps, betting slips and paintings. One of the main areas of application of dactyloscopy is the comparative examination of fingerprints to identify individuals. Five papers describe the conditions required and evaluation criteria for fingerprint identification. There are 3 articles (10/372, 22/54, 27/323) on methods for visualising latent fingerprints.
Two articles examine fingerprint identification in dead bodies in an advanced stage of decomposition (13/256, 29/426). The field of shape traces deals with various impressions on a wide range of media. The publications in this field deal with tool, shoe and bite impressions and methods for preserving a trace. In the field of forensic photography, it is essential that images are informative and authentic, as they constitute evidence admissible in court. The publications in this field therefore deal with photographic and technical requirements for producing usable photographs. The sole work on odorology is a case report on the evidentiary value of olfactory traces of specific wood shavings (6/1). The papers on investigative techniques deal with the application of UV light, spectrography and X-rays. There are two papers on age estimation in living subjects, one exploring diagnostic criteria in minors and one exploring criteria in adults. Leningrad author W. A. Nadeshdin produced a detailed discussion of features of the face and dentition useful for age estimation in adults (6/121). In a second work on age estimation in minors, he proposes growth, tooth eruption and signs of sexual maturity as a reliable combination of features for this purpose (8/273). The papers analysed include just 2 early reports on drug addiction. The first, a case study (12/285) on multiple drug use with a preference for chloroform in a pharmacist, was published in 1928. The second report (35/17) from 1941 discusses the medical history of a patient who, following morphine addiction, had developed pethidine addiction. A detailed discussion of the legal and social significance of child abuse (13/159) is the only publication on this subject. The investigation of self-inflicted injury always aims to reveal the motivation.
The 2 published cases are both cases of attempted insurance fraud, in one by severing the left thumb (23/352), in the other through many years of self-harm involving repeated injury to the left arm (38/244). The vast majority of publications on this topic are concerned with psychopathology. Of these, most are concerned with individual forensically significant psychopathological phenomena, discussed in 26 articles. There are 11 papers specifically on the forensic significance of psychiatric disorders. These include various psychoses, paralysis and dementia. Four case studies consider previous trauma in relation to the assessment of psychotic disorders. There are 3 papers on the principles and diagnostic value of graphology. There are a further 3 case reports on the psychopathology of prominent figures, including the sensational case of railroad killer Szilveszter Matuska (20/53). Of a paper by Roland Freisler, then Secretary of State in the Reich Ministry of Justice, entitled “The Psychology Underlying Polish Atrocities, Illustrated through the Development of the Polish National Character”, only the title was reproduced (34/7). The paper takes a social psychology approach, and the full text was published in a legal journal. A themed group of 13 papers covers various subjects in the field of the psychology of testimony. Topics include the impairment of memory capacity, in particular through hypnosis and suggestion, recognising false testimony and assessing testimony given by children and adolescents. The interdisciplinary field of suicidology is, in the selected context of forensic psychiatry and psychology, reduced to problems in this area. The publications classified as belonging to this field are therefore primarily concerned with motives and risk factors for suicide. There are 14 articles on these topics, which, in addition to the sequelae of diseases and head injuries, discuss individual constitution as a causal factor.
Epidemiological surveys on suicide in Russia, in Finland and in Berlin are the subject of 3 papers. In a paper entitled “Contribution on the Psychology of Suicide” (12/346), the phenomenon of imitation is discussed in the context of a popular suicide location. Of the many and varied references to human sex life subsumed into the category sexual medicine, one extensive group is the paraphilias (“sexual perversions” in the nomenclature of the time). Topics covered by the 23 publications in this area range from sexually motivated self-mutilation, to rare potentially criminal forms such as fetishism, to clearly criminal acts such as incest and paedophilia. There is also one article calling, as part of a pending reform of criminal law, for the abolition of Sect. 175 of the German Criminal Code (StGB), which at the time made consensual sexual acts between men a criminal offence. The first appeals for sterilisation for deviant sexual behaviour were published as early as the 1920s. The earliest work on “Castration and Sterilisation as a Means of Treatment” (3/162) from 1924 is concerned with applying these measures both in the event of a “mental disorder triggered by functions of the sexual organs” and for the elimination of “a criminal activity”. A later article (14/432) from 1930 rejects the idea of surgical castration of sex offenders, but advocates radiation treatment of recidivist sex offenders. These two publications are followed by 9 further papers, all relating to the “Law against Dangerous Habitual Criminals and on Measures of Reform and Prevention” (Gesetz gegen gefährliche Gewohnheitsverbrecher und über Maßregeln der Sicherung und Besserung) of November 24, 1933 and in particular to the newly introduced measure “castration of dangerous sex offenders” in Sect. 42a (5) of the German Criminal Code (StGB).
The last of these papers, entitled “Observations on Castrated Sex Offenders” (33/248), concludes with the assertion that “the success of castration has far exceeded expectations.” Two papers discuss the physical consequences of castration, and a further 2 papers discuss fertility in the context of the Law for the Prevention of Offspring with Hereditary Diseases (Gesetz zur Verhütung erbkranken Nachwuchses). There are 7 papers on hymen examination. These publications illustrate the extraordinary variability of the healthy hymen and, to an even greater extent, the pathologically altered hymen. Depending on shape and elasticity, where sexual violence is suspected, consideration must be given to the possibility of vaginal penetration without rupture of the hymen (14/265). Fatal sexual activity without a sexual partner is the common element uniting the phenomenon of autoerotic fatalities. Classification of a death as an autoerotic fatality is solely dependent on the circumstances at the site of discovery and is independent of the cause of death. Of 6 cases reported, 3 were deaths by hanging; among the remainder, the cause of death could be determined in only one case. Other sexual medicine topics are treated in 3 papers on male sexual dysfunction and 2 publications on intersexuality. Almost all publications in this field are concerned with the causes and consequences of injuries incurred in road traffic accidents. These include 7 papers on the biomechanics of road traffic accidents. Injuries to the head and extremities are discussed with respect to their value for investigative purposes. Two papers discuss trace evidence after road traffic accidents and their use in accident reconstruction. A further 7 articles are concerned with the effects of alcohol on driving performance and the resulting accident risk. Two papers are concerned with other causes of accidents.
Just one publication, titled accordingly, discusses the important question in rail traffic accidents of whether a victim was “Alive or Dead when Run over by a Train” (25/147). There are just 11 publications relating to social medicine. Seven articles are primarily concerned with occupational problems relating to diseases with social sequelae. There are also 3 articles concerned with public health and a positively classic social medicine paper entitled “The Effect of Social Situation on the Health Status of Welfare Recipients” (20/535). Of the sub-specialisms historically forming part of the field of criminology, 19 publications on criminal biology, originally known as criminal anthropology, make it the best represented. Six papers in the then emerging research field of “blood groups and criminality” (9/426) form part of a series with 6 other papers on constitutional biology, most of which are concerned with identifying physical signs of a tendency to criminal activity. A prime example of forensic genealogical research is a case study entitled “A Criminal Family” (25/7). The increased focus on criminal biology, while pre-dating 1933, reaches its climax in the Nazi era. On this subject, Theodor Viernstein, head of the Bayerische kriminalbiologische Sammelstelle (Bavarian Criminal-Biological Collection Centre), wrote, “The Law for the Prevention of Offspring with Hereditary Diseases of July 14, 1933 and the Law on Measures against Dangerous Habitual Criminals of November 23, 1933 are, for our question, two decisive landmarks on this road to the renewal of the living race of our people. What until the seizure of power remained, in the minds of scientists, psychiatrists, hereditary biologists and racial hygienists, a mere hope for the future, was given, at the very moment that the people faced their greatest hardship, practical form and legislative expression” (26/3). There are 12 publications in the field of criminal psychology.
In addition to a review on the concept and scope of the discipline from a 1940s perspective (36/119), there are 9 articles dealing with the psychology of specific offences, the most common of these being arson, to which 4 articles are dedicated. Finally, there are 2 papers discussing crimes committed under hypnosis. Criminal phenomenology encompasses publications on the phenomenology of various groups of offences. The only group on which multiple papers are published is youth criminality. There are 5 papers on this subject, in one of which the supposedly modern term “hooliganism” is used in a wide-ranging sense (12/361). Heroin misuse, already widespread in the mid-1920s, is used to illustrate the consequences of drug addiction (8/81). Other topics include homicide, sexual offences and arson. The 3 papers in the field of victimology look at the physical and psychological sequelae experienced by female minors who have been the victim of a sexual offence. There are 2 publications on crime prevention, both discussing the basic idea that social welfare is an important component of crime prevention. In the criminal aetiology field, there is one article on the effects of war on criminality (1/697) and another on “Reading as a Stimulus to the Commission of Crime” (13/209). Of 20 publications on prisons and imprisonment, 8 concern self-harming by prisoners. The incidents described range from inducing symptoms of illness and swallowing foreign objects, to significant self-mutilation. A number of papers are also dedicated to the topic of illness and its treatment in prisoners. The 6 articles on this issue are primarily concerned with physical illness. Four papers tackle the subject of the role of medical personnel in prisons. Two further articles discuss the particular problems faced by adolescents in prison (4/121) and support for prison leavers in Germany (11/423).
The period under review commences in 1922, shortly after the establishment of the Weimar Republic, and ends in the spring of 1944, in the final phase of the Second World War. The Nazi seizure of power in early 1933 represents a profound political and historical rupture. For legal medicine, a discipline with close links to the state, this social upheaval inevitably had an impact on issues and outlooks within the discipline. A comparison of the 19 volumes published from 1922 to 1932 with the 19 volumes published from 1933 to 1944 shows that, in addition to the expected effects on legislation and jurisprudence, there was also a transformation in the sociological and even in some cases medical topics within the discipline. Particularly striking is the link between legal medicine and the Nazi creed of “Erb- und Rassenpflege” (“genetic and racial hygiene”). The start of the period under review saw legal medicine as a discipline attain a long-awaited goal. Following repeated calls from prominent experts in the field (2/75, 19/264), in 1924 the discipline was finally included in the state medical examination, having been a compulsory subject for students of medicine since 1901. On the question of how teaching should be organised, there was little agreement across German universities. With the exception of the core field of anatomy, teaching had developed in two quite different directions. One emphasised the psychological and psychiatric side of the discipline, an approach promoted by, for example, Willy Vorkastner in Greifswald and Heinrich Többen in Münster. The other favoured a scientific-criminalistic approach, promoted by, for example, Richard Kockel in Leipzig and Theodor Lochte in Göttingen. Under the influence of Nazi higher education policy, the integration of criminalistic content determined the profile of the discipline.
The visible expression of this development was the introduction in 1940 of a standard designation, “Institute of Legal Medicine and Criminalistics”, across the Reich, and the increasing amount of work that was not strictly medico-legal in nature. Articles in the Deutsche Zeitschrift discuss a number of pieces of legislation which emphatically highlight the consequences of Nazi ideology for the judiciary. While the vast majority of laws were rendered legally void by Control Council Law No. 1—Repealing of Nazi Laws of September 20, 1945, the provisions introduced in 1934 by the Law against Dangerous Habitual Criminals and on Measures of Reform and Prevention—with the exception of the provision on castration in Sect. 42a of the old version of the German Criminal Code—remained largely unchanged. The provisions on preventive detention were incorporated into the Criminal Code in 1953 (BGBl. I p. 735) and tightened in 1969 (BGBl. I p. 645). As a result, the Law on Habitual Criminals introduced a dual system of sanctions into German criminal law which still exists today. At the 11th congress of the scientific association held in Erlangen in 1921, the main topic was the introduction of coroners’ autopsies. Three speeches were dedicated to the reasons why it was necessary to have this option available (1/1, 1/9, 1/12). This unanimous appeal by all three speakers was further reinforced by the speech which followed them, entitled “Experiences with Coroners’ Autopsies in Hamburg” (1/17). Despite the endeavours of the scientific association, no uniform legislation on this was forthcoming, with the result that in 1936 calls for the introduction of coroners’ autopsies were again the topic of the opening speech (28/1) at the 25th congress in Dresden. The text drafted by speaker Hermann Merkel, entitled “Reich Legislation to Introduce Coroners’ Autopsies”, was unanimously adopted by congress participants.
Despite this, corresponding legislation was still not on the statute books by the end of the war. It was not until 1949 that legislation was passed in East Germany to enable coroners’ autopsies to be performed in cases where the cause of death was unclear. Forty years of positive experiences did not, however, lead to the adoption of a uniform legal framework for coroners’ autopsies in the Federal Republic of Germany. Post-mortem rectal temperature measurement has proven to be highly valuable for estimating time of death. The first studies on the rate of cooling of dead bodies were published in 1937. Successive rectal temperature measurements produced very regular exponential decay curves. The temperature initially drops only modestly, followed by a period in which it drops more quickly, followed by a further period of slower decline (28/172, 29/158). Though first described many years ago, this sigmoidal cooling curve still forms the basis of standard methods of temperature-based time of death estimation even today. Through a case report on the examination of an exhumed skull (8/430), Kurt Walcher drew expert attention to the rule on the sequence of skull fractures first published by Georg Puppe a few years earlier. Evaluation of fracture lines showed that the victim had suffered at least one hard blow before being killed by a gunshot to the head. As in the above case, it has subsequently proved possible to use this principle to determine the sequence of multiple skull fractures, including fractures arising through different means. Puppe’s rule is now regarded as essentially proven. A wide variety of views have evolved on the mechanism of coup-contrecoup brain injuries. Lenggenhager theorised that blunt force at the point of impact where the brain flattens against the skull causes blood and cerebrospinal fluid to be squeezed out of the inside of the skull.
At the point opposite the impact, a severe reduction in pressure—in severe cases even a vacuum between the cranium and the brain—can occur. This causes the characteristic pressure differential or suction haemorrhage in the contrecoup area. Shortly after publication, Carl Franz entered into a debate with its originator in the pages of the Deutsche Zeitschrift. Franz was of the opinion that the theory must be wrong, as the speed of the process meant that it was not possible for the posited outflow of blood and cerebrospinal fluid to occur (31/61). In his reply (31/278), Lenggenhager explained his theory with reference to his original publication. Though in his concluding remarks (31/280) Franz maintained his objection, the suction theory is today considered the most scientifically reasonable explanation for contrecoup injuries. By the early 1920s, there had already been a number of works on the mechanism of death by hanging. Nonetheless, it remained unclear “to what moment—whether it be the closure of the airways, the obstruction of the cervical vasculature or the effect on the vagus nerve—is to be attributed the main role in death by hanging” (1/686). Georg Strassmann studied the effect of strangulation on the airways and found that, for normal hangings, it was impossible to overcome the obstruction of the airways by even “the most strenuous inhalation or exhalation”. Where the noose was applied in an atypical position, it appeared that there was at the least “greater difficulty in obtaining air” (4/165). Walther Schwarzacher performed experiments to determine the tensile force required to obstruct the cervical vasculature (11/145). With a standard symmetrical noose position and a constant internal vascular pressure of 170 mmHg, a force of 3.5 kg is sufficient to obstruct the common carotid arteries, while a compression force of 16.6 kg is required to obstruct the vertebral arteries.
The results of these two series of experiments remain part of the corpus of knowledge on the mechanism of death by hanging even today. Techniques for diagnosing death by drowning were enriched by the repeated observation by Erich Fritz of a specific morphological finding (18/285). He identified tears in the mucous membrane of the stomach in several victims of drowning in which there was no evidence of any violence to the body prior to drowning. With respect to the origin of this indication of drowning, after some initial uncertainty the opinion prevailed that the tears are caused by overstretching of the stomach primarily through swallowing the drowning medium. Gyula Balázs presented a collection of case studies on “Unusual Skin Findings Following a Fall into Water” (21/515). This paper contains the first description of “ischaemic impact patterns”, also of diagnostic value in the form of a double stripe on skin beaten with a stick. As with the numerous forensic science publications on ballistics, forensic traumatology papers have similarly enriched our understanding of the effects of gunshots. This is particularly true for the first paper by Anton Werkgartner in the Deutsche Zeitschrift on muzzle imprint in contact wounds (11/154). A muzzle imprint proved to be a reliable sign of a contact wound, and its characteristic shape can also provide information on the type of weapon and the position of the weapon on firing. In the forensic obstetrics field, most publications are concerned with questions arising in the context of Sect. 90 of the German Code of Criminal Procedure (StPO). The main focus here is on determining whether an infant was alive at birth. In addition to the lung float test, for the abolition of which Albin Haberda appeals in a piece unequivocally entitled “Out with the Lung Float Test!” (14/7), János Prievara presented the cord vessel test (36/21) after Jankovich as an additional method of determination.
The test is based on the principle that the umbilical arteries in stillborn babies remain open and are therefore permeable to water, whereas the umbilical arteries in babies which are alive at birth do not. In the author’s practical experience, a positive result from the umbilical vessel test is a very reliable supplement to standard tests for live birth. There are various, generally non-specific signs on a body that may lead to a suspicion of poisoning. In some cases, specific pathological anatomical changes may act as an initial indication of potential poisoning. In a paper entitled “Post-mortem Findings Following Sedative Poisoning” (34/307), Franz Josef Holzer reported his systematic observations on the topic. He identified characteristic violet patches, often symmetrical, especially in particular areas on the legs. These lesions, today known as coma blisters, are observed particularly in barbiturate poisoning. In 1932, almost contemporaneously with the German edition of Erik M. P. Widmark’s ground-breaking monograph, the Deutsche Zeitschrift carried a paper entitled “On a Technique for Quantitative Determination of Blood Alcohol Levels after Widmark’s Method” (19/513). Validated by numerous studies, Widmark’s micromethod established itself as the standard technique for determining blood alcohol. Over the period under review, advances in methodologies for the identification of unknown bodies occurred primarily in the fields of anatomy, anthropology and osteology. In addition to publishing an article on plastic reconstruction of the physiognomy (8/365), anatomist Friedrich Stadtmüller also subsequently developed a graphical method for verifying skull identification (20/33, 27/335) based on Welcker’s method. Dionys Schranz’s extensive study on identifying features of the humerus (22/332) contains ground-breaking data. His results on age estimation in particular are still standard criteria today.
Following the discovery of the main blood groups in 1901, the discipline of forensic serology developed into an increasingly significant field. In the period under review, scientific interest was focused, in addition to ABO blood groups, on A-subgroups and the MN system. This enabled the gradual resolution of key problems relating to the use of blood groups in trace evidence analysis and in family relationship evaluation. Lattes’ coverslip method (9/402) and Schiff and Holzer’s agglutinin binding test (16/445) became established methods for the individuation of bloodstains in the 1920s. Sufficiently reliable methods were also available at the time for other questions arising during the examination of bloodstains. Peroxidase and crystal tests were available for detecting blood, and the now well-established Uhlenhuth test was available for determining species. For family relationship evaluation, genetic and serological test methods would continue to coexist for some time. Although a 1926 review by Fritz Schiff stressed the significant advantages of blood groups over morphological characteristics (7/360), it was not until serological testing was able to rule out the paternity of a significantly higher proportion of non-fathers that it became the dominant method. The scientific disciplines of forensic technology have the greatest overlap with legal medicine, both in theory and in practice. Consequently, legal medicine specialists have published numerous scientific articles and case reports on forensic biology and forensic chemistry. The publications evaluated here include significant excursions into the quite separate field of criminalistics, for example papers on traces of blood in snow (31/213) and the analysis of incendiary agents (36/245). Of the scientific sub-disciplines, the most prominent is ballistics, on which there are many papers which belong thematically to the field of criminalistics.
This is particularly clearly illustrated by publications on firearm identification from trace evidence on munitions (18/350) and on cartridge cases (21/190). Similarly, numerous publications on writing examination can without exception be filed under forensic technology. Martin Nippe’s call for legal medicine institutions to take over criminalistic tasks (14/411) reflected a contemporary imperative aimed at providing greater legal certainty, as there were simply not enough experts in forensic technology. In the long term, however, legal medicine needed to be liberated from such a non-medico-legal role, and this occurred only gradually in the post-war period. Clinical legal medicine is significantly underrepresented compared to more recent times. There are only a handful of publications on the subject, and there are no papers on a number of significant health-related issues. Particularly notable by their absence are articles on the physical examination of victims and perpetrators of violent and sexual offences. Child abuse is another area which is notably underrepresented, with insufficient consideration of the complex fabric of causation and the many different patterns of harm. In an extensive analysis (13/159), Ernst Ziemke laments the fact that “reports from experienced legal medicine specialists and my own experience suggest that it is relatively rare for cases of child abuse to be subject to a medico-legal assessment. Where they are, this is almost always cases in which the outcome of the abuse has been the death of the child.” The publications analysed here deal with a wide range of forensic psychiatry and psychology topics. Legal reforms and the effects of changes to our understanding of medical psychology mean that the technical content has changed repeatedly since. Terms such as neurosis (1/325) and hysteria (2/523) are now considered outdated, and significant progress has been made in areas such as intelligence testing (8/580). 
The psychology of testimony, one of the cornerstones of interrogation theory, has evolved into an extensive sub-specialism within the forensic psychology field. In sexual medicine, there has been a quite contradictory development. While the sexology side was working towards liberalisation, controversial legislation on sexual offences remained in place. The quite reasonable call for the repeal of Sect. 175 of the German Criminal Code (StGB) (13/59) went unheeded right up to the end of the war. Even more draconian were the newly introduced “Measures of Reform and Prevention”. Under the 1933 Law on Habitual Criminals, a court could order, under Sect. 42a (5) of the Criminal Code, the “castration of dangerous sex offenders”. As long as certain statutory conditions were met, Sect. 42k (1) of the Criminal Code extended the groups covered by this provision to include sex offenders with repeated convictions for exhibitionism. While there were many publications on the draconian measure of castration and its consequences, there is by contrast a notable absence of publications on sexual therapy for the afflicted. Increasing motorisation of road traffic saw the emergence of traffic medicine as a new sub-specialism within the legal medicine field. Publications on road traffic accidents are concerned with either the causes of accidents or the consequences of injuries. With the advent of motorised road traffic, it was not long before alcohol consumption by drivers (29/11) and pedestrians (32/312) was identified as a notable cause of accidents. There followed experimental studies aimed at obtaining concrete data on how alcohol affects driving performance, for example the effects of alcohol on eye and ear function (32/301). Results from these studies also represented an academic foundation on which to base road traffic legislation. With respect to the consequences of injuries, there was a need to clarify common patterns of injury and their origin (24/379, 33/124).
Insights gained into mechanisms of trauma offered an opportunity to use medico-legal findings to contribute to accident reconstruction. Given that for many years the name of the German scientific association included the words “social medicine”, the paucity of publications in this field is particularly surprising. The small number of works present are not even vaguely representative of the interdisciplinary field of social medicine. In keeping with the concerns of the period, most of the papers in the field of criminology are related to criminal biology. Studies of offenders inspired by constitutional biological investigations claimed to find features characteristic of born criminals and thereby to justify a robust approach to the fight against crime. In addition to studies on morphological features, efforts were also made to identify serological markers. A significant field of research at the time was the relationship between blood groups and crime. Results from such studies were, however, somewhat contradictory. In a study on more than 1000 convicts, Kurt Böhmer found “a general increase in blood group B. This increased further in recidivists and the irreformable” (9/426). Augustin Foerster’s conclusion, accurate as it turned out, unambiguously disagreed: he was unable to confirm “that blood group B is more common in the prison inmates I examined. It was not possible to demonstrate any link between blood group and criminality” (11/487). A key area in criminal psychology is arson, the explanation for which in one case shifted from the classic pyromania to a “simmering subconscious sexuality” (33/52). To obtain usable insights into significant phenomenologies of criminality, criminal phenomenology requires up-to-date studies. Only if this condition is met can results from studies be used to formulate policy on crime or be applied to forensic science investigations. For this reason, publications on a range of phenomenological criminology topics are of historical interest only.
Articles on female minors who have been the victim of a sexual offence represent the beginnings of victimology, which evolved into an academic discipline only after the Second World War. In conclusion, it is worth noting that, despite the many papers from a variety of fields closely allied with but distinct from legal medicine, most of the published papers are concerned with the traditional key fields of traumatology and toxicology. There are also large numbers of articles on various topics in the thanatology and forensic obstetrics fields. An emerging sub-specialism is the study of blood groups, which found increasing use in trace evidence analysis and in family relationship evaluation.
Exploring factors influencing nurses’ attitudes towards their role in dental care
db4b35b5-85f0-4c74-86b3-ad046c44d52a
10358942
Dental[mh]
Oral diseases constitute a huge burden on people’s daily lives, with dental caries and periodontal diseases being the most prevalent on a global basis. Optimal health cannot be achieved without good oral health, with the latter impacting individuals’ wellbeing and quality of life. In Saudi Arabia, the prevalence and severity of dental caries is high, reaching 80 to 90% among children 15 and younger. Similarly, the prevalence of periodontal disease in Saudi Arabia is higher compared to western countries, with recent estimates up to 50%. The burden, distribution, and consequences of oral disease in the community necessitate coordination between all health care providers. Nurses are frequently exposed to patients and are involved in all aspects of healthcare provision, from health education, promotion, and screening to simple treatments; as such, they can play a vital role in the prevention and early detection of oral diseases [ , – ]. Studies evaluating nurses’ knowledge, participation, and attitudes about dental care for patients have shown contradictory findings [ – ]. In some studies, nurses were reported to participate in the oral care and prophylaxis of hospitalized and nursing home patients [ – ], while Ahmed et al. observed that nurses did not regularly refer expecting mothers or children for dental check-ups and did not inform them about the importance of and need for oral hygiene and dental care. Elsewhere, in India, nurses were found to have positive attitudes despite lacking basic oral health knowledge. In the same context, Andargie and Kassahun, in a study conducted in Ethiopia, found that nurses had inadequate knowledge and attitudes towards dental care, with the latter being influenced by factors such as level of education, experience, and oral health training.
Good oral health knowledge among nurses is essential to provide patients with proper dental care; however, sound knowledge may not lead to sound practices unless nurses exhibit positive attitudes towards their role in dental care. A previous study from Eastern Saudi Arabia (KSA) reported that nurses had the most negative attitudes among health professionals and cited barriers such as heavy workloads and a lack of skills and knowledge of oral care as hindering their involvement in dental care. On the other hand, Al-Jobair and colleagues concluded that pediatric nurses in Riyadh had positive attitudes towards providing dental care to hospitalized children despite their limited oral health knowledge. The theories of reasoned action and planned behavior are often invoked to explain the interplay between knowledge, attitudes, and practices. Initially, the link between knowledge and behavior/practice was thought to be a linear progression: the more the knowledge, the more favorable the behaviors/practices. The theory of planned behavior, on the other hand, postulated that it is the person’s attitude that leads to a certain behavior, usually mediated by previous experience (positive or negative). From a psychological point of view, attitude is treated as an invisible construct influencing a person’s decisions to act in a particular way. As such, it is crucial to explore the factors that influence nurses’ attitudes regarding their active role in individuals’ oral health. We hypothesized that nurses with sound knowledge, or those who were engaged in regular dental care, would have more positive attitudes. Therefore, this study aimed to investigate nurses’ attitudes towards their role in dental care and the factors associated with them. Study design and setting This was a cross-sectional, survey-based study conducted in Eastern Saudi Arabia from September to October 2021.
The Eastern Province, covering more than 36% of Saudi Arabia’s total area, is the largest province and the third most populous. The administrative and territorial division of the Eastern Province includes 10 districts; the largest towns include Dammam, Al-Hasa, Al-Jubail, Ras Tanura, Dhahran, Al-Khobar, and Al-Qatif.
Study participants
The study included registered nurses practicing in Eastern Saudi Arabia in both the public and private sectors who agreed to participate in the study.
Sample size and sampling technique
The sample size was calculated to be 377 nurses, based on an assumed prevalence of 50% of nurses with correct knowledge and practices to control oral disease, an estimated population of about 20,000 practicing nurses, a margin of error of 5%, and a confidence level of 95% . A snowball sampling technique was implemented.
Data collection tool
We collected data using a pre-validated 40-item questionnaire developed from the literature [ , , , , ] and focus group discussions. The questionnaire was pilot tested, before commencing the study, on 15 nurses who were not part of the main study. The questionnaire had four domains: demographic and background information, knowledge, attitudes, and participants’ current practices. It was distributed online through social media (WhatsApp, Twitter, and Facebook) and was administered in both Arabic and English. Given the snowball sampling technique, the online survey was distributed by multiple participants at the same time. The survey link was kept active for one month, after which all responses were considered. The study’s independent variables were nurses’ demographics and oral health knowledge and practices, while the study’s outcome was nurses’ attitudes about their role/participation in the provision of dental care.
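As a quick check, the reported sample size of 377 is reproducible with Cochran's formula for a proportion plus a finite population correction (a sketch; the exact formula the authors used is not stated):

```python
import math

def sample_size(p=0.5, e=0.05, z=1.96, population=20_000):
    """Cochran's sample size for a proportion, with finite population correction.

    p: assumed prevalence, e: margin of error, z: z-score for the
    confidence level (1.96 for 95%), population: finite population size.
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population estimate (384.16)
    n = n0 / (1 + (n0 - 1) / population)     # finite population correction
    return math.ceil(n)

sample_size()  # 377, matching the study's calculation
```

With p = 0.5 (the most conservative assumption, maximizing p(1 − p)), e = 0.05, and N = 20,000, this yields exactly the 377 nurses reported.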
Demographics and background information
These included: 1) participants’ age, categorized as the twenties (20 to 29 years old), the thirties (30 to 39 years old), the forties (40 to 49 years old), and the fifties and above; 2) gender, answered as male or female; 3) current affiliation, for which participants chose among public hospitals, the Ministry of Health, teaching institutes, private hospitals, or both the private and public sectors; and 4) years of experience, categorized as less than 3 years, between 3 and 6 years, more than 6 years and less than 10 years, or more than 10 years. Participants were also asked about the source of their oral health information by choosing one or more of the following options: formal education (as part of their nursing studies), social media, scientific publications, or Ministry of Health website/materials, or they could indicate no previous oral health knowledge.
Assessment of knowledge
Nurses’ knowledge about oral health was assessed with 13 questions; each correct answer was scored as one point and each wrong answer as zero. The overall knowledge score was the sum of all correct answers, out of a maximum of 25 points. Based on the mean, scores were categorized into good knowledge (20–25), average knowledge (12.5–19), and poor knowledge (less than 12.5).
Assessment of practices
Nurses’ dental care practices were assessed through four questions: 1) whether they provided oral health education; 2) whether they provided oral health screening (answered as always, sometimes, or never); 3) whether they responded to patients’ questions about oral health problems (yes or no); and 4) whether they referred patients to dentists (yes or no). Those who referred patients to dentists were also asked about their reasons for referral.
Assessment of attitudes
Nurses’ attitudes were assessed through 14 statements using a Likert scale.
The Likert score ranged from five points for strongly agreeing to one point for strongly disagreeing, with a maximum total score of 70 points. Based on the mean, attitude scores were categorized into positive (56 or more) or negative (less than 35).
Statistical analysis
Data analyses were performed using IBM SPSS Statistics for Windows, version 22.0 (IBM Corp., Armonk, NY). We presented descriptive data in the form of frequencies/percentages and/or mean (±SD). The Chi-square test was applied to compare demographic characteristics, knowledge, and oral health practices between groups of participants with positive or negative attitudes. A P-value of ≤ 0.05 was considered statistically significant.
A total of 525 participants responded to the survey. In our sample, most participants (255, 48.6%) were in their twenties, 419 (79.8%) were males, and 381 (72.6%) were Saudis.
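The attitude cut-offs and the chi-square comparison described above can be sketched in a few lines of pure Python (the 2×2 counts below are hypothetical, purely for illustration; note the text leaves scores between 35 and 55 unclassified):

```python
import math

def attitude_category(score):
    """Categorize a total attitude score (max 70) per the study's cut-offs."""
    if score >= 56:
        return "positive"
    if score < 35:
        return "negative"
    return "intermediate"  # scores 35-55 fall between the two stated cut-offs

def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate([[a, b], [c, d]]):
        for j, o in enumerate(obs_row):
            e = row[i] * col[j] / n        # expected count under independence
            chi2 += (o - e) ** 2 / e
    # survival function of chi-square with df = 1: P = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts: oral health training attended (yes/no) x attitude (pos/neg)
chi2, p = chi2_2x2(30, 10, 20, 40)  # chi2 = 16.67, p < 0.001
```

In practice one would use `scipy.stats.chi2_contingency`; the manual version above just makes the expected-count arithmetic explicit for a single 2×2 comparison.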
Most participants were employed by the Ministry of Health (42.1%) and had less than 3 years of experience; 267 (50.9%) reported having had an oral health component in their undergraduate curriculum, and 305 (57.8%) had not attended any educational activities on oral health. Regarding the sources of oral health information for study participants, formal education (38%) was the main source, followed by social media (18.5%), with 11.7% reporting no previous oral health knowledge. The mean (±SD) attitude score of participants was 52.8 (±8.2), indicative of favorable attitudes towards oral health. The most positive attitude concerned the impact of oral health on overall well-being (454, 86.8%). Participants were positively inclined to be trained in oral health education (398, 76.4%) as well as to be trained to provide oral health screenings (388, 74.3%). Slightly more than half of the participants felt knowledgeable or confident enough to provide oral health screening (281, 53.5% and 299, 57.0%, respectively). An almost equal number of participants agreed they were willing to provide oral health education (250, 47.8%) or screening (239, 45.8%) if they were paid extra. On the other hand, 195 participants (37.3%) felt there was no time to provide oral health education or screening. No demographic characteristics were associated with nurses’ attitudes on their role in dental care. The mean (±SD) knowledge score among participants was 13.4 (±3.9), indicative of an average level of oral health knowledge. The associations between participants’ oral health knowledge, previous oral health training, and their attitudes were also examined.
Regarding nurses’ knowledge about dental caries, their best scores concerned how caries can be prevented (484 participants, 92.2%) and the facts that caries can occur at any age and may lead to tooth loss (445 participants, 84.8%). Knowledge of these factors was significantly associated with positive attitudes among nurses (P ≤ 0.05). In terms of nurses’ awareness of periodontal disease, the highest knowledge scores pertained to periodontitis leading to tooth loss (421 participants, 80.2%) and to gingivitis (375 participants, 71.4%), the latter showing a statistically significant association with nurses’ attitudes (P = 0.029). Attending lectures on oral health, along with formal education, was associated with nurses’ positive attitudes about their role in dental care (P ≤ 0.001). Regarding nurses’ current practices in dental care, more than two thirds of nurses (70.3%) replied that they responded to patients’ questions about oral health conditions, but this was not associated with positive attitudes towards dental care (P = 0.108). Oral health screening and education were practiced to a lesser extent, at 47.1% and 19.7%, respectively; these practices were significantly associated with positive attitudes among nurses (P = 0.001). Referral practices were reported by half the participants (266, 50.7%) for varied reasons; however, referral practices were not significantly associated with nurses’ attitudes. The current study investigated nurses’ attitudes towards their role in dental care and the factors that influenced positive attitudes. Nurses showed overall positive attitudes towards their role in dental care, and the majority were willing to provide oral health education and/or screening.
Having an oral health component in the undergraduate curriculum and attending continuing education in oral health were associated with positive attitudes. In the same context, nurses who provided oral health screening showed more positive attitudes than those who did not. None of the demographic factors had an influence on attitudes. As such, the study’s hypothesis was partially supported. Nurses play an essential role in health care and are the staff with the most exposure to patients; their role in oral health care and promotion cannot be overlooked. Yet, active involvement requires favorable attitudes towards this role. Nurses in the current study had a generally positive attitude towards their role in dental care, with the most favorable attitude concerning the impact of oral health on individuals’ general health and well-being (86.8%), consistent with reports from Finland (100% of all interviewed nurses) , Korea (83.2%) and Eritrea (89%) . Poor oral health can aggravate many systemic conditions and can impair children’s growth and the elderly’s quality of life ; recognizing this impact of poor oral health on general health can improve the medical care provided and treatment outcomes. Providing proper health care is imperative in every medical field; as such, efficient collaboration between medical practitioners is fundamental for proper patient services. Nurses in the current study were willing to provide oral health education and screening, although they had concerns related to workload and financial compensation. Factors such as limited time, number of nurses, and resources have been highlighted as barriers in previous studies . Attitudes can be formed through conditioning, in which engagement with the attitude object is reinforced; with positive reinforcement, the response is strengthened and the desired behavior occurs when a reward is provided .
Demographic factors in the current study showed no association with nurses’ attitudes; a similar finding was also reported by Dagnew et al. . The link between demographic or personal characteristics and nurses’ attitudes has varied in other studies: one conducted in the capital city of Saudi Arabia found that gender, nationality, and previous training were powerful predictors of favorable attitudes towards patient oral health assessment . Another study, conducted in Taif (a city in Western Saudi Arabia), found significant gender differences in knowledge and practices, but not attitudes; the same study also found that experience significantly influenced nurses’ attitudes . Differences in these reports could be due to the varied settings in which the studies were conducted; for example, nurses caring for elderly or psychiatric patients may have had more tasks than those in other departments. In terms of attitudes and demographics, younger males and those with less education have usually been viewed as risk takers, who may hold favorable attitudes not shared by the majority . The observed differences in the demographic influence on attitudes call for assessments of the environment and organizational support. Nurses in the current study showed average oral health knowledge. Findings from previous studies have been inconsistent: whereas some studies observed a lack of oral health knowledge , Shimpi et al. reported a good level of oral health knowledge in nurses . Any shortfall in knowledge could be attributed to a lack of continued oral health training in the workforce, reflected in the lack of confidence to conduct oral health screenings reported by study participants. Proper oral health knowledge among nurses and awareness of dental disease have significant implications for patient care, as oral problems can be detected at earlier stages, with better prognoses .
We found that proper knowledge about dental diseases (their clinical manifestations and consequences) was associated with positive attitudes among nurses. Attending educational sessions on oral health during undergraduate studies was associated with positive attitudes. Philip et al. observed that nurses who received oral health education at university but not in clinical practice had overall less oral health knowledge . We highlight the need to reinforce this knowledge in foundational training, to keep nurses informed about current best practices, and to improve the dental component, if any, of the nursing curriculum. In our study, half the participants mentioned that they sometimes provided oral health education to patients and referred their patients to a dentist; a majority of participants reported responding to patients’ questions about oral health conditions. In a similar study in the USA, more than two thirds claimed they frequently performed oral health assessments , while in Saudi Arabia half the nurses assessed patient oral health in their departments ; in a Nigerian study, almost all the nurses had never referred a patient to a dentist . Sound knowledge is the first step in establishing regular practices , which explains the gap in the current study. Studies conducted in Saudi Arabia have highlighted the need for mandatory workshops and informative seminars by the Saudi Ministry of Health to encourage nurses to perform oral health screening as part of their daily tasks (given the high prevalence of dental disease in the country). A previous study found that providing nurses with education on ways to implement oral health assessments increased nurses’ documentation of oral health assessments of elderly residents . Another factor found to enhance nurses’ oral health assessment and care was institutional policy , which can also clarify patient referral pathways. In the current study, more than half the nurses did not refer patients to a dentist.
The lack of a clear referral loop and guidelines was documented as the main barrier hindering referrals in previous studies . Another factor linked to less oral health assessment and care was inadequate training . This is seen at both undergraduate and graduate levels, although onsite and online workshops can provide nurses with skills to guide their practices. There are a few limitations of the current study that we want to acknowledge. The cross-sectional design can only establish an association, not causation; however, this design is most appropriate to investigate current attitudes, knowledge, and behavior. This study relied on self-reported data, which can lead to overreporting of favorable practices. Further studies may use patients’ medical records to verify these practices. Also, differences in quantifying and categorizing knowledge between studies may contribute to conflicting results. Although the snowball sampling technique may result in some self-selection bias, this design was most appropriate to collect a large amount of data from different institutions within a limited time frame. Despite these limitations, the validated questionnaire and the adequate sample size allow for the generalizability of the study findings. The findings direct attention to nurses’ role in oral health care in Eastern Saudi Arabia and can inform decision- and policy-making for future planning. Nurses in the current study had positive attitudes, average knowledge, and minimal practices with regard to dental care. Positive attitudes were noted among those who had an undergraduate oral health component, were involved in continuing education, and provided oral health screening. There is a need for the inclusion of oral health training in nurses’ under- and postgraduate education. Future studies should explore the effects of institutional guidelines/policies and type of department on nurses’ attitudes.
The integration of oral and general health should be the cornerstone of policy approaches for the prevention and control of oral disease. S1 Data (XLS)
How common is remission in rheumatoid factor-positive juvenile idiopathic arthritis patients? The multicenter Pediatric Rheumatology Academy (PeRA) research group experience
Juvenile idiopathic arthritis (JIA) is the most common chronic rheumatic disease in children . There are seven JIA subtypes according to the ILAR (International League of Associations for Rheumatology) classification system, each with a distinct epidemiological presentation, pathogenesis, genetic background, and clinical manifestations. The treatment approach, prognosis, and associated morbidity vary according to JIA subtype . Rheumatoid factor (RF)-positive polyarthritis is the least common JIA subtype, with a frequency of 5% of JIA patients . It is defined by the involvement of ≥ 5 joints and by the presence of RF positivity on 2 occasions ≥ 3 months apart, both within the first 6 months following disease onset. As the disease may have a devastating course when not managed appropriately, delays in diagnosis and treatment can be catastrophic, leading to irreversible joint destruction as well as extra-articular complications . Among conventional disease-modifying anti-rheumatic drugs (DMARDs), MTX is the most commonly used in pediatric rheumatology; however, there are only a few evidence-based treatment guidelines for polyarticular JIA . Biological therapies, introduced after 2000, decreased both morbidity and mortality in patients with rheumatic diseases . Despite these advances, however, a large number of JIA patients continue to have active disease in the long term, with or without damage. In all, 42%-67% of patients with JIA have active disease while transitioning to adult care, and 45%-50% have functional limitations . Although remission and clinically inactive disease have been inconsistently defined in JIA, patients with RF-positive polyarthritis are known to have the lowest remission rates among JIA patients.
Moreover, functional disability in RF-positive polyarthritis patients is much more severe than in patients with other subtypes; therefore, it is essential to evaluate disease activation and articular and extra-articular damage during the disease course . The literature contains limited data on the RF-positive polyarthritis subtype: although several studies on JIA have been published, most provided general JIA data and did not concentrate on RF-positive polyarthritis as a distinct clinical entity. The present study aimed to analyze the demographic and clinical characteristics, treatment modalities, and treatment response during long-term follow-up in a large multicentric cohort of RF-positive polyarthritis patients. This multicenter, retrospective, cross-sectional, observational cohort study analyzed data from RF-positive polyarthritis patients followed at 10 pediatric rheumatology centers between 2017 and 2022 (Pediatric Rheumatology Academy [PeRA]-Research Group [RG]). The general methods of the PeRA-RG study have been described previously . Inclusion criteria were a diagnosis of RF-positive polyarthritis according to the ILAR criteria and a follow-up of ≥ 6 months. Data on patient gender and age, age at disease onset, and age at diagnosis were obtained from medical charts. An information form for collecting data on patients’ number of active joints, number of joints with limited motion, acute-phase reactant levels (erythrocyte sedimentation rate [ESR] and C-reactive protein [CRP]), RF titer and anti-cyclic citrullinated peptide (anti-CCP) levels, patient/parent VAS pain scores (0–10 cm), and physician VAS pain scores was completed at the time of diagnosis and throughout the follow-up period. JIA treatments and the durations of treatment were also recorded. Follow-ups were conducted at 3, 6, 12, 18, 24, and 36 months. JIA status was determined according to the inactive disease criteria defined by Wallace et al.
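The Wallace et al. inactive-disease criteria referenced here can be encoded as a simple boolean check. This is a sketch only, with hypothetical field names, summarizing the commonly cited components (no active arthritis; no fever, rash, serositis, splenomegaly, or lymphadenopathy attributable to JIA; no active uveitis; normal ESR/CRP; physician global assessment at the best possible score):

```python
from dataclasses import dataclass

@dataclass
class Visit:
    # Field names are hypothetical, for illustration only.
    active_joint_count: int
    systemic_features: bool      # fever/rash/serositis etc. attributable to JIA
    active_uveitis: bool
    acute_phase_normal: bool     # ESR and/or CRP within normal limits
    physician_global_vas: float  # 0-10 cm; 0 = best possible score

def is_inactive_disease(v: Visit) -> bool:
    """Rough sketch of the Wallace et al. inactive-disease criteria."""
    return (v.active_joint_count == 0
            and not v.systemic_features
            and not v.active_uveitis
            and v.acute_phase_normal
            and v.physician_global_vas == 0)

# e.g. a visit with 2 active joints fails the check
is_inactive_disease(Visit(2, False, False, True, 0.0))  # False
```

A check of this kind also underlies the study's definition of MTX non-response (active disease after 3 months of treatment).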
Initially, all patients were treated with subcutaneous methotrexate (MTX) at 15 mg/m²/week. The patients were divided into 2 groups based on MTX response: group 1, the MTX-responsive group, and group 2, the MTX-unresponsive group (defined as active disease according to the Wallace et al. criteria after the 3rd month of MTX treatment). Clinical and laboratory findings were compared between the 2 groups. Disease damage was measured using the Juvenile Arthritis Damage Index (JADI), which is composed of 2 parts: one for the assessment of articular damage (JADI-A) and one for the assessment of extra-articular damage (JADI-E). The maximum JADI-A score is 72, and the maximum JADI-E score is 17 . The study protocol was approved by the hospital ethics committee.
Statistical analysis
Statistical analysis was performed using IBM SPSS Statistics for Windows v.21.0 (IBM Corp., Armonk, NY). The study variables were investigated using visual (histograms and probability plots) and analytic methods (Kolmogorov–Smirnov and Shapiro–Wilk tests) to determine the normality of their distribution. Categorical parameters are presented as percentages. Parametric parameters are expressed as mean ± SD and non-parametric parameters as median (IQR). Categorical variables were compared using the chi-squared test or Fisher’s exact test, as appropriate. Differences in continuous data between the two groups were evaluated via the Student’s t-test or the Mann–Whitney U test, as appropriate. The Friedman test was used to compare the change in WBC, PLT, ESR, CRP, and VAS values, and the active joint counts, between baseline, initiation of treatment, and after 6 months of treatment. The level of statistical significance was set at P < 0.05.
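The Friedman test used for the repeated measurements can be illustrated with a minimal pure-Python sketch (toy ESR values, hypothetical; no tie correction, and the closed-form p-value below applies only to df = 2, i.e., three timepoints):

```python
import math

def friedman(data):
    """Friedman test for k related samples.

    data: list of n subjects, each a list of k measurements
    (e.g., ESR at baseline, 3 months, and 6 months).
    Assumes no ties within a subject; returns (Q, df, p), with p from
    the chi-squared approximation (closed form only for df = 2).
    """
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        # rank each subject's k values from smallest (1) to largest (k)
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    q = 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)
    df = k - 1
    # chi-squared survival function has the closed form exp(-q/2) for df = 2
    p = math.exp(-q / 2) if df == 2 else float("nan")
    return q, df, p

# Hypothetical ESR values for 4 patients at baseline, 3 months, and 6 months
esr = [[40, 25, 10], [50, 30, 15], [35, 20, 22], [45, 28, 12]]
q, df, p = friedman(esr)  # Q = 6.5, df = 2, p < 0.05: ESR changes over time
```

In practice `scipy.stats.friedmanchisquare` (which handles ties) would be used; the sketch just shows the rank-sum arithmetic behind the statistic.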
The study included 56 patients with RF-positive polyarthritis who were followed up between 2017 and 2022. Among the patients, 45 (80.4%) were female and 11 (19.6%) were male. The median age of the patients was 18.0 years (IQR: 6.0–25.0 years). The median age at symptom onset and diagnosis was 13.2 years (IQR: 9.0–15.0 years) and 13.9 years (IQR: 9.2–15.1 years), respectively. The median duration of follow-up was 41.5 months (IQR: 19.5–75.7 months) and the median duration from symptom onset to diagnosis was 4.0 months (IQR: 2.0–6.0 months). Baseline clinical and laboratory patient characteristics are given in Table . All patients (100%) were initially treated with MTX. The median duration of MTX treatment was 12 months (IQR: 3–120 months). Concomitantly with MTX, 47 (83.9%) patients received non-steroid anti-inflammatory drugs (NSAIDs) and 42 (75%) used oral corticosteroids. Four patients required a switch from MTX to leflunomide (LFN) due to gastrointestinal intolerance; however, none of the patients receiving LFN reached remission.
In total, 17 (30.4%) patients achieved remission within a median of 4 months (IQR: 3–12 months), whereas MTX was ineffective in 39 (69.6%) patients. Among these 39 patients, 34 (60.7%) required a biological agent (BA); the parents of the remaining 5 patients refused biologic therapy, and these patients continued oral corticosteroid therapy. Among the 34 patients, the first-choice BA was as follows: etanercept (ETN), n = 22 (64.7%); adalimumab (ADA), n = 8 (23.5%); tocilizumab (TOC), n = 4 (11.8%). In addition, 27 (79.4%) of these 34 patients achieved remission a median of 3 months (IQR: 1–4 months) after BA initiation. The initial BA was switched to another BA in 7 (20.6%) patients, as follows: initial BA (anti-TNF drug) to another anti-TNF drug of the same class, n = 2; initial BA to a BA with a different mechanism of action (TOC), n = 5. The disease-related parameters during follow-up are summarized in Table . In all, 40 patients had a 12-month follow-up and 25 patients had a 24-month follow-up; 42.9% of the patients followed up for 12 months had inactive disease, as did 44% of the patients followed up for 24 months. There were no significant differences in age, gender, age at diagnosis, diagnostic delay, active joint count at diagnosis, patient/parent VAS pain scores at diagnosis, physician VAS pain score at diagnosis, acute-phase reactant levels (ESR and CRP) at diagnosis, or RF titer and anti-CCP levels between the patients with inactive disease and those with active disease at 24 months. Among the 9 patients who achieved inactive disease, treatment was ceased after a median of 20 months (IQR: 9–36 months) of inactive disease. In 4 of these 9 patients, flare-ups occurred a median of 3 months (IQR: 3–5.2 months) after treatment cessation. Treatment was discontinued in 4 of the 17 patients with an MTX response; a flare occurred in 1 patient 6 months after treatment was discontinued. The treatment course is summarized in Fig. .
There were no significant differences in age, gender, age at diagnosis, antinuclear antibody (ANA) positivity, the active joint count at diagnosis, patient/parent VAS pain scores at diagnosis, physician VAS pain score at diagnosis, acute-phase reactant levels (ESR and CRP) at diagnosis, or RF titer and anti-CCP levels between the patients that did and did not respond to MTX (a comparison of the clinical and laboratory characteristics of the 2 groups is given in Table ). Furthermore, there were no significant differences between the patients that did and did not respond to MTX in the JADI-A and JADI-E scores at the 6-, 12-, and 24-month follow-up.

Data regarding the outcome of JIA are increasing worldwide; however, published data on the outcome of RF-positive polyarthritis remain scarce. Although advances in JIA treatment have led to a reduction in disease-related joint damage and an increase in physical function and quality of life, some patients with RF-positive polyarthritis still have active disease. In the present study, 44% of RF-positive polyarthritis patients had inactive disease at the 24-month follow-up. The median age of onset of RF-positive polyarthritis is 9–11 years (range: 1.5–15 years) and affected females outnumber males (ratios of 4:1 to 13:1) in large series . Consistent with the literature, in the present study the median age at symptom onset was 13.2 years and female predominance was noted (4:1). It was previously noted that the upper and lower extremity large and small joints are affected, as well as the cervical spine and temporomandibular joint (TMJ), whereas the thoracic and lumbar spine and sacroiliac joints are spared. Although large joints are commonly involved, the characteristic pattern is symmetrical arthritis affecting the metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints of the hands, the wrists, and the metatarsophalangeal (MTP) and PIP joints of the feet .
Similarly, in the present study, symmetrical arthritis affecting the MCP and PIP joints of the hands was commonly observed (82%). Compared to RF-negative polyarthritis, TMJ involvement is less common, but can occur in up to 30% of RF-positive polyarthritis patients . In the present cohort hip involvement was noted in 6 (10.7%) patients, cervical spine involvement was noted in 5 (8.9%) patients, and TMJ involvement was noted in 3 (5.3%) patients. Furthermore, the sternoclavicular joint was affected in 1 patient, the sacroiliac joint was affected in 1 patient, and the coxofemoral joint was affected in 1 patient. Subcutaneous nodules, uveitis, and other extraarticular manifestations of the disease were not observed in the present cohort. Among all RF-positive polyarthritis patients, 42%–56% have ANA positivity . Similarly, in the present study, ANA positivity was noted in 41.1% of the patients. In the literature, the frequency of ACPA in RF-positive polyarthritis patients varies from 57% to 90%. ACPA correlates with disease severity and radiographic joint damage ; however, in the present study ACPA was not associated with disease activity, estimation of MTX response, or JADI scores. This may be because ACPA was not studied in all patients. In 2005, Viola et al. studied 158 JIA patients with a mean follow-up period of 7.3 years, and reported a median JADI-A score of 0 (IQR: 0–39) and a median JADI-E score of 0 (IQR: 0–7). Subsequently, Menon et al. studied patients with JIA with a mean follow-up of 2 years, reporting a median JADI-A score of 0 (IQR: 0–52) and a median JADI-E score of 0 (IQR: 0–6). In the present study the maximum scores were lower than those previously reported, which may be due to the increased use of biologic drugs in recent years. According to current treatment recommendations for RF-positive polyarthritis, MTX should be initiated at the time of diagnosis unless contraindicated .
Persistent high or moderate disease activity despite MTX treatment necessitates a prompt switch to biological therapy . In one study, patients with RF-positive polyarthritis had the lowest drug-free remission rate among children with chronic arthritis . Therefore, early aggressive treatment has long been accepted for RF-positive polyarthritis. In this study, all patients received MTX as first-line therapy, but 60.7% of the patients did not achieve remission with MTX and biologics were added to their treatment. There has been a rapid expansion of biologic therapies that effectively treat JIA, including RF-positive polyarthritis. The first biologic DMARD studied in JIA patients was etanercept. The efficacy and acceptable safety profile of etanercept were demonstrated in a randomized controlled trial (RCT) published in 2000 that included patients with polyarticular JIA that were resistant or intolerant to MTX . Over the following years other anti-TNF agents, including adalimumab, infliximab, and golimumab, the IL-6 inhibitor tocilizumab, and the costimulation modulator abatacept were tested in RCTs that included patients with polyarticular JIA . These approaches were reported to result in clinically inactive disease in a significant proportion of polyarticular JIA patients . In the present study, 34 (60.7%) of the patients were treated with a BA, of which 25 were followed up for 24 months; however, 56% (n = 14) still had active disease at 24 months. In all, only 44% of our cohort achieved remission during the 24-month follow-up, suggesting that the window of opportunity for early intervention might have been missed. Additional larger-scale international studies are needed to predict non-responsiveness to MTX in RF-positive polyarthritis patients more accurately.
Although this study is limited by its retrospective design and small sample, to our knowledge it is the largest series of children with RF-positive polyarthritis in Turkey evaluating remission status with real-life data. Even so, considering the limited number of studies on RF-positive polyarthritis, we believe that our study, which was carried out at the largest pediatric rheumatology clinics in Turkey, is valuable. Not all RF-positive polyarthritis patients treated with MTX achieve remission, and we could not identify a way to predict MTX resistance in this group of patients, which may be due to the small number of patients; currently, there does not appear to be a marker in the literature that predicts MTX resistance. The present findings indicate the importance of developing targeted therapy strategies and identifying parameters that can predict MTX resistance in patients with RF-positive polyarthritis. We suggest that RF-positive JIA be considered a separate entity.
Phase of firing coding of learning variables across the fronto-striatal network during feature-based learning
The lateral prefrontal cortex (LPFC) and anterior cingulate cortex (ACC) are key brain regions for adjusting to changing environmental task demands , . Both regions project to partly overlapping regions in the anterior striatum (STR), which feeds projections back via the thalamus, thereby closing recurrent fronto-striatal-thalamic loops . Neurons in this recurrent network encode multiple learning variables during goal-directed behaviors, including the value of currently received outcomes, a memory of recently experienced outcomes, and a reward prediction error that indicates how unexpected the currently received outcomes were, given prior experiences , . The multiplexing of outcomes, outcome history and outcome unexpectedness (prediction errors) within the same neuronal population is evident in firing rate modulations in fronto-striatal brain areas , but how this firing is temporally organized within the larger network is unresolved – . A large body of evidence has shown that ACC and LPFC synchronize their local activities at a characteristic beta oscillation frequency – , and that both areas engage in transient beta rhythmic oscillatory activity with the STR during complex goal-directed tasks – . However, whether this beta oscillatory activity is informative for learning and behavioral adjustment has remained unresolved – . Prior studies have documented that beta activity emerges specifically during the processing of outcomes following correct trials during habit learning , and that, following error trials, overall beta activity is larger when the committed error is smaller . However, these studies did not quantify whether neuronal spiking activity synchronizing to these beta oscillations contains learning-relevant outcome information. We, therefore, aimed to test how outcome-related beta rhythmic spiking activity relates to the behavioral learning of reward rules in ACC, LPFC, and STR.
First, we quantified firing rate information about current outcomes, prediction errors of these outcomes, and the history of recent reward. These variables might be conveyed independently of network-level beta oscillatory activity. However, theoretical studies suggest that neuronal coding utilizing temporal organization can be efficient, high in capacity, and robust to noise , , , . In addition, coding of information in the temporal activity pattern has been linked to mechanisms of efficient communication among neuronal groups, suggesting that coherently synchronized groups can exchange information by phase aligning their disinhibited activity periods – . To test the role of temporal coding, we recorded from LPFC, ACC, and STR while macaque monkeys engaged in trial-and-error reversal learning of feature reward rules. We found that during outcome processing, each area contains segregated ensembles of neurons whose firing rates encode the current Outcome (firing differently for correct vs. errors), the Reward Prediction Error of those outcomes (firing differently depending on whether the current outcome differed from or matched the previous trial’s outcome, as in e.g., , ), and the recent Outcome History (increasing firing when the current outcomes matched previous outcomes). A large proportion of rate coding neurons phase-synchronized long-range to remote areas of the fronto-striatal network at a shared 10–25 Hz beta frequency range. We found that for those neurons that phase-synchronize long-range, the three learning variables are encoded more precisely for spikes elicited at narrow oscillation phases in the beta band. This phase-of-firing gain of encoding significantly enhances the firing rate code and occurs at phases that were partly away from the neurons’ preferred spike phase. These findings document that neural coding of learning variables is enhanced through the phase of firing across the ACC, LPFC, and STR of nonhuman primates.
Previous outcomes guide choice

Animals performed a feature-based reversal-learning task . Subjects were shown two stimuli with opposite colors and had to learn which of them led to reward (Fig. ). The same color remained associated with reward for at least 30 trials before an uncued reversal switched the color-reward association (Fig. ). During each trial, the subjects monitored the stimuli for transient dimming events in the colored stimuli. They received a fluid reward when making a saccade in response to the dimming of the stimulus with the reward associated color, while the dimming of the non-reward associated color had to be ignored. A correct, rewarded saccade to the dimming of the rewarded stimulus had to be made in the up- or downward direction of motion of that stimulus. This task required covert selective attention to one of two peripheral stimuli based on color, while the overt choice was based on the motion direction of the covertly attended stimulus. Correct responses were those that occurred according to the motion direction of the rewarded target in the correct response window, whereas errors were responses made to the incorrect, non-rewarded target, or in the incorrect response window in response to the distractor . In 110 and 51 sessions from monkeys HA and KE, respectively, we found that subjects attained plateau performance of on average 80.2% (HA: 78.8%, KE: 83.6%) within 10 trials after color-reward reversal (Fig. ). Using a binomial General Linear Model (GLM) to predict current choice outcomes (outcomes from correct or erroneous choices, excluding fixation breaks), we found that for both subjects, outcomes from up to three trials into the past significantly predicted the current choice’s outcome (Fig. ; Wilcoxon signrank test, p < 0.05, multiple comparison corrected), closely matching previous findings .
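The session-level analysis above, a binomial GLM predicting the current choice's outcome from recent outcomes, can be sketched as follows. The data here are simulated purely for illustration (success made more likely after recent successes); the lag structure and parameter values are assumptions, not the monkeys' actual behavior or the study's fitting code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulate outcomes where success is more likely after recent successes
# (illustrative data only).
n_trials = 2000
outcomes = np.zeros(n_trials, dtype=int)
for t in range(n_trials):
    recent = outcomes[max(0, t - 3):t].mean() if t > 0 else 0.5
    outcomes[t] = rng.random() < 0.3 + 0.5 * recent

# Lagged design matrix: outcomes at t-1, t-2, t-3 predict the outcome at t.
lags = 3
X = np.column_stack([outcomes[lags - k:n_trials - k] for k in (1, 2, 3)])
y = outcomes[lags:]

glm = LogisticRegression().fit(X, y)
lag_weights = glm.coef_.ravel()  # positive weights: past success predicts success
```

Significance of each lag's weight would then be assessed across sessions, as in the Wilcoxon sign-rank test reported above.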
ACC, LPFC, and STR neurons encode outcomes, their history and their prediction error

To test how previous and current outcomes are encoded at the single neuron level, we analyzed a total of 1460 neurons, with 332/227 (monkey HA/KE) neurons in LPFC, 268/182 neurons in ACC, and 221/230 neurons in anterior STR (Fig. , Supplementary Fig. ). These regions have previously been shown to encode outcome, outcome history, and prediction error information , – . We found multiple example neurons encoding different types of outcome variables. Some cells responded differently to correct versus erroneous trial outcomes irrespective of previous outcomes (Fig. ), while others responded strongest when the current outcome deviated from the previous trial’s outcome (signaling reward prediction error) (Fig. ), or when the current outcome was similar to the previous trial’s outcome, i.e., following a sequence of correct trials or a sequence of error trials (Fig. ). We quantified these types of outcome encoding using a LASSO Poisson GLM that predicted the spike counts during the outcome period (0.1–0.7 s after reward onset) and extracted the characteristic patterns of beta weights across the past and current trial outcomes that distinguished different types of outcome encoding (Fig. ). Neurons that encoded mostly the current trial’s outcome showed large weights only for the current trial (Outcome encoding type). Neurons encoding a prediction error showed beta weights for previous trials that were opposite in sign to the current trial’s outcome (Reward Prediction Error (RPE) encoding type). In neurons encoding the history of recent rewards, beta weights ramped up over recent trials toward the current trial outcome (Outcome History encoding type) (for examples, see also insets in Fig. ). We used a clustering analysis to test whether the three types of outcome encoding were separable from each other and prevalent in each of the recorded brain areas (Fig. , Supplementary Fig. ).
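The single-neuron fit above can be illustrated with a toy version. The paper uses a LASSO (L1-penalized) Poisson GLM; as a simplified stand-in, the sketch below uses scikit-learn's `PoissonRegressor`, which applies an L2 penalty, and a simulated "Outcome"-type cell. The regressor layout, rates, and trial counts are all assumptions for demonstration.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)
n_trials = 3000
outcomes = rng.integers(0, 2, size=n_trials)  # 1 = correct, 0 = error

# Regressors: outcome on the current trial t and the three preceding trials.
t = np.arange(3, n_trials)
X = np.column_stack([outcomes[t - k] for k in (0, 1, 2, 3)])

# Simulated "Outcome"-type cell: spike count depends only on the current outcome.
counts = rng.poisson(np.exp(0.5 + 1.0 * X[:, 0]))

glm = PoissonRegressor(alpha=1e-3, max_iter=300).fit(X, counts)
betas = glm.coef_  # weight pattern over [t, t-1, t-2, t-3] reveals the coding type
```

For this simulated cell the fitted pattern has a large weight only at the current trial, the signature of the Outcome encoding type described above; an RPE-type cell would instead show past weights opposite in sign to the current one.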
Clustering showed that neurons encoding each of the three variables were statistically separable with reliable cluster assignments of neurons evident in an average Silhouette measure of cluster separability of 0.81 for LPFC, 0.57 for ACC, and 0.75 for STR (Supplementary Fig. ) . The clustering does not preclude the possibility of a more continuous encoding space, but it statistically justifies focusing analysis on three sets of neurons with well distinguishable encoding patterns (see Supplementary Fig. ). Across the population of neurons with significant encoding, Outcome cells were the most populous (~59%, 234/384 in monkey HA and 185/329 in monkey KE), followed by ~26% of neurons encoding Reward Prediction Errors (64/231 in monkey HA and 39/170 in monkey KE) and ~32% of neurons encoding Outcome History (76/206 in monkey HA and 33/139 in monkey KE) (Fig. ; χ 2 test, χ 2 = 86.02, p ~0). The relative frequency of these encoding types did not differ between areas ( χ 2 test, χ 2 = 3.64, p = 0.46). On the other hand, the strength of encoding differed on the basis of area for Outcome cells (Kruskal–Wallis test, χ 2 = 26.6, p ~0), with stronger encoding in ACC than LPFC or STR, as well as for Outcome History encoding cells ( χ 2 -test, χ 2 = 19.7, p ~0) with stronger encoding in ACC and LPFC than in STR, whereas the strength of RPE encoding was similar across areas ( χ 2 = 2.49, p = 0.29) (see Supplementary Fig. for all pair-wise comparisons). In ACC, LPFC and STR, Outcome, RPE and Outcome History encoding emerged shortly (within 0.3 s) after outcomes were received (see “Methods”; Supplementary Fig. , Wilcoxon signrank test, p « 0.05). Neurons encoding Outcome, RPE, or Outcome History showed similar overall firing rates (Supplementary Fig. ; ANOVA; LPFC, F = 1.32, p = 0.27; ACC, F = 0.58, p = 0.58; STR, F = 1.05, p = 0.35).
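The separability analysis above can be sketched by clustering idealized beta-weight patterns and scoring the result with the silhouette measure. The templates, noise level, and use of k-means are assumptions for illustration; the paper's exact clustering procedure is described in its Methods.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)

# Idealized GLM beta-weight templates over trials [t-3, t-2, t-1, t] for the
# three encoding types described above (illustrative values, not fitted data).
templates = np.array([
    [0.0,  0.0,  0.0,  1.0],    # Outcome: current trial only
    [-0.5, -0.5, -0.5, 1.0],    # RPE: past weights opposite in sign to current
    [0.25, 0.5,  0.75, 1.0],    # Outcome History: ramping toward current trial
])

# 50 simulated neurons per type, with noise around each template.
patterns = np.vstack([tpl + rng.normal(0, 0.1, size=(50, 4)) for tpl in templates])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(patterns)
sep = silhouette_score(patterns, labels)  # well above 0: separable clusters
```

Silhouette values near the 0.57 to 0.81 range reported above indicate that within-cluster distances are much smaller than distances to the nearest other cluster.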
Neurons synchronize at a 10–25 Hz beta band across ACC, LPFC, and STR

We found similar proportions and activation time courses of encoding neurons in ACC, LPFC and STR (Supplementary Fig. ), which raised the question of how these neuronal populations are functionally connected. One possibility is that neuronal firing patterns are organized temporally, such that spikes in one area are phase-synchronized to neuronal population activity in remote areas. We assessed synchrony as the phase consistency of neuronal spikes with local field potential (LFP) fluctuations in distally recorded areas using the pairwise-phase consistency (PPC) metric, converting the PPC values into an effect size , (Fig. ; see “Methods”). Across all ( n = 7938) spike-LFP pairs, we found a pronounced peak of phase synchronization in the beta band (10–25 Hz), with neurons firing on average ~1.15 times more spikes on their preferred, average phase than at the opposite phase when considering the population average in the beta band (Fig. ), and ~1.39 times more spikes on the preferred phase when selecting for each neuron the beta frequency with peak synchrony (Supplementary Fig. ). Prominent beta-band synchrony was evident for neurons that encoded outcome variables in their firing rates and for those that did not show encoding (Fig. ), with the peak synchrony being stronger for cell-LFP pairs with non-coding rather than coding cells (unpaired t -test, T = 7.67, p ~0; Supplementary Fig. ). Overall 53% (4230/7938) of the spike-LFP pairs showed significant phase synchronization within the 10–25 Hz range (Fig. ; Rayleigh test, p < 0.05, see “Methods” for prominence criteria), with similar proportions across all three areas (LPFC, 1506/2961, 50.9%; ACC, 1473/2442, 60.3%; STR, 1292/2524, 51.2%).
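The PPC used above has a convenient closed form: it is the average cosine of the angular distance over all pairs of spike phases, and, unlike the resultant length, its expected value does not depend on spike count. A minimal sketch with synthetic phase data (the von Mises concentration is an arbitrary illustrative choice):

```python
import numpy as np

def ppc(phases):
    """Pairwise phase consistency: the mean cosine of the angular distance
    across all spike-phase pairs, computed in closed form from the resultant."""
    z = np.exp(1j * np.asarray(phases))
    n = z.size
    return (np.abs(z.sum()) ** 2 - n) / (n * (n - 1))

rng = np.random.default_rng(4)
locked = rng.vonmises(0.0, 4.0, 500)        # spikes clustered near one beta phase
unlocked = rng.uniform(-np.pi, np.pi, 500)  # spikes with no phase preference
ppc_locked, ppc_unlocked = ppc(locked), ppc(unlocked)
```

Phase-locked spiking yields a PPC well above zero, while uniformly distributed spike phases yield a PPC near zero, which is what makes the metric usable as an unbiased synchrony estimate across neurons with different firing rates.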
Consistent with these results we found that the synchrony effect (the proportion of spikes at preferred over non-preferred phases) was similarly high for spike-LFP pairs with neurons encoding Outcome (1.37 ± 0.007), RPE (1.35 ± 0.013), and Outcome History (1.34 ± 0.011). There was only a trend for phase synchronization to be different between encoding clusters (ANOVA, F = 2.8, p = 0.061), which post-hoc analysis revealed to be driven primarily by differences between Outcome History and Outcome clusters ( p = 0.078, multiple comparison corrected), rather than Outcome History and RPE ( p = 0.80) or RPE and Outcome clusters ( p = 0.37). Next, we tested whether spike-LFP synchronization showed area-specificity for neurons that significantly encoded Outcome, RPE or Outcome History in their firing rates. For each spike-LFP pair, we selected the beta band frequency with the most prominent PPC value. Within-area beta synchrony differed on the basis of area (ANOVA, F = 32.6, p ~0), with the strongest synchrony within ACC, compared to LPFC (multiple comparison corrected, p = 0.014) or STR ( p ~0) (Fig. ). Synchrony also differed when assessing spikes and LFPs from different areas (ANOVA, F = 12.7, p ~0), with neurons in ACC showing stronger between-area spike-LFP synchrony, as compared to LPFC ( p ~0) and STR ( p = 0.042). We found a trend for stronger between-area synchrony with spikes originating in STR, as compared to LPFC ( p = 0.058) (Fig. ). Testing for the reciprocity of beta-band phase synchrony showed that ACC spikes phase synchronized more strongly to LFP beta activity in the LPFC than vice versa ( p = 0.047) (Fig. ). LPFC and STR pairs showed statistically indistinguishable spike-phase synchrony strength ( p = 0.92), as did ACC and STR pairs ( p = 0.26). The findings were similar when inter-areal synchrony was analyzed separately at each frequency (Supplementary Fig. ).
For both monkeys, neurons in ACC showed the strongest spike synchronization compared to neurons from LPFC and STR (the area difference is significant in monkey HA, and trends the same way in monkey KE; see Supplementary Fig. ). Moreover, across all three areas, the strength of 15–25 Hz phase synchronization was statistically indistinguishable in the [−1, 0] s pre-outcome period compared to the [0.1, 1] s post-outcome period (paired t -test, abs( T ) < 1.57, p > 0.12; Supplementary Fig. ). The baseline period ([−1, 0] s before stimulus onset) and the post-outcome period showed similar PPC values in ACC and STR (abs( T ) < 0.49, p > 0.6), while LPFC showed stronger phase synchronization in the post-outcome period ( T = 2.82, p = 0.0049; Supplementary Fig. ).

Phase-of-firing at 10–25 Hz enhances encoding of outcome, prediction error and outcome history

Neurons that synchronized to the LFP elicit more spikes at their mean spike-LFP phase, which we denote as the neurons’ preferred spike phase . This preferred spike-phase might thus be important to encode information shared among areas of the network , . We tested this hypothesis by quantifying how much Outcome, RPE, and Outcome History information is available to neurons at different phase bins relative to their mean phase. If the phase-of-firing conveys information, then differences in spike counts between conditions should vary across phases, as opposed to a pure firing rate code that predicts equal information for spike counts across phase bins (Fig. ) , . Figure shows an example neuron exhibiting such phase-of-firing coding (with spikes from ACC and LFP beta phases from STR). This neuron exhibited increased firing on error trials compared to correct trials, but only when considering spikes near its preferred spike phase, with firing at the opposite phase showing no difference.
To quantify this increase of coding when considering the phase-of-firing code for all three information types, we selected for each neuron the frequency within 10–25 Hz that showed maximal spike-LFP synchrony, subtracted the mean (preferred) spike phase from all phases (i.e., setting the preferred phase to zero, to allow for comparison between neurons), and binned spikes on the basis of the LFP beta phases. To prevent an influence of overall firing rate changes between phase bins, we adjusted the width of each of the six phase bins to have equal spike counts across bins (see “Methods”). We then fitted a GLM to the firing rates of each phase bin separately to quantify the Outcome, RPE, and Outcome History encoding for each phase bin and compared this phase-specific encoding to a null distribution obtained by randomly shuffling the spike phases prior to binning. Figure illustrates example neurons for which the encoding systematically varied as a function of phase (for more examples, see Supplementary Fig. ). The example spike-LFP pair from Fig. encoded the trial Outcome significantly stronger than a phase-blind rate code in spikes within ~[-π/2, π/2] radians relative to its preferred spike phase and weaker than a phase-blind rate code at opposite phases (Fig. , left). Enhanced phase-of-firing encoding was similarly evident for RPE and Outcome History as independent variables (Fig. , middle and right panels). We estimated the strength of this phase modulation of rate encoding for each spike-LFP pair as the amplitude of a cosine that was fit to the phase-binned encoding metric, normalized by the mean encoding across phase bins, which we term the Phase-of-Firing Gain (PFG). We further accounted for the positive bias in cosine amplitude estimation by normalizing this quantity by the cosine amplitude obtained from fitting the phase-binned metric after randomly shuffling spike phases.
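The equal-count phase binning described above (center phases on the preferred phase, then choose bin edges so every bin holds the same number of spikes) can be sketched as follows; the synthetic spike phases and the quantile-based edge choice are illustrative assumptions.

```python
import numpy as np

def equal_count_phase_bins(phases, n_bins=6):
    """Re-center spike phases on the neuron's mean (preferred) phase, then set
    bin edges at quantiles so that every phase bin holds the same spike count."""
    phases = np.asarray(phases)
    preferred = np.angle(np.exp(1j * phases).mean())
    centered = np.angle(np.exp(1j * (phases - preferred)))  # wrap to (-pi, pi]
    edges = np.quantile(centered, np.linspace(0, 1, n_bins + 1))
    idx = np.searchsorted(edges, centered, side="right") - 1
    return np.clip(idx, 0, n_bins - 1), preferred

rng = np.random.default_rng(5)
spike_phases = rng.vonmises(1.0, 2.0, 1200)  # synthetic beta phases of spikes
bin_idx, preferred = equal_count_phase_bins(spike_phases)
bin_counts = np.bincount(bin_idx, minlength=6)  # near-equal by construction
```

Equalizing spike counts per bin removes the trivial confound that more spikes near the preferred phase would otherwise give those bins more statistical power.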
We refer to this difference of the observed to the randomly shuffled phase modulation of encoding as the Encoding Phase-of-Firing Gain (EPFG; see “Methods”). This metric reflects an unbiased ratio of firing rate differences between preferred and anti-preferred encoding-phase bins. Of the 876 spike-LFP pairs that significantly synchronized in the 10–25 Hz band and encoded information in their firing rate, we found that 139 (16%) spike-LFP pairs showed significant phase-modulation, i.e., these pairs encoded significantly more information when taking into account the phase of firing than their average, phase-blind firing rate (randomization test, p < 0.05). A significant EPFG was evident for neurons whose firing encoded Outcome (Wilcoxon signrank test, Z = 8.41, p ~0), RPE ( Z = 3.27, p = 0.011), and Outcome History ( Z = 3.24, p = 0.012). The EPFG did not differ between these three functional clusters (Kruskal–Wallis test, χ 2 = 0.283, p = 0.87) (Fig. ). Similarly, EPFG was evident for spike-LFP pairs when the spiking neuron was in ACC ( Z = 7.5, p ~0), in STR ( Z = 4.98, p ~0), or in LPFC ( Z = 3.86, p = 0.0001) (Fig. ), but the EPFG strength differed between areas (Kruskal–Wallis test, χ 2 = 7.87, p = 0.02). Neurons in ACC showed significantly larger EPFG than neurons in LPFC ( χ 2 = 7.66, p = 0.0056) and a trend for larger EPFG than neurons in STR ( χ 2 = 2.89, p = 0.089). Similarly, spike-LFP pairs with spikes from an ACC neuron were more likely to show individually significant EPFG ( χ 2 test, χ 2 = 17.7, p = 0.0014; Supplementary Fig. ). When considering encoding strength on the basis of the LFP site of the spike-LFP pairs, EPFG was above chance in each of the three areas (Fig. ; ACC, Z = 5.02, p ~0; LPFC, Z = 5.62, p ~0; and STR, Z = 5.8, p ~0), but did not vary by the LFP area (Kruskal–Wallis test, χ 2 = 0.192, p = 0.91). EPFG differences were more pronounced when selecting for each encoding metric the 25% of spike-LFP pairs with the largest EPFG.
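The PFG and its shuffle-corrected version, the EPFG, can be sketched directly from the definitions above: fit a cosine to the encoding metric across equally spaced phase bins, normalize by the mean, and subtract the same quantity computed after shuffling. The toy per-bin values below are assumptions; the paper computes the per-bin metric from GLM fits to phase-binned spikes.

```python
import numpy as np

def phase_of_firing_gain(binned):
    """Amplitude of the first Fourier (cosine) component of an encoding metric
    across equally spaced phase bins, normalized by the mean across bins."""
    binned = np.asarray(binned, dtype=float)
    n = binned.size
    centers = np.linspace(-np.pi, np.pi, n, endpoint=False)
    a = 2.0 / n * (np.cos(centers) @ binned)
    b = 2.0 / n * (np.sin(centers) @ binned)
    return np.hypot(a, b) / binned.mean()

def encoding_phase_of_firing_gain(binned, n_shuffles=2000, seed=0):
    """Observed gain minus the mean gain after shuffling values across phase
    bins, correcting the positive bias of a fitted cosine amplitude."""
    rng = np.random.default_rng(seed)
    null = np.mean([phase_of_firing_gain(rng.permutation(binned))
                    for _ in range(n_shuffles)])
    return phase_of_firing_gain(binned) - null

centers = np.linspace(-np.pi, np.pi, 6, endpoint=False)
modulated = 1.0 + 0.5 * np.cos(centers)  # encoding that depends on phase
flat = np.ones(6)                        # phase-blind rate code
epfg_mod = encoding_phase_of_firing_gain(modulated)
epfg_flat = encoding_phase_of_firing_gain(flat)
```

A phase-blind code yields an EPFG of zero, while phase-dependent encoding survives the shuffle correction with a positive gain, matching the interpretation of EPFG as a lower bound on phase modulation discussed later in the text.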
This selection revealed stronger EPFG encoding of RPE compared to Outcome ( χ 2 = 11.3, p ~0) and Outcome History ( χ 2 = 11.3, p ~0). It also provided additional confirmation that EPFG was larger for neurons in ACC than in LPFC ( χ 2 = 10.4, p = 0.0013), with a similar trend for STR ( χ 2 = 2.41, p = 0.12). Likewise, EPFG did not vary on the basis of LFP area (Kruskal–Wallis, χ 2 = 0.192, p = 0.91). These results were similar in each monkey (Supplementary Fig. ). EPFG was similar for neurons that showed narrow (N) and broad (B) action potential waveforms that correspond putatively to distinct cell classes, with their encoding phase-of-firing gain statistically indistinguishable in the ACC ( N N = 70, mean = −0.026 ± 0.029; N B = 48, mean = −0.0047 ± 0.06; Kruskal–Wallis test for equal median, p = 0.49), LPFC ( N N = 85, mean = 0.11 ± 0.04; N B = 54, mean = 0.0057 ± 0.080; p = 0.28), and STR ( N N = 37, mean = 0.014 ± 0.08; N B = 41, mean = 0.017 ± 0.11; p = 0.40) (see “Methods”). We next asked whether the EPFG for RPE encoding distinguishes the rewarded color that animals learned within a reversal block. Previously we showed that the firing rate of subsets of neurons encoded not only a scalar RPE signal but also showed stronger RPE signaling for one or the other color in the task . These color-specific RPE signals can boost the reversal learning because they carry information not only about how much updating should take place (which the scalar RPE signals) but also about the specific content of what needs to be updated (one or the other color during reversal learning). We quantified this feature-specific RPE encoding by separately testing whether the EPFG is significant when considering only trials when one or the other color was rewarded.
We found that of all cell-LFP pairs encoding RPEs, 3% (3/102) showed individually significant EPFG in both conditions (in other words, a non-feature-specific RPE), and ~15% (15/102) showed a significant EPFG only for one of two colors (a feature-specific RPE). The frequency of cell-LFP pairs where the EPFG was significant for neither, both, or only one color condition differed from chance ( χ 2 test, χ 2 = 109, p ~0). Importantly, feature-specific EPFG was more common than feature non-specific EPFG ( χ 2 = 6.72, p = 0.01). The proportion of color-specific EPFG tended to be most prevalent in ACC with ~27% (9/34) of cases, compared to 10% (5/50) in LPFC and 6% (1/18) for STR ( χ 2 test, χ 2 = 5.26, p = 0.07).

Robustness of phase-of-firing modulation of encoding

The EPFG is an effect size measure for how strongly the firing rate difference between conditions is modulated by LFP phase. However, it does not take into account the variability of firing rates across trials, leaving open the question of whether such mean firing rate changes may be effectively decoded. To address this question, we performed additional tests at the same beta frequencies at which neurons maximally synchronized. Firstly, we calculated how much the percent explained deviance varied as a function of phase, which quantifies how well the model fit the data with spikes extracted on individual phase bins. We term this quantity EPFG D2 (see “Methods”). We found that across areas and all spike-LFP pairs with significant encoding, EPFG D2 was significantly larger than chance (Wilcoxon signrank test, p ~0). EPFG D2 was significantly above chance for Outcome (Wilcoxon signrank test, p ~0) and RPE ( p = 0.001) clusters, but not for spike-LFP pairs with neurons from the Outcome History cluster ( p = 0.24) (Supplementary Fig. ). In a second approach, we tested whether the EPFG is evident even when the statistical testing preserves the within-trial correlation of spike phases.
So far, we tested for significance of EPFG by constructing a random distribution that shuffled all spike phases irrespective of the trials in which they occurred. While this preserves the overall degree of synchrony, it destroys any within-trial correlation of spikes. When constructing null distributions by randomly perturbing the phase of spikes on each trial by the same amount, we found an overall significant EPFG of 0.080 ± 0.018 (Wilcoxon signrank test, Z = 7.6, p ~0). As in the other statistics, EPFG was significant for Outcome ( p ~0), Outcome History ( p = 0.002), and RPE ( p = 0.039) (Supplementary Fig. ). Similarly, phase-of-firing modulation significantly differed by spike area (Kruskal–Wallis test, p = 0.032), with spike-LFP pairs with spikes of neurons in ACC showing higher EPFG than LPFC ( p = 0.012) and a trend for higher EPFG in ACC than STR ( p = 0.069) (Supplementary Fig. ). Thus, the observed phase gain for the firing rate information is evident even when within-trial autocorrelation is preserved. In a third approach of analyzing the robustness of the EPFG finding, we considered an alternative normalization of our main encoding metric. The EPFG is a normalized quantity that accounts for the fact that simply fitting a cosine will result in positive amplitudes, implying that a cosine amplitude on its own has an upwards bias. A similar bias is evident in the null distribution of cosine amplitudes. As a consequence, the EPFG should be considered a lower bound on the degree of modulation. This is evident when normalizing the cosine modulation not by the null distribution of the cosine, but by the encoding strength determined using all spikes. With such a normalization, encoding strength is ~0.61 ± 0.03, implying encoding is ~61% stronger on preferred vs anti-preferred phases.
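The trial-preserving null described above (rotate all spikes of a trial by one shared random offset, rather than shuffling spikes freely) can be sketched as follows. The synthetic phase-locked trials are illustrative assumptions; the point of the demo is that the rotation leaves within-trial phase concentration untouched while destroying the across-trial preferred phase.

```python
import numpy as np

def trial_rotation_null(trial_phases, rng):
    """Null preserving within-trial phase structure: every spike in a trial is
    rotated by one shared random offset drawn uniformly from (-pi, pi)."""
    return [np.angle(np.exp(1j * (p + rng.uniform(-np.pi, np.pi))))
            for p in trial_phases]

def resultant(phases):
    """Resultant length of all spike phases pooled across trials."""
    return float(np.abs(np.exp(1j * np.concatenate(phases)).mean()))

rng = np.random.default_rng(6)
trials = [rng.vonmises(0.5, 8.0, 20) for _ in range(200)]  # phase-locked spikes
null_trials = trial_rotation_null(trials, rng)

within = np.mean([np.abs(np.exp(1j * t).mean()) for t in trials])
within_null = np.mean([np.abs(np.exp(1j * t).mean()) for t in null_trials])
overall, overall_null = resultant(trials), resultant(null_trials)
```

Because each trial's rotation is rigid, the per-trial resultant lengths are numerically unchanged in the null, while the pooled resultant collapses toward zero, which is exactly the property that makes this a stricter control than free spike shuffling.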
Similarly, normalizing the cosine modulation by the over-all firing rate of the cell, we obtained a median EPFG of 0.18 ± 0.010, implying that encoding is on average ~18% stronger on preferred rather than anti-preferred phases. In a final set of analyses, we considered the stability of encoding. Encoding designation was stable across phase bins, with ~90% of spike-LFP pairs exhibiting similar beta coefficient signs across all phase bins, and was not dependent on the number of phase bins used (no correlation of EPFG with the number of bins [4, 6, 8, or 10]; Spearman rank correlation, R = 0.023, p = 0.18).

Relation of phase-of-firing encoding modulation to the strength and phase of synchronization

We next tested whether EPFG was specific to the beta frequency band and how the strength of EPFG related to the strength of synchronization. First, we found that EPFG was strongest and significant at the population level in the same beta frequency band that showed the strongest spike-LFP synchronization (Fig. ; Wilcoxon signrank test, p < 0.05, multiple comparison corrected). Overall, EPFG was most prevalent and significantly larger in spike-LFP pairs that showed significant phase synchronization (Fig. ; Kruskal–Wallis test, χ 2 = 31.2, p ~0). These results indicate that EPFG was evident when neurons encoded Outcome, RPE, and Outcome History in their firing rate and when they synchronized at beta-band frequencies. Second, we tested whether the phase-of-firing modulation of encoding is associated with stronger spike-LFP synchronization in one task condition than in another condition (e.g., in error trials versus correct trials). Such site-specific selectivity of neuronal synchronization has been reported in previous studies (e.g., , , ). To test this possibility, we correlated the phase-of-firing encoding with the difference in spike-LFP synchronization (indexed with the PPC) of those two trial conditions that were predicted to have the maximal firing rate difference.
For Outcome encoding we calculated the PPC difference for correct versus error trials; for RPE encoding we compared correct trials following error trials versus error trials following correct trials; and for Outcome History encoding we compared correct trials following correct trials versus error trials following error trials. We then correlated the absolute difference in PPC in the beta band between the two conditions with the EPFG. We found that the EPFG was uncorrelated with the PPC differences between conditions for neurons encoding RPE (Spearman correlation, R = 0.083, p = 0.36) or Outcome History ( R = 0.074, p = 0.41). For Outcome encoding cells we found a modest positive correlation, with higher EPFG associated with larger differences in spike-LFP synchronization for correct versus error trials ( R = 0.11, p = 0.0067). In addition to the strength of synchronization, phase-of-firing modulation of encoding might also become evident as a difference of the preferred phase of synchronization between conditions. To test this possibility, we compared the average phase between conditions for each encoding cluster. We found that for neurons in the Outcome encoding cluster, the mean firing phase in correct and error trials did not differ (mean phase difference = −0.026 ± 0.0021 SE radians, bootstrap randomization test, p = 0.57). On the other hand, Outcome History cells significantly synchronized on average at different phases between conditions (−0.20 ± 0.0059 SE radians, p = 0.011), with a similar trend for RPE cells (−0.22 ± 0.014 SE radians, p = 0.059). We next asked whether the synchronizing phases that carried information were endogenously generated or whether they were externally triggered by the reward onset. We calculated the EPFG with and without subtracting the reward-onset aligned evoked LFP response (see “Methods”).
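The rank correlations used above (between condition-wise PPC differences and EPFG) can be computed from first principles. This is a minimal, illustrative Spearman implementation; statistics packages provide the same with tie corrections and p-values.

```python
def rankdata(x):
    # Average ranks (1-based), handling ties by rank averaging.
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho: Pearson correlation of the rank-transformed data.
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Toy check: any monotone relation gives rho = 1 regardless of scale.
rho = spearman([0.1, 0.4, 0.2, 0.9], [1, 30, 5, 100])
rho2 = spearman([1, 2, 3], [3, 2, 1])
```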
We found that the EPFG was not different with (median = 0.096 ± 0.012 SE) versus without (median = 0.10 ± 0.019 SE) subtraction of the time-locked, evoked potential, suggesting that the beta oscillation events providing informative phases were endogenously generated (Kruskal–Wallis test, χ 2 = 0.03, p = 0.86). In line with this, we found that band-limited power in the beta band was a prominent and sustained component of the LFP after reward onset (Supplementary Fig. ) but without a reward-onset locked phase consistency (Supplementary Fig. ). We also tested whether LFP power variations or overall firing rate fluctuations influenced the phase-of-firing modulation of encoding. We found that overall the EPFG did not correlate with beta band power variations (Spearman rank correlation, R = 0.050, p = 0.14), but positively correlated with the overall firing rates of neurons (Spearman rank correlation, R = 0.13, p ~0). In addition to overall variations of power and firing rates, recent studies have shown that beta-band activity emerges in individual trials as transient bursts that can be linked to behavioral success in working memory and perceptual recognition paradigms – . To test whether such burst occurrences may underlie the significant EPFG we report so far, we restricted the analysis of the EPFG to those beta band periods that were part of a suprathreshold, oscillatory burst event (see “Methods”). This analysis was performed for spike-LFP pairs when neurons fired sufficient numbers of spikes (here: ≥30 spikes) per condition. The beta burst rate sharply increased after reward onset, as compared to a pre-reward onset period (see Supplementary Fig. ). We found that for spikes occurring within bursts, the median EPFG was 0.067 ± 0.034, which was significantly above chance (Supplementary Fig. ; n = 191; Wilcoxon signrank test, Z = 2.40, p = 0.016). EPFG for spikes outside bursts was 0.038 ± 0.017, which was also above chance (Supplementary Fig. 
; n = 769; Z = 4.51, p ~0). Although encoding was higher inside rather than outside of bursts, this difference was not significant (Kruskal–Wallis test, χ 2 = 0.057, p = 0.81).

Preferred spiking-phase and encoding spiking-phase differ for prediction error

The previous result suggests that the encoding gain through the phase-of-firing is only weakly or not systematically associated with the strength of spike-LFP phase synchronization. This finding is consistent with a scenario in which the spiking-phase at which neurons maximally synchronize does not always coincide with the spiking-phase at which encoding of task variables is maximal. Indeed, we often observed that the phase with maximal encoding was not at the zero-phase bin, i.e., it deviated from the preferred spike-phase (see examples in Fig. ; Supplementary Fig. ). We tested this scenario by first calculating the preferred spike-phase for each neuron, and then quantifying the phase with maximal encoding relative to that phase. We found that all encoding neurons synchronized on average at similar phases, above what would be expected by chance (Outcome, average phase: −0.28 ± 0.0034 SE radians; Hodges–Ajne test for non-uniformity, p ~0; RPE, average phase: 0.35 ± 0.0034 SE radians, p = 0.00084; Outcome History, average phase: −0.68 ± 0.0045 SE radians, p = 0.0013) (Fig. ). The preferred spike-phase differed between the three encoding classes (Watson–Williams test, p ~0, F = 12.8; each pairwise comparison: Watson–Williams test, F > 7.7, p < 0.02; Fig. ). Next, we quantified for each cluster whether the phases showing maximal encoding were consistent across spike-LFP pairs, because the phase heterogeneity can be informative about possible readout strategies , (Fig. ). To this end, we extracted the phase offset from our cosine fit, which represents the phase at which encoding was maximal relative to the preferred spike-phase.
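The preferred spike-phase and the average encoding phases above are circular quantities. A minimal, library-free sketch of the underlying resultant-vector arithmetic (illustrative only, not the authors' pipeline, which additionally uses circular tests such as Hodges–Ajne and Watson–Williams):

```python
import cmath

def circular_mean(phases):
    # Mean direction of a set of angles: the angle of the summed unit vectors.
    z = sum(cmath.exp(1j * p) for p in phases)
    return cmath.phase(z)

def resultant_length(phases):
    # R in [0, 1]: 1 if all spikes fall on one phase, near 0 if uniform.
    z = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(z)

phases = [0.2, 0.3, 0.25, 0.35]   # tightly clustered toy spike phases
mu = circular_mean(phases)        # the "preferred spike-phase" of this toy cell
R = resultant_length(phases)
```

Differences between encoding phase and preferred phase are then taken on the circle (modulo 2π), which is why offsets such as −2.76 radians translate into a fixed time lag within the ~15 Hz cycle.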
Outcome encoding neurons showed preferred encoding phases that varied across the whole oscillation cycle (average phase: −0.92 ± 0.34 SE radians; Hodges–Ajne test, p = 0.38), as did Outcome History neurons (average phase: 1.19 ± 0.49 SE radians, p = 0.66) (Fig. ). In contrast, RPE encoding neurons significantly encoded at similar phase-offsets relative to the neuron’s synchronizing phases (average phase: −2.76 ± 0.047 SE radians, p = 0.0004, corresponding to 27 ms away from the mean spike phase at a 15 Hz oscillation cycle), which was significantly different from the mean spike phase (Median test, p = 0.027). This effect was particularly pronounced for RPE cells in ACC (Supplementary Fig. ), and was consistent across both monkeys (Supplementary Fig. ). Qualitatively similar results were obtained when extracting the preferred encoding phases derived from model deviances (Supplementary Fig. ). The phase of maximal encoding did not differ with varying number (4, 6, 8, 10) of phase bins used (Circular–Linear correlation, R = 0.013, p = 0.76). We next compared the relative phases showing maximal encoding between neurons encoding Outcome, RPE, and Outcome History and found that their average, relative encoding phases significantly differed (Watson–Williams test; F = 83.4, p ~0; Fig. ). These results show that the preferred spike phase and the encoding phases are typically dissociated from one another, and, for RPEs, were systematically offset from the preferred (mean) spike phase. Given these results, we next tested whether the dissociation of spike and encoding phases could be explained by systematic phase shifts due to differences in the peak oscillation frequencies within the beta band. We validated that this was not the case and found that the three sets of neuronal encoding clusters synchronized on average at the same ~15 Hz center frequency (Kruskal–Wallis, χ 2 = 0.95, p = 0.62; Supplementary Fig.
), and that they showed maximal phase-of-firing encoding at similar frequencies (also ~15 Hz) (Kruskal–Wallis test, χ 2 = 0.39, p = 0.82; Supplementary Fig. ). Moreover, the frequency showing strongest spike-LFP synchronization and the frequency showing maximal encoding-phase gain matched closely (median frequency ratio: 1 ± 0.01 SE; Supplementary Fig. ). This match of synchronization and encoding frequency did not depend on the functional designation (Kruskal–Wallis test, χ 2 = 0.047, p = 0.98), nor on the area from which the spikes were sampled (Kruskal–Wallis test, χ 2 = 0.53, p = 0.77).

Animals performed a feature-based reversal-learning task . Subjects were shown two stimuli with opposite colors and had to learn which of them led to reward (Fig. ). The same color remained associated with reward for at least 30 trials before an uncued reversal switched the color-reward association (Fig. ). During each trial, the subjects monitored the two colored stimuli for transient dimming events. They received a fluid reward when making a saccade in response to the dimming of the stimulus with the reward-associated color, while the dimming of the non-reward-associated color had to be ignored. A correct, rewarded saccade to the dimming of the rewarded stimulus had to be made in the up- or downward direction of motion of that stimulus. This task required covert selective attention to one of two peripheral stimuli based on color, while the overt choice was based on the motion direction of the covertly attended stimulus. Correct responses were those that occurred according to the motion direction of the rewarded target in the correct response window, whereas errors were responses made to the incorrect, non-rewarded target, or in the incorrect response window in response to the distractor .
In 110 and 51 sessions from monkeys HA and KE, respectively, we found that subjects attained plateau performance of on average 80.2% (HA: 78.8%, KE: 83.6%) within 10 trials after color-reward reversal (Fig. ). Using a binomial Generalized Linear Model (GLM) to predict current choice outcomes (outcomes from correct or erroneous choices, excluding fixation breaks), we found that for both subjects, outcomes from up to three trials into the past significantly predicted the current choice’s outcome (Fig. ; Wilcoxon signrank test, p < 0.05, multiple comparison corrected), closely matching previous findings . To test how previous and current outcomes are encoded at the single neuron level, we analyzed a total of 1460 neurons, with 332/227 (monkey HA/KE) neurons in LPFC, 268/182 neurons in ACC, and 221/230 neurons in anterior STR (Fig. , Supplementary Fig. ). These regions have previously been shown to encode outcome, outcome history, and prediction error information , – . We found multiple example neurons encoding different types of outcome variables. Some cells responded differently to correct versus erroneous trial outcomes irrespective of previous outcomes (Fig. ), while others responded most strongly when the current outcome deviated from the previous trials’ outcomes (signaling reward prediction error) (Fig. ), or when the current outcome was similar to the previous trials’ outcomes, i.e., following a sequence of correct trials or a sequence of error trials (Fig. ). We quantified these types of outcome encoding using a LASSO Poisson GLM that predicted the spike counts during the outcome period (0.1–0.7 s after reward onset) and extracted the characteristic patterns of beta weights across the past and current trial outcomes that distinguished different types of outcome encoding (Fig. ). Neurons that encoded mostly the current trial’s outcome showed large weights only for the current trial (Outcome encoding type).
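The binomial GLM that predicts the current choice's outcome from past outcomes is, in its simplest form, a logistic regression. The sketch below is illustrative only: the simulated data, learning rate, and iteration count are invented, and the study's actual fit would include several past-trial regressors and proper inference.

```python
import math
import random

def fit_logistic(X, y, lr=0.5, iters=500):
    # Plain gradient-ascent logistic regression: P(correct) = sigmoid(b0 + X.w).
    n = len(X)
    d = len(X[0])
    w = [0.0] * (d + 1)  # w[0] is the intercept
    for _ in range(iters):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    return w

# Toy data: the current outcome tends to repeat the previous trial's outcome.
rng = random.Random(1)
past, cur = [], []
prev = 1
for _ in range(400):
    nxt = prev if rng.random() < 0.8 else 1 - prev
    past.append([prev])
    cur.append(nxt)
    prev = nxt
w = fit_logistic(past, cur)
# A positive weight on the previous outcome means it predicts the current one.
```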
Neurons encoding a prediction error showed beta weights for previous trials that were opposite in sign to the current trial’s outcome (Reward Prediction Error (RPE) encoding type). In neurons encoding the history of recent rewards, beta weights ramped up over recent trials toward the current trial outcome (Outcome History encoding type) (for examples, see also insets in Fig. ). We used a clustering analysis to test whether the three types of outcome encoding were separable from each other and prevalent in each of the recorded brain areas (Fig. , Supplementary Fig. ). Clustering showed that neurons encoding each of the three variables were statistically separable, with reliable cluster assignments of neurons evident in an average Silhouette measure of cluster separability of 0.81 for LPFC, 0.57 for ACC, and 0.75 for STR (Supplementary Fig. ) . The clustering does not preclude the possibility of a more continuous encoding space, but it statistically justifies focusing analysis on three sets of neurons with well distinguishable encoding patterns (see Supplementary Fig. ). Across the population of neurons with significant encoding, Outcome cells were the most populous (~59%, 234/384 in monkey HA and 185/329 in monkey KE), followed by ~26% of neurons encoding Reward Prediction Errors (64/231 in monkey HA and 39/170 in monkey KE) and ~32% of neurons encoding Outcome History (76/206 in monkey HA and 33/139 in monkey KE) (Fig. ; χ 2 test, χ 2 = 86.02, p ~0). The relative frequency of these encoding types did not differ between areas ( χ 2 test, χ 2 = 3.64, p = 0.46).
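The three beta-weight signatures can be illustrated with a toy pattern matcher. Note this is a hypothetical heuristic with invented thresholds, shown only to make the three signatures concrete; the paper classified neurons by unsupervised clustering of the weight patterns, not by rules like these.

```python
def classify_encoding(betas):
    # `betas`: GLM weights for outcomes on trials [t-3, t-2, t-1, t].
    past, cur = betas[:-1], betas[-1]
    # Outcome type: only the current trial carries appreciable weight.
    if all(abs(w) < 0.2 * abs(cur) for w in past):
        return "Outcome"
    # RPE type: past-trial weights are opposite in sign to the current trial.
    if all(w * cur < 0 for w in past):
        return "RPE"
    # Outcome History type: same-sign weights ramping up toward the current trial.
    if all(w * cur > 0 for w in past) and all(
            abs(past[i]) <= abs(past[i + 1]) for i in range(len(past) - 1)):
        return "Outcome History"
    return "Mixed"

a = classify_encoding([0.01, -0.02, 0.03, 1.0])  # current-trial weight only
b = classify_encoding([-0.3, -0.5, -0.8, 1.0])   # sign flip vs. past
c = classify_encoding([0.1, 0.3, 0.6, 1.0])      # ramp toward current trial
```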
On the other hand, the strength of encoding differed on the basis of area for Outcome cells (Kruskal–Wallis test, χ 2 = 26.6, p ~0), with stronger encoding in ACC than LPFC or STR, as well as for Outcome History encoding cells ( χ 2 test, χ 2 = 19.7, p ~0), with stronger encoding in ACC and LPFC than in STR, whereas the strength of RPE encoding was similar across areas ( χ 2 = 2.49, p = 0.29) (see Supplementary Fig. for all pairwise comparisons). In ACC, LPFC and STR, Outcome, RPE and Outcome History encoding emerged shortly (within 0.3 s) after outcomes were received (see “Methods”; Supplementary Fig. , Wilcoxon signrank test, p « 0.05). Neurons encoding Outcome, RPE, or Outcome History showed similar overall firing rates (Supplementary Fig. ; ANOVA; LPFC, F = 1.32, p = 0.27; ACC, F = 0.58, p = 0.58; STR, F = 1.05, p = 0.35). We found similar proportions and activation time courses of encoding neurons in ACC, LPFC and STR (Supplementary Fig. ), which raised the question of how these neuronal populations are functionally connected. One possibility is that neuronal firing patterns are organized temporally, such that spikes in one area phase-synchronize to neuronal population activity in remote areas. We assessed synchrony as the phase consistency of neuronal spikes with local field potential (LFP) fluctuations in distally recorded areas using the pairwise-phase consistency (PPC) metric, converting the PPC values into an effect size , (Fig. ; see “Methods”). Across all ( n = 7938) spike-LFP pairs, we found a pronounced peak of phase synchronization in the beta band (10–25 Hz), with neurons firing on average ~1.15 times more spikes on their preferred, average phase than at the opposite phase when considering the population average in the beta band (Fig. ), and ~1.39 times more spikes on the preferred phase when selecting for each neuron the beta frequency with peak synchrony (Supplementary Fig. ).
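The pairwise phase consistency can be written directly from its definition: the average cosine of the phase difference over all distinct spike pairs, which is unbiased by spike count. This naive O(n²) sketch is for illustration; analysis toolboxes compute it far more efficiently.

```python
import math

def ppc(phases):
    # Pairwise phase consistency (after Vinck et al.): mean cos(theta_i - theta_j)
    # over all distinct spike pairs.
    n = len(phases)
    s = sum(math.cos(phases[i] - phases[j])
            for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

tight = ppc([0.0, 0.1, -0.1, 0.05])   # strongly locked spikes: PPC near 1
# Evenly spread phases cancel; a perfectly balanced n-point set gives -1/(n-1),
# and a large random uniform sample gives values near 0.
flat = ppc([0.0, math.pi / 2, math.pi, 3 * math.pi / 2])
```

Squaring the resultant vector length gives a biased alternative; PPC's pair-averaging is exactly what removes the spike-count bias, which matters when comparing cells with different firing rates.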
Prominent beta-band synchrony was evident for neurons that encoded outcome variables in their firing rates and for those that did not show encoding (Fig. ), with the peak synchrony being stronger for cell-LFP pairs with non-coding rather than coding cells (unpaired t -test, T = 7.67, p ~0; Supplementary Fig. ). Overall, 53% (4230/7938) of the spike-LFP pairs showed significant phase synchronization within the 10–25 Hz range (Fig. ; Rayleigh test, p < 0.05, see “Methods” for prominence criteria), with similar proportions across all three areas (LPFC, 1506/2961, 50.9%; ACC, 1473/2442, 60.3%; STR, 1292/2524, 51.2%). Consistent with these results, we found that the synchrony effect (the proportion of spikes at preferred over non-preferred phases) was similarly high for spike-LFP pairs with neurons encoding Outcome (1.37 ± 0.007), RPE (1.35 ± 0.013), and Outcome History (1.34 ± 0.011). There was only a trend for phase synchronization to be different between encoding clusters (ANOVA, F = 2.8, p = 0.061), which post-hoc analysis revealed to be driven primarily by differences between Outcome History and Outcome clusters ( p = 0.078, multiple comparison corrected), rather than Outcome History and RPE ( p = 0.80) or RPE and Outcome clusters ( p = 0.37). Next, we tested whether spike-LFP synchronization showed area-specificity for neurons that significantly encoded Outcome, RPE or Outcome History in their firing rates. For each spike-LFP pair, we selected the beta band frequency with the most prominent PPC value. Within-area beta synchrony differed on the basis of area (ANOVA, F = 32.6, p ~0), with the strongest synchrony within ACC, compared to LPFC (multiple comparison corrected, p = 0.014) or STR ( p ~0) (Fig. ). Synchrony also differed when assessing spikes and LFPs from different areas (ANOVA, F = 12.7, p ~0), with neurons in ACC showing stronger between-area spike-LFP synchrony, as compared to LPFC ( p ~0) and STR ( p = 0.042).
We found a trend for stronger between-area synchrony with spikes originating in STR, as compared to LPFC ( p = 0.058) (Fig. ). Testing for the reciprocity of beta-band phase synchrony showed that ACC spikes phase-synchronized more strongly to LFP beta activity in the LPFC than vice versa ( p = 0.047) (Fig. ). LPFC and STR pairs showed statistically indistinguishable spike-phase synchrony strength ( p = 0.92), as did ACC and STR pairs ( p = 0.26). The findings were similar when inter-areal synchrony was analyzed separately at each frequency (Supplementary Fig. ). For both monkeys, neurons in ACC showed the strongest spike synchronization compared to neurons from LPFC and STR (the area difference is significant in monkey HA, and trends the same way in monkey KE; see Supplementary Fig. ). Moreover, across all three areas, the strength of 15–25 Hz phase synchronization was statistically indistinguishable in the [−1, 0] s pre-outcome period compared to the [0.1, 1] s post-outcome period (paired t -test, abs( T ) < 1.57, p > 0.12; Supplementary Fig. ). The baseline period ([−1, 0] s before stimulus onset) and the post-outcome period showed similar PPC values in ACC and STR (abs( T ) < 0.49, p > 0.6), while LPFC showed stronger phase synchronization in the post-outcome period ( T = 2.82, p = 0.0049; Supplementary Fig. ). Neurons that synchronize to the LFP emit more spikes at their mean spike-LFP phase, which we denote as the neuron’s preferred spike phase . This preferred spike-phase might thus be important for encoding information shared among areas of the network , . We tested this hypothesis by quantifying how much Outcome, RPE, and Outcome History information is available to neurons at different phase bins relative to their mean phase. If the phase-of-firing conveys information, then differences in spike counts between conditions should vary across phases, as opposed to a pure firing rate code that predicts equal information for spike counts across phase bins (Fig. ) , .
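The prediction can be made concrete with a toy rate model in which both conditions share the same phase-locked carrier, but the condition difference is confined to phases near the preferred phase (taken as phase 0). All numbers here are invented for illustration; a pure rate code would instead make the difference constant across phases.

```python
import math

def rate(phase, condition):
    # Toy firing-rate model: shared phase-locked modulation plus a
    # condition difference restricted to phases near 0 (the preferred phase).
    base = 4 + 2 * math.cos(phase)            # carrier common to both conditions
    gain = 1.5 if condition == "error" else 1.0
    window = max(0.0, math.cos(phase))        # nonzero only near phase 0
    return base + (gain - 1.0) * 4 * window

# Six phase-bin centers spanning one cycle.
centers = [-math.pi + 2 * math.pi * k / 6 + math.pi / 6 for k in range(6)]
diff = [rate(t, "error") - rate(t, "correct") for t in centers]
# The condition difference peaks near the preferred phase and vanishes opposite.
```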
Figure shows an example neuron exhibiting such phase-of-firing coding (with spikes from ACC and LFP beta phases from STR). This neuron exhibited increased firing on error trials compared to correct trials, but only when considering spikes near its preferred spike phase, with firing at the opposite phase showing no difference. To quantify this increase of coding when considering the phase-of-firing code for all three information types, we selected for each neuron the frequency within 10–25 Hz that showed maximal spike-LFP synchrony, subtracted the mean (preferred) spike phase from all phases (i.e., setting the preferred phase to zero, to allow for comparison between neurons), and binned spikes on the basis of the LFP beta phases. To prevent an influence of overall firing rate changes between phase bins, we adjusted the width of each of the six phase bins to have equal spike counts across bins (see “Methods”). We then fitted a GLM to the firing rates of each phase bin separately to quantify the Outcome, RPE, and Outcome History encoding for each phase bin and compared this phase-specific encoding to a null distribution obtained by randomly shuffling the spike phases prior to binning. Figure illustrates example neurons for which the encoding systematically varied as a function of phase (for more examples, see Supplementary Fig. ). The example spike-LFP pair from Fig. encoded the trial Outcome significantly more strongly than a phase-blind rate code for spikes within ~[-π/2, π/2] radians relative to its preferred spike phase and more weakly than a phase-blind rate code at opposite phases (Fig. , left); enhanced phase-of-firing encoding was similarly evident for RPE and Outcome History as the independent variable (Fig. , middle and right panels).
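The equal-spike-count binning amounts to quantile binning of phases after centering on the preferred phase. A minimal sketch with an invented bin count and toy phases (the actual pipeline operates on per-trial spike-phase estimates from the band-limited LFP):

```python
import math

def equal_count_bins(phases, preferred, nbins=6):
    # Center each phase on the preferred (mean) spike phase, wrap to [-pi, pi),
    # then place bin edges at quantiles so every bin holds ~equal spike counts.
    centered = sorted(((p - preferred + math.pi) % (2 * math.pi)) - math.pi
                      for p in phases)
    n = len(centered)
    edges = [centered[(k * n) // nbins] for k in range(nbins)] + [centered[-1]]
    counts = [0] * nbins
    for p in centered:
        # Assign each spike to the last bin whose lower edge it reaches.
        b = max(k for k in range(nbins) if p >= edges[k])
        counts[b] += 1
    return edges, counts

phases = [k * 0.1 for k in range(60)]            # 60 toy spike phases
edges, counts = equal_count_bins(phases, preferred=3.0)
# Bins end up with equal spike counts despite unequal angular widths.
```

Equalizing counts per bin removes the trivial confound that more spikes fall near the preferred phase, so any remaining encoding difference across bins cannot be a spike-count artifact.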
We estimated the strength of this phase modulation of rate encoding for each spike-LFP pair as the amplitude of a cosine that was fit to the phase-binned encoding metric, normalized by the mean encoding across phase bins, which we term the Phase-of-Firing Gain (PFG). We further accounted for the positive bias in cosine amplitude estimation by normalizing this quantity by the cosine amplitude obtained from fitting the phase-binned metric after randomly shuffling spike phases. We refer to this difference between the observed and the randomly shuffled phase modulation of encoding as the Encoding Phase-of-Firing Gain (EPFG; see “Methods”). This metric reflects an unbiased ratio of firing rate differences between preferred and anti-preferred encoding-phase bins. Of the 876 spike-LFP pairs that significantly synchronized in the 10–25 Hz band and encoded information in their firing rate, we found that 139 (16%) spike-LFP pairs showed significant phase modulation, i.e., these pairs encoded significantly more information when taking into account the phase of firing than their average, phase-blind firing rate (randomization test, p < 0.05). A significant EPFG was evident for neurons whose firing encoded Outcome (Wilcoxon signrank test, Z = 8.41, p ~0), RPE ( Z = 3.27, p = 0.011), and Outcome History ( Z = 3.24, p = 0.012). The EPFG did not differ between these three functional clusters (Kruskal–Wallis test, χ 2 = 0.283, p = 0.87) (Fig. ). Similarly, EPFG was evident for spike-LFP pairs when the spiking neuron was in ACC ( Z = 7.5, p ~0), in STR ( Z = 4.98, p ~0), or in LPFC ( Z = 3.86, p = 0.0001) (Fig. ), but the EPFG strength differed between areas (Kruskal–Wallis test, χ 2 = 7.87, p = 0.02). Neurons in ACC showed significantly larger EPFG than neurons in LPFC ( χ 2 = 7.66, p = 0.0056) and a trend for larger EPFG than neurons in STR ( χ 2 = 2.89, p = 0.089).
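A compact sketch of the cosine-fit gain and its shuffle-corrected version. One simplification to flag: this toy null shuffles the binned values, whereas the analysis described here shuffles spike phases before binning; the bias-correction logic (observed gain minus the mean null gain) is the same.

```python
import math
import random

def cosine_gain(centers, values):
    # Amplitude of the best-fitting cosine over phase bins, divided by the
    # mean encoding across bins (the phase-of-firing gain, PFG).
    n = len(values)
    a = 2.0 / n * sum(v * math.cos(t) for v, t in zip(values, centers))
    b = 2.0 / n * sum(v * math.sin(t) for v, t in zip(values, centers))
    return math.hypot(a, b) / (sum(values) / n)

def epfg(centers, values, n_null=200, seed=0):
    # Observed gain minus the mean gain under a shuffle null, correcting
    # the positive bias of fitted cosine amplitudes.
    rng = random.Random(seed)
    obs = cosine_gain(centers, values)
    vals = list(values)
    nulls = []
    for _ in range(n_null):
        rng.shuffle(vals)
        nulls.append(cosine_gain(centers, vals))
    return obs - sum(nulls) / n_null

centers = [2 * math.pi * k / 6 for k in range(6)]
values = [2 + math.cos(t) for t in centers]  # genuinely phase-modulated encoding
gain = epfg(centers, values)
```

Because every null amplitude is non-negative, the corrected gain is strictly below the raw gain, which is why the text treats EPFG as a lower bound on the true modulation.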
Similarly, spike-LFP pairs with spikes from an ACC neuron were more likely to show individually significant EPFG ( χ 2 test, χ 2 = 17.7, p = 0.0014; Supplementary Fig. ). When considering encoding strength on the basis of the LFP site of the spike-LFP pairs, EPFG was above chance in each of the three areas (Fig. ; ACC, Z = 5.02, p ~0; LPFC, Z = 5.62, p ~0; and STR, Z = 5.8, p ~0), but did not vary by the LFP area (Kruskal–Wallis test, χ 2 = 0.192, p = 0.91). EPFG differences were more pronounced when selecting for each encoding metric the 25% of spike-LFP pairs with the largest EPFG. This selection revealed stronger EPFG encoding of RPE compared to Outcome ( χ 2 = 11.3, p ~0) and Outcome History ( χ 2 = 11.3, p ~0). It also provided additional confirmation that EPFG was larger for neurons in ACC than in LPFC ( χ 2 = 10.4, p = 0.0013), with a similar trend for STR ( χ 2 = 2.41, p = 0.12). Likewise, EPFG did not vary on the basis of LFP area (Kruskal–Wallis, χ 2 = 0.192, p = 0.91). These results were similar in each monkey (Supplementary Fig. ). EPFG was similar for neurons that showed narrow (N) and broad (B) action potential waveforms that putatively correspond to distinct cell classes, with their encoding phase-of-firing gain statistically indistinguishable in the ACC ( N N = 70, mean = −0.026 ± 0.029; N B = 48, mean = −0.0047 ± 0.06; Kruskal–Wallis test for equal median, p = 0.49), LPFC ( N N = 85, mean = 0.11 ± 0.04; N B = 54, mean = 0.0057 ± 0.080; p = 0.28), and STR ( N N = 37, mean = 0.014 ± 0.08; N B = 41, mean = 0.017 ± 0.11; p = 0.40) (see “Methods”). We next asked whether the EPFG for RPE encoding distinguishes the rewarded color that animals learned within a reversal block. Previously we showed that the firing rate of subsets of neurons encoded not only a scalar RPE signal but additionally showed stronger RPE signaling for only one or the other color in the task .
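Several of the proportion comparisons above rest on Pearson's χ2 statistic for observed versus expected counts. A minimal sketch with invented counts (the actual tests use the paper's contingency tables and degrees of freedom):

```python
def chi2_stat(observed, expected):
    # Pearson's chi-squared statistic: sum of (O - E)^2 / E over cells.
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# E.g., are individually significant spike-LFP pairs spread evenly
# across three areas? (Counts below are hypothetical.)
obs = [45, 25, 20]
exp = [30.0, 30.0, 30.0]   # uniform expectation for 90 pairs
stat = chi2_stat(obs, exp)
# Compare `stat` to the chi-squared distribution with k-1 = 2 df for a p-value.
```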
These color-specific RPE signals can boost reversal learning because they carry information not only about how much updating should take place (which scalar RPEs signal) but also the specific content of what needs to be updated (one or the other color during reversal learning). We quantified this feature-specific RPE encoding by separately testing whether the EPFG is significant when considering only trials in which one or the other color was rewarded. We found that of all cell-LFP pairs encoding RPEs, 3% (3/102) showed individually significant EPFG in both conditions (in other words, a non-feature-specific RPE), and ~15% (15/102) showed a significant EPFG only for one of two colors (a feature-specific RPE). The frequency of cell-LFP pairs where the EPFG was significant for neither, both, or only one color condition differed from chance ( χ 2 test, χ 2 = 109, p ~0). Importantly, feature-specific EPFG was more common than feature non-specific EPFG ( χ 2 = 6.72, p = 0.01). Color-specific EPFG tended to be most prevalent in ACC, with ~27% (9/34) of cases, compared to 10% (5/50) in LPFC and 6% (1/18) for STR ( χ 2 test, χ 2 = 5.26, p = 0.07). The EPFG is an effect size measure of how strongly the firing rate difference between conditions is modulated by LFP phase. However, it does not take into account the variability of firing rates across trials, leaving open the question of whether such mean firing rate changes may be effectively decoded. To address this question, we performed additional tests at the same beta frequencies at which neurons maximally synchronized. First, we calculated how much the percent explained deviance varied as a function of phase, which quantifies how well the model fit the data with spikes extracted from individual phase bins. We term this quantity EPFG D2 (see “Methods”). We found that across areas and all spike-LFP pairs with significant encoding, EPFG D2 was significantly larger than chance (Wilcoxon signrank test, p ~0).
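EPFG D2 builds on the percent explained deviance of a Poisson GLM. A minimal sketch of the deviance arithmetic with toy spike counts, where condition-mean predictions stand in for a fitted model; the full analysis would compute this per phase bin and compare it to a shuffle null:

```python
import math

def poisson_deviance(y, mu):
    # Poisson deviance: 2 * sum(y*log(y/mu) - (y - mu)), with the convention
    # that the y*log(y/mu) term is 0 when y = 0.
    d = 0.0
    for yi, mi in zip(y, mu):
        if yi > 0:
            d += yi * math.log(yi / mi) - (yi - mi)
        else:
            d += mi
    return 2 * d

def explained_deviance(y, mu_model):
    # D2: fraction of the null (intercept-only) deviance explained by the model.
    mu_null = [sum(y) / len(y)] * len(y)
    return 1 - poisson_deviance(y, mu_model) / poisson_deviance(y, mu_null)

# Toy spike counts: higher on correct (first half) than error (second half) trials.
y = [6, 5, 7, 6, 2, 1, 2, 3]
mu = [6.0] * 4 + [2.0] * 4    # per-condition mean predictions
d2 = explained_deviance(y, mu)
```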
EPFG D2 was significantly above chance for Outcome (Wilcoxon signrank test, p ~0) and RPE ( p = 0.001) clusters, but not for spike-LFP pairs with neurons from the Outcome History cluster ( p = 0.24) (Supplementary Fig. ). In a second approach, we tested whether the EPFG is evident even when the statistical testing preserves the within-trial correlation of spike phases. So far, we tested for significance of EPFG by constructing a random distribution that shuffled all spike phases irrespective of the trials in which they occurred. While this preserves the overall degree of synchrony, it destroys any within-trial correlation of spikes. When constructing null distributions by randomly perturbing the phase of spikes on each trial by the same amount, we found an overall significant EPFG of 0.080 ± 0.018 (Wilcoxon signrank test, Z = 7.6, p ~0). As in the other statistics, EPFG was significant for Outcome ( p ~0) , Outcome History ( p = 0.002), and RPE (0.039) (Supplementary Fig. ). Similarly, phase-of-firing modulation significantly differed by spike area (Kruskal–Wallis test, p = 0.032), with spike-LFP pairs with spikes of neurons in ACC showing higher EPFG than LPFC ( p = 0.012) and a trend for higher EPFG in ACC than STR ( p = 0.069) (Supplementary Fig. ). Thus, the observed phase gain for the firing rate information is evident even when within-trial autocorrelation is preserved. In a third approach of analyzing the robustness of the EPFG finding, we considered an alternative normalization of our main encoding metric. The EPFG is a normalized quantity that accounts for the fact that simply fitting a cosine will result in positive amplitudes, implying that a cosine amplitude on its own has an upwards bias. A similar bias is evident in the null distribution of cosine amplitudes. As a consequence, the EPFG should be a considered a lower bound on the degree of modulation. 
This is evident when normalizing the cosine modulation not by the null distribution of the cosine, but by the encoding strength determined using all spikes. With such a normalization, encoding strength is ~0.61 ± 0.03, implying encoding is ~61% stronger on preferred vs anti-preferred phases. Similarly, normalizing the cosine modulation by the over-all firing rate of the cell, we obtained a median EPFG of 0.18 ± 0.010, implying that encoding is on average ~18% stronger on preferred rather than anti-preferred phases. In a final set of analyses, we considered the stability of encoding. Encoding designation was stable across phase bins, with ~90% of spike-LFP pairs exhibiting similar beta coefficient signs across all phase bins, and was not dependent on the number of phase bins used (no correlation of EPFG with [4, 6, 8, and 10] number of bins (Spearman rank correlation, R = 0.023, p = 0.18). We next tested whether EPFG was specific to the beta frequency band and how the strength of EPFG related to the strength of synchronization. First, we found that EPFG was strongest and significant at the population level in the same beta frequency band that showed the strongest spike-LFP synchronization (Fig. ; Wilcoxon signrank test, p < 0.05, multiple comparison corrected). Overall, EPFG was most prevalent and significantly larger in spike-LFP pairs that showed significant phase synchronization (Fig. ; Kruskal–Wallis test, χ 2 = 31.2, p ~0). These results indicate that EPFG was evident when neurons encoded Outcome, RPE, and Outcome History in their firing rate and when they synchronized at beta-band frequencies. Second, we tested whether the phase-of-firing modulation of encoding is associated with stronger spike-LFP synchronization in one task condition than in another condition (e.g., in error trials versus correct trials). Such site-specific selectivity of neuronal synchronization has been reported in previous studies (e.g., , , .). 
To test this possibility, we correlated the phase-of-firing encoding with the difference in spike-LFP synchronization (indexed with the PPC) of those two trial conditions that were predicted to have the maximal firing rate difference. For Outcome encoding we calculated the PPC difference for correct versus error trials; for RPE encoding we compared correct trials following error trials versus error trials following correct trials; and for Outcome History encoding we compared correct trials following correct, versus errors following errors. We then correlated the absolute difference in PPC in the beta band between two conditions with the EPFG. We found that the EPFG was uncorrelated with the PPC differences between conditions for neurons encoding RPE (Spearman correlation, R = 0.083, p = 0.36), or Outcome History ( R = 0.074, p = 0.41). For Outcome encoding cells we found a moderate positive correlation with higher EPFG associated with larger differences in spike-LFP synchronization for correct versus error trials ( R = 0.11, p = 0.0067). In addition to the strength of synchronization, phase-of-firing modulation of encoding might also become evident as a difference of the preferred phase of synchronization between conditions. To test this possibility, we compared the average phase between conditions for each encoding cluster. We found that for neurons in the Outcome encoding cluster, the mean firing phase in correct and error trials did not differ (mean phase difference = −0.026 ± 0.0021 SE radians, bootstrap randomization test, p = 0.57). On the other hand, Outcome History cells significantly synchronized on average at different phases between conditions (−0.20 ±  0.0059 SE radians, p = 0.011), with a similar trend for RPE cells (−0.22 ± 0.014 SE radians, p = 0.059). We next asked whether the synchronizing phases that carried information were endogenously generated or whether they were externally triggered by the reward onset. 
We calculated the EPFG with and without subtracting the reward-onset aligned evoked LFP response (see “Methods”). We found that the EPFG did not differ with (median = 0.096 ± 0.012 SE) versus without (median = 0.10 ± 0.019 SE) subtraction of the time-locked evoked potential, suggesting that the beta oscillation events providing informative phases were endogenously generated (Kruskal–Wallis test, χ2 = 0.03, p = 0.86). In line with this, we found that band-limited power in the beta band was a prominent and sustained component of the LFP after reward onset (Supplementary Fig. ) but without a reward-onset locked phase consistency (Supplementary Fig. ). We also tested whether LFP power variations or overall firing rate fluctuations influenced the phase-of-firing modulation of encoding. We found that the EPFG did not correlate with beta-band power variations (Spearman rank correlation, R = 0.050, p = 0.14), but positively correlated with the overall firing rates of neurons (Spearman rank correlation, R = 0.13, p ~0). In addition to overall variations of power and firing rates, recent studies have shown that beta-band activity emerges in individual trials as transient bursts that can be linked to behavioral success in working memory and perceptual recognition paradigms. To test whether such burst occurrences may underlie the significant EPFG reported so far, we restricted the analysis of the EPFG to those beta-band periods that were part of a suprathreshold, oscillatory burst event (see “Methods”). This analysis was performed for spike-LFP pairs in which neurons fired sufficient numbers of spikes (here: ≥30) per condition. The beta burst rate sharply increased after reward onset compared to a pre-reward onset period (see Supplementary Fig. ). We found that for spikes occurring within bursts, the median EPFG was 0.067 ± 0.034, which was significantly above chance (Supplementary Fig. ; n = 191; Wilcoxon signrank test, Z = 2.40, p = 0.016).
EPFG for spikes outside bursts was 0.038 ± 0.017, which was also above chance (Supplementary Fig. ; n = 769; Z = 4.51, p ~0). Although encoding was higher inside than outside of bursts, this difference was not significant (Kruskal–Wallis test, χ2 = 0.057, p = 0.81). The previous result suggests that the encoding gain through the phase-of-firing is only weakly or not systematically associated with the strength of spike-LFP phase synchronization. This finding is consistent with a scenario in which the spiking phase at which neurons maximally synchronize does not always coincide with the spiking phase at which encoding of task variables is maximal. Indeed, we often observed that the phase with maximal encoding was not at the zero-phase bin, i.e., it deviated from the preferred spike phase (see examples in Fig. ; Supplementary Fig. ). We tested this scenario by first calculating the preferred spike phase for each neuron, and then quantifying the phase with maximal encoding relative to that phase. We found that all encoding neurons synchronized on average at similar phases, above what would be expected by chance (Outcome, average phase: −0.28 ± 0.0034 SE radians; Hodges–Ajne test for non-uniformity, p ~0; RPE, average phase: 0.35 ± 0.0034 SE radians, p = 0.00084; Outcome History, average phase: −0.68 ± 0.0045 SE radians, p = 0.0013) (Fig. ). The preferred spike phase differed between the three encoding classes (Watson–Williams test, p ~0, F = 12.8; each pairwise comparison: F > 7.7, p < 0.02; Fig. ). Next, we quantified for each cluster whether the phases showing maximal encoding were consistent across spike-LFP pairs, because phase heterogeneity can be informative about possible readout strategies (Fig. ). To this end, we extracted the phase offset from our cosine fit, which represents the phase at which encoding was maximal relative to the preferred spike phase.
Outcome encoding neurons showed preferred encoding phases that varied across the whole oscillation cycle (average phase: −0.92 ± 0.34 SE radians; Hodges–Ajne test, p = 0.38), as did Outcome History neurons (average phase: 1.19 ± 0.49 SE radians, p = 0.66) (Fig. ). In contrast, RPE encoding neurons encoded at consistent phase offsets relative to the neurons’ synchronizing phases (average phase: −2.76 ± 0.047 SE radians, p = 0.0004, corresponding to 27 ms away from the mean spike phase at a 15 Hz oscillation cycle), and this offset differed significantly from the mean spike phase (Median test, p = 0.027). This effect was particularly pronounced for RPE cells in ACC (Supplementary Fig. ) and was consistent across both monkeys (Supplementary Fig. ). Qualitatively similar results were obtained when extracting the preferred encoding phases derived from model deviances (Supplementary Fig. ). The phase of maximal encoding did not differ with varying numbers (4, 6, 8, 10) of phase bins used (circular–linear correlation, R = 0.013, p = 0.76). We next compared the relative phases showing maximal encoding between neurons encoding Outcome, RPE, and Outcome History and found that their average, relative encoding phases differed significantly (Watson–Williams test; F = 83.4, p ~0; Fig. ). These results show that the preferred spike phase and the encoding phases are typically dissociated from one another and, for RPEs, were systematically offset from the preferred (mean) spike phase. Given these results, we next tested whether the dissociation of spike and encoding phases could be explained by systematic phase shifts due to differences in the peak oscillation frequencies within the beta band. We validated that this was not the case and found that the three sets of neuronal encoding clusters synchronized on average at the same ~15 Hz center frequency (Kruskal–Wallis, χ2 = 0.95, p = 0.62; Supplementary Fig.
), and that they showed maximal phase-of-firing encoding at similar frequencies (also ~15 Hz) (Kruskal–Wallis test, χ2 = 0.39, p = 0.82; Supplementary Fig. ). Moreover, the frequency showing the strongest spike-LFP synchronization and the frequency showing the maximal encoding-phase gain matched closely (median frequency ratio: 1 ± 0.01 SE; Supplementary Fig. ). This similarity of synchronization and encoding frequency did not depend on the functional designation (Kruskal–Wallis test, χ2 = 0.047, p = 0.98), nor on the area from which the spikes were sampled (Kruskal–Wallis test, χ2 = 0.53, p = 0.77).

Here, we found a significant proportion of neurons whose phase-of-firing in a band-limited beta frequency conveyed significantly more information about three learning variables than their firing rates alone. This encoding-phase gain was evident for spikes generated within the ACC, LPFC, and STR of nonhuman primates in a [0.1 0.7] s period of outcome processing during reversal learning performance. Phase-of-firing encoding was most prominent at the 10–25 Hz beta frequency at which spikes synchronized to the local fields across areas. However, the strength of spike-LFP phase synchronization could not necessarily explain the strength of the phase-of-firing encoding. Rather, for many neurons maximal encoding occurred at phases away from the preferred spiking phase. The dissociation of spiking and encoding phases was particularly prominent for information about the RPE. Taken together, these results show population-level information multiplexing of learning variables at segregated phases of a beta oscillation across synchronized medial and lateral fronto-striatal loops. These findings suggest that spike-LFP oscillation phases are carriers of information, above and beyond that of a phase-blind firing rate code.
The gain of information through the phase of firing provides an intriguing dynamic code that could link principles of efficient neuronal information transmission with the demands of representing multiple types of information in the same dynamical neural system. We found that three critical variables needed for adjusting behavior are represented in partly segregated neuronal populations not only in their firing rates, but also in phase-specific firing at a beta frequency that is shared among ACC, LPFC, and STR. This finding suggests that the beta frequency could serve as an important conduit for the fast distribution of learning-related information within fronto-striatal networks. Prior studies have shown that ACC, LPFC, and STR causally contribute to fast learning of object values. With the ACC lesioned, rhesus monkeys fail to use outcome history for updating values and show perseverative behaviors. Without the LPFC, rhesus monkeys fail to recognize when a previously irrelevant object becomes relevant, as if they fail to calculate the RPEs needed for updating their attentional set. When the anterior STR is lesioned, nonhuman primates tend to stick to previously learned behavior and show a lack of sensitivity to reward outcomes. These behavioral lesion effects are consistent with the important role each of these brain areas plays in tracking the history of recent outcomes, registering newly encountered (current) outcomes, and calculating the unexpectedness of experienced outcomes (prediction error). Consequently, our finding of segregated neuronal ensembles encoding Outcome, Prediction Error, and Outcome History complements a large literature that documents how these variables are represented in the firing of neurons across fronto-striatal areas. What has been left unanswered, however, is how this firing rate information about multiple variables emerges at similar times and in similar proportions across areas.
Prior studies suggest that firing rate correlations between brain areas are relatively weak and poor candidates for veridical information transfer, while temporally aligning the spike output of many neurons to the phases of precisely timed, synchronized packets is, theoretically, a particularly powerful means of affecting postsynaptic neuronal populations. Our findings support this notion of a temporal code using synchronized oscillations by showing that those neurons that carry critical information in their firing rates also tend to synchronize long-range between ACC, LPFC, and STR at a shared 10–25 Hz beta frequency. This beta frequency is thus a candidate means for enhancing distributed information transfer, because the spike output of many neurons is concentrated at the same phase and thus activates postsynaptic membranes at similar times. This scenario of beta rhythmic information exchange within fronto-striatal networks is supported by previous nonhuman primate studies that demonstrated 10–25 Hz beta rhythmic synchronization during active task processing states between ACC and LPFC, between PFC and STR, between ACC and FEF, between LPFC and FEF, and between LPFC or FEF and posterior parietal cortex. Each of these studies has shown short-lived rhythmic long-range synchronization between distant brain areas during cognitive tasks at a ~15 Hz frequency, similar to studies in humans. Our findings critically complement these studies by revealing that 10–25 Hz spike-LFP synchronization is prevalent not only during cognitive processing, but also during the processing of outcomes after attention has been deployed and choices have been made. During this post-choice outcome processing, fronto-striatal circuits are likely to adjust their synaptic connection strengths to minimize future prediction errors and improve performance.
Our results suggest that this updating utilizes beta rhythmic activity fluctuations during the post-choice outcome processing period. Our finding that spiking output carries separable types of information at different phases of the same oscillation frequency has potentially far-reaching implications. Because Outcome, Prediction Error, and Outcome History were encoded at separate phases, the population of neurons effectively multiplexes independent information streams at different phases of beta-synchronized firing. This stands in contrast to prior studies reporting that long-range beta rhythmic synchronization between LPFC, ACC, or STR in the primate encoded relevant task variables via the strength of beta synchrony. For example, some prefrontal cortex neurons synchronize more strongly at beta frequencies to posterior parietal areas when subjects choose one visual category over another, or when they maintain one object over another in working memory. These findings are broadly consistent with a communication-through-coherence schema in which upstream senders are more coherent with downstream readers when they successfully compete for representation. Yet it has remained unclear how such a scheme may operate when multiple items must be multiplexed and transmitted in the same recurrent network. Computationally, multiplexing and the efficient transmission of information can operate in tandem when the temporal organization of activity is exploited at the sending and receiving sites. Consequently, selective synchronization between distal sites could be leveraged to enhance transmission selectivity, whereas temporally segregated information streams could enhance transmission capacity. Our results resonate with this view by showing that neurons that synchronize long-range at one oscillation phase carry information about any of three learning variables at phases systematically offset from the synchronizing phase.
By finding evidence for such temporal multiplexing in the beta frequency band, we critically extend previous reports of phase encoding of information for object features, object identities, and object categories at theta, alpha, and gamma frequencies. In our study, the beta-phase-enhanced encoding applied to three complex learning variables that were needed to succeed in the behavioral learning task. In particular, the presence of reward prediction error information provides a critical teaching signal that indicates how much synaptic connections should change to represent future value expectations more accurately. Our results suggest that this updating can utilize spike-timing-dependent plasticity mechanisms that are tuned to firing phases ~27 ms away from the preferred synchronization phase in the beta frequency band. How such a temporal organization in the beta band is used in the larger fronto-striatal network will be an important question for future studies. A caveat in interpreting the phase-modulated coding we report is that it is consistent with multiple coding schemes beyond phase-based multiplexing. For example, spiking activity may be phase-synchronized in one condition but not another, or alternatively, conditions may be encoded on separate phases. Our results provide support for both coding schemes. Outcome cells resemble coding via an asynchronous code; that is to say, spike-LFP phase synchronization is higher in one condition than another, with no evidence of phase differences between conditions. On the other hand, RPE and Outcome History cells show evidence of phase-separation coding. These cells showed no significant difference in PPC between conditions but did show a (near significant) trend towards firing on different phases. These suggestions depend on a proper estimation of phase, which can be influenced by the level of background noise and the degree of synchrony of individual cells within a population.
However, we believe this would not affect the main conclusions of our study, as we observed (1) significant increases in encoding-phase gain both when oscillatory bursts were prominent and when they were not, and (2) significant phase encoding gain when controlling for outcome-induced activity. So far, evidence in humans and rodents suggested that processes linked to beta frequency activity during the evaluation of outcomes support the detection of errors and the updating of erroneous internal predictions. In fact, there have been conflicting views on whether beta oscillations related to outcome signals are more likely to reflect a weighted integration of recent outcomes, or the unexpectedness of the current, observed outcome relative to recent outcomes. Our findings reconcile these viewpoints by documenting that encoding of Outcome History weights and of Prediction Errors coexist in the same circuit at the same oscillation frequency in phase-dependent firing of single neurons. We found that the beta phase allowing maximal encoding of Prediction Errors was offset ~27 ms on average from the phase at which most spikes synchronized to the local fields. Such a dissociation of mean spike phase and encoding phase has been reported previously for the beta frequency band in parietal cortex, where maximal information about joint saccadic and joystick choice directions was best predicted by spike counts ~50 degrees away from the preferred beta spike phase. Such phase offsets underlying maximal encoding in parietal cortex, as well as in ACC, LPFC, and STR in our study, provide constraints on the possible circuit mechanisms that permit temporal segregation of input streams through phase-specific oscillatory dynamics. One possible circuit mechanism that implements and utilizes multiplexed information streams through phase-specific firing has been described and computationally modeled specifically for the low 10–20 Hz frequency range.
This work suggests that distinct sets of pyramidal neurons can encode distinct input streams in their firing phases at 10–20 Hz beta activity when these input streams arrive with a phase offset to each other, e.g., when they arrive sequentially in time. According to this schema, a first input stream activates pyramidal neurons in deep cortical layers that feed information to superficial layers, whose interlaminar inhibitory connections close an interlaminar reverberant loop of activity. This interlaminar ensemble follows a beta activity rhythm due to cell-specific dynamics that maintain the beta-phasic firing of active neurons. When a second input stream activates another set of pyramidal cells within the same beta rhythmic neural population, the input timing of that second stream is maintained at a different phase than that of the first activated ensemble. The parallel coding of information at a common beta rhythm in these models provides a qualitative proof of concept for phase-specific encoding of multiple types of inputs in larger beta rhythmic ensembles, and suggests a possible mechanistic realization of enhanced encoding by the phase of firing in the beta band. Moreover, these models also suggest possible reasons why encoding phases and the average, preferred spiking phases can differ. In our study, RPE encoding was maximal for spikes that occurred 27 ms away from the preferred beta phase at which most spikes of the neurons were elicited. In the context of these models, a phase offset could indicate that RPEs are part of an input stream that arrives with a delay relative to the major beta rhythmic input stream that the neuron sees. For example, input carrying prediction error information might arrive from the ventral tegmental area, while the dominant beta rhythmic firing (that determines the mean phase) might be based on local cortical mechanisms coupled to other cortical areas.
Consistent with such a scenario, a prior rodent study has shown that the phase of phase-synchronous prefrontal cortex neurons shifted with the learning of new reward locations, consistent with a dopaminergic influence from the ventral tegmental area on the phase of spike-LFP synchrony. Alternatively, a 27 ms phase offset for encoding prediction error information might have a local origin, with the delay reflecting the computation of the error in prediction based on input that carries the prediction itself. This scenario gains plausibility when considering that a prediction error reflects a transformation of two signals, i.e., it is the difference of the expected value and the received outcome. This transformation will take time. In the temporal domain, this delay is likely reflected in a latency difference, with prediction error signals typically emerging after outcomes are processed (which we found, Supplementary Fig. ). In a recurrent circuit, this delay in computing an error might additionally be reflected in a phase offset. According to this view, the 27 ms offset in maximal encoding of RPE indicates a local transformation of two input streams (predicted value and outcome) into their difference (the error in value prediction). Future work needs to specify whether these scenarios are realized by beta rhythmically firing ensembles of neurons and how long-lasting and robust the encoding with phase-specific firing is with regard to the overall firing rates and firing variability of individual neurons during active brain states. In summary, we have documented that learning variables are better encoded when taking into account the phase of firing of neurons that synchronize long-range across primate fronto-striatal circuits. These neurons that showed a phase encoding gain also carried information in overall firing rate modulations, which clarifies that an asynchronous rate code and a synchronous temporal code coexist in the same circuit.
By exploiting the temporal structure endowed in long-range neuronal synchronization, our findings suggest how neuronal assemblies in one brain area could be read out by neural assemblies in distally connected brain areas. This phase-of-firing schema entails key features required of a versatile neural code, including efficient neural transmission and the effective representation of variables needed for adaptive goal-directed behavior.

Experimental animals

Data were collected from two adult, 9- and 7-year-old, male rhesus monkeys (Macaca mulatta) following procedures described in ref. . All animal care and experimental protocols were approved by the York University Council on Animal Care and were in accordance with the Canadian Council on Animal Care guidelines.

Behavioral paradigm

Monkeys performed a feature-based reversal-learning task that required covert attention to one of two stimuli based on the reward associated with the color of the stimuli. The rewarded stimulus color remained identical for ≥30 trials and reversed without an explicit cue. The reward reversal required monkeys to utilize trial outcomes to adjust to the new color-reward rule. Details of the task have been described before. Each trial started when subjects foveated a central cue. After 0.5–0.9 s, two black-and-white gratings appeared. After another 0.4 s, the stimuli either began to move within their apertures in opposite directions (up-/downwards) or were colored with opposite colors (red/green or blue/yellow). After another 0.5–0.9 s, they gained color when the first feature was motion, or gained motion when the first feature had been color. After 0.4–0.1 s, the stimuli could transiently dim. The dimming occurred either in both stimuli simultaneously, or separated in time by 0.55 s. Dimming represented the go-cue to make a saccade in the direction of the motion when it occurred in the stimulus with the reward-associated color.
The dimming acted as a no-go cue when it occurred in the stimulus with the non-rewarded color. A saccadic response was only rewarded when it was made in the direction of motion of the stimulus with the rewarded color. Motion direction and location of the individual colors were randomized within a block. Thus, the only feature predictive of reward within a block was color. Color-reward associations were constant for a minimum of 30 trials. Block changes occurred when 90% performance was reached over the last 12 trials, or when 100 trials were completed without reaching criterion. The block change was uncued. Rewards were deterministic.

Electrophysiology

Extra-cellular recordings were made with 1–12 tungsten electrodes (impedance 1.2–2.2 MOhm, FHC, Bowdoinham, ME) in ACC (area 24), lateral prefrontal cortex (LPFC; areas 46, 8, 8a), or anterior STR (caudate nucleus (CD) and ventral striatum (VS)) through rectangular recording chambers (20 by 25 mm) implanted over the right hemisphere (Supplementary Fig. ). Electrodes were lowered daily through guide tubes using software-controlled precision micro-drives (NAN Instruments Ltd., Israel and Neuronitek, Ontario, Canada). Data amplification, filtering, and acquisition were done with a multichannel acquisition system (Neuralynx). Spiking activity was obtained following a 300–8000 Hz passband filter, further amplification, and digitization at a 40 kHz sampling rate. Sorting and isolation of single-unit activity was performed offline with the Plexon Offline Sorter, based on analysis of the first two principal components of the spike waveforms. Experiments were performed in a custom-made sound-attenuating isolation chamber. Monkeys sat in a custom-made primate chair viewing visual stimuli on a computer monitor running at a 60 Hz refresh rate. Eye positions were monitored using a video-based eye-tracking system (EyeLink, SRS Systems) calibrated prior to each experiment to a nine-point fixation pattern.
Eye fixation was controlled within a 1.4°–2.0° radius window. During the experiments, stimulus presentation, monitored eye positions, and reward delivery were controlled via MonkeyLogic. The liquid reward was delivered by a custom-made, air-compression-controlled mechanical valve system. Recording locations were aligned and plotted onto representative atlas slices.

Data analysis

The analysis was performed with custom Matlab code (Matlab 2019a), using functions from the fieldtrip toolbox. For elastic-net regression, the glmnet package in R was used. Only correct and error responses were analyzed. Error responses included those where responses were made to the incorrect target, or in the incorrect response window. We included all trials from learned blocks, with a minimum of two blocks, unless otherwise indicated. The trial immediately following a reversal event was not included in the analysis. Learned blocks were defined as ones where animals reached 90% correct responses within the last 10 trials of the block. Standard errors of the median were estimated via bootstrapping (200 repetitions, unless otherwise indicated). Data analyses proceeded through multiple steps. First, we quantified how outcomes (reward/no-reward for correct/error trials) affected monkeys’ choices. After showing that outcomes are integrated over recent trials, we next asked how this is reflected in the firing rate activity of individual neurons during a post-outcome period, using a penalized GLM. We used a data-driven clustering approach to assign functional labels to cells exhibiting similar sensitivities to experienced outcomes in their rate. On the basis of these functional labels, we extracted a corresponding encoding metric for neurons in each functional cluster. We then analyzed how the encoding metrics depend on time, or on the phase of oscillatory activity in the LFP.
For the latter analysis, we used standard spectral decomposition techniques and spike-phase consistency measures to characterize how spikes and phases between distal electrodes are related. We quantified and compared differences in phase-dependent encoding in terms of (1) the degree of phase-dependent modulation of encoding, and (2) the phase at which encoding is maximal.

Behavioral analysis

To determine the timescale over which past outcomes are integrated, we used a binomial GLM:

1
$$Y = \mathop{\sum}\limits_{i=1}^{5} \beta_{t-i} X_{t-i},$$

where Y was the current outcome and β_{t−i} is the influence of outcome X_{t−i} on trial t − i. The outcome for trial t − 5 was defined as a nuisance variable that accounted for all responses occurring over very long time-scales (similar to ref. ).

Rate encoding of outcome history

To test how individual units integrated outcome history, we used a Poisson GLM:

2
$$\log(\lambda) = \mathop{\sum}\limits_{i=0}^{5} \beta_{t-i} X_{t-i},$$

where λ was the conditional intensity (spike count) and β_{t−i} is the influence of outcome X_{t−i} on trial t − i. Firing counts on each trial were determined in a [0.1 0.7] s window after outcome onset. Neurons were included in the analysis if they were isolated for more than 25 (learned) trials across at least two blocks, and if they showed an overall firing rate of >1 Hz. With these criteria, we analyzed a total of 1460 neurons, with an average of 230.56 ± 3.44 trials and 5.75 ± 0.082 blocks. To mitigate issues of multi-collinearity and extract only the most predictive regressors, we employed elastic-net regularization using the R package glmnet. This procedure shrinks small coefficients to zero, and smoothly interpolates between ridge and lasso regularization by controlling a parameter alpha (with alpha = 0 corresponding to ridge regression, and alpha = 1 to lasso regression).
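The lagged-outcome regressors entering both GLMs can be sketched as follows. The elastic-net fit itself was performed with glmnet in R, so only the design-matrix construction is shown here; the function name and the example outcome vector are illustrative, not the authors' code.

```python
import numpy as np

def lagged_outcome_design(outcomes, n_lags=5):
    """Build the matrix of past-outcome regressors X[t-1..t-n_lags]
    (one column per lag) and the current-trial outcome vector y.

    outcomes : 1-D sequence of 0/1 trial outcomes, in trial order.
    The first n_lags trials, which lack a full history, are dropped.
    """
    o = np.asarray(outcomes)
    # Column i holds outcome at lag t-i for every usable trial t
    X = np.column_stack([o[n_lags - i:len(o) - i] for i in range(1, n_lags + 1)])
    y = o[n_lags:]
    return X, y
```

For a ten-trial outcome sequence this yields a 5 × 5 regressor matrix; the first row contains the outcomes at lags t−1 through t−5 for the sixth trial.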
We used an alpha of 0.95, which tends to select only one regressor in the presence of collinearity (as in pure lasso regression), while at the same time avoiding issues with degeneracy if correlations among regressors are particularly strong. The optimal value of the shrinkage parameter (lambda) was the minimum selected by 10-fold cross-validation. To assess model stability and extract significant fits, we used a bootstrap approach, whereby trials were sampled with replacement 1000 times and the procedure was rerun. As the LASSO shrinks non-valuable predictors to zero, a model fit was said to be significant if at least one relevant regressor (outcome t − 4 to t − 0) was non-zero more than 95% of the time.

Functional clustering based on neural encoding

Our ultimate goal is to describe how encoding varies as a function of phase (and time). However, encoding cells showed variability in how they responded to experienced outcomes (e.g., Fig. ; Supplementary Fig. ). Thus, in order to properly evaluate changes in encoding in time and phase, we must first define populations of cells that encode similar types of information. To determine the putative function of significantly encoding units, we used a clustering approach via bootstrapped k-means. We clustered cells on the basis of their mean beta weights as determined by the penalized regression model (see above). As a preprocessing step, for units where the current outcome was negatively encoded (i.e., that encode errors), we flipped the sign of every coefficient in that model. This has the effect of erasing the directionality of any functional association, and thus collapses neurons with similar functions (for example, Error or Correct encoding units become Outcome encoding units). Cells were independently clustered for each area. We clustered cells on the basis of their clustering stability. We opted for this method because k-means clustering can be sensitive to initial conditions.
This involved three steps: (1) choosing the optimal number of clusters Nc, (2) measuring clustering stability, and (3) performing the final clustering. For steps 1–2, we used k-means clustering with a cosine distance metric, which is insensitive to the magnitude of the vector and is instead concerned with its direction, unlike, for example, the Euclidean distance. In other words, we clustered based on the relative pattern of beta weights of each cell, irrespective of differences in magnitude between cells. To determine the optimal number of clusters, we extracted the silhouette metric over many bootstrap iterations. In brief, cells were sampled with replacement 1000 times and, for each iteration, the optimal number of clusters was extracted where the silhouette was maximal. The overall optimal number of clusters Nc was the mode over all bootstrap iterations. Next, we assessed the clustering stability of pairs of cells. To do so, we built a similarity matrix S via a bootstrap approach, where similarity was defined as the proportion of times that pairs of cells were clustered together. First, we resampled individual cells with replacement. Next, we ran k-means with cosine distance and Nc clusters. For units that were clustered together, their respective entry in the similarity matrix was incremented by one. Because bootstrapping could sample the same units twice, such pairs were ignored. Bootstrapping was run 100,000 times. To compute the final cluster assignment, we first formed a dissimilarity matrix D = 1 − S, before performing agglomerative clustering with Euclidean distance and Nc clusters.
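The bootstrap co-clustering stability estimate can be sketched as below. This is a simplified illustration with assumed names: the clustering step is passed in as a callable (standing in for the k-means-with-cosine-distance step described above), and the bootstrap count is reduced from the 100,000 iterations used in the study.

```python
import numpy as np

def coclustering_similarity(coeffs, cluster_fn, n_boot=200, seed=0):
    """Pairwise clustering stability: the fraction of bootstrap resamples
    in which two cells are assigned to the same cluster.

    coeffs     : (n_cells, n_features) array of GLM beta weights
    cluster_fn : callable mapping a coefficient matrix to integer labels
    Pairs duplicated by the resampling are ignored, as in the text.
    Returns an upper-triangular similarity matrix S (dissimilarity = 1 - S).
    """
    coeffs = np.asarray(coeffs)
    rng = np.random.default_rng(seed)
    n = len(coeffs)
    together = np.zeros((n, n))
    counted = np.zeros((n, n))
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cells with replacement
        uniq = np.unique(idx)                # duplicated cells counted once
        labels = np.asarray(cluster_fn(coeffs[uniq]))
        for a in range(len(uniq)):
            for b in range(a + 1, len(uniq)):
                i, j = uniq[a], uniq[b]
                counted[i, j] += 1
                if labels[a] == labels[b]:
                    together[i, j] += 1
    with np.errstate(invalid="ignore"):      # pairs never co-sampled give NaN
        S = together / counted
    return S
```

With a toy "clusterer" that splits cells by the sign of their first coefficient, cells with same-signed weights reach a stability of 1 and opposite-signed cells a stability of 0.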
Metric for outcome, outcome history, and prediction error

We quantified the degree of encoding of Outcome (E_outcome), Outcome History (E_history), and Reward Prediction Error (E_RPE) on the basis of the GLM weights for trials −1 and 0:

(3) $E_{\mathrm{outcome}} = \mathrm{abs}(B_0)$,

(4) $E_{\mathrm{history}} = \mathrm{abs}(B_{-1} + B_0)$,

(5) $E_{\mathrm{RPE}} = \mathrm{abs}(B_0 - B_{-1})$.

We refer to these generically as Encoding Metrics.

Latency analysis

To determine the latency of encoding for each functional cluster, we performed a time-resolved analysis (Supplementary Fig. ). On the basis of our previous results showing that the outcomes on trials 0 and −1 were most predictive (Fig. ), we used a simpler GLM of just the current and previous outcome. For the response variable, we calculated the spike density using a sliding Gaussian window, with a 200 ms window and 50 ms standard deviation. We performed this analysis in a [−0.4 0.7] s window around outcome onset. We thus obtained a time-resolved estimate of encoding. To determine the latency of significant encoding, we looked for time points in the post-outcome period that were significantly different from the pre-outcome period. We thus determined, for each cell, when encoding exceeded a threshold criterion in a time-of-interest. First, we z-score normalized each individual cell's encoding metric to the pre-outcome period ([−0.4 0] s). Next, we asked, for each time point, whether the population response was significantly different from zero via a Wilcoxon signrank test. We then extracted the largest cluster mass of contiguous significant time points (at an alpha = 0.05, e.g., ) to find a time-of-interest. Finally, we extracted, for each individual cell, the time point where the area under the curve of the encoding metric in this time-of-interest reached 10% of the total.
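For the last step, the per-cell latency criterion (the time at which the encoding metric's cumulative area within the time-of-interest reaches 10% of its total) can be sketched as follows; the function name and arguments are illustrative, not the authors' Matlab code:

```python
import numpy as np

def encoding_latency(t, metric, frac=0.10):
    """Return the time at which the cumulative area under the (non-negative)
    encoding-metric curve first reaches `frac` of its total, within the
    time-of-interest spanned by `t`."""
    auc = np.cumsum(metric)
    return t[np.searchsorted(auc, frac * auc[-1])]
```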
Thus, we obtain for each encoding cluster a distribution of latencies at which cells started to show significant encoding of Outcome, Outcome History, or Prediction Error.

Spectral decomposition and spike-LFP phase synchronization

To determine how encoding varied as a function of phase, we extracted the estimate of phase at the time of spikes, for frequencies from 6 to 60 Hz. We first characterized the degree of spike-phase synchronization, described below. We focused spike-phase analysis on pairs of distally recorded sites, thus obviating any concerns of spike energy bleeding into the LFP . For frequencies from 6 to 30 Hz, the resolution was 1 Hz, and above that it was 2 Hz. For every frequency F, we determined the spike-LFP phase by extracting an LFP segment centered on the spike of length 5/F (i.e., 5 cycles), as is standard to balance temporal and spectral resolution. Spectral decomposition was done via an FFT after applying a Hanning taper. This procedure was applied separately to the pre-outcome period [−1 0] s and the post-outcome period [0.1 1] s. The strength of spike-LFP synchronization was quantified using the pairwise-phase consistency (PPC), which is unbiased by the number of spikes . The PPC is quantified on the basis of pairwise differences between spike-phases. If spikes tend to fire on specific phases, phase differences will be concentrated, and thus the PPC will take on a high value, whereas if spikes are distributed randomly relative to the LFP phase, phase differences will be random and the PPC will tend towards zero. The PPC effect size was determined as previously reported ,

(6) $\text{Effect size} = \dfrac{1 + 2\sqrt{\mathrm{PPC}}}{1 - 2\sqrt{\mathrm{PPC}}}.$

This effect size can be interpreted as the relative increase in spike rate at the cell's preferred (mean) phase over its anti-preferred (opposite) firing phase. For example, a PPC value of 0.01 corresponds to a 1.5 times greater spike rate at the preferred phase.
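The PPC and Eq. 6 can be written compactly; this is a generic numpy sketch of the published estimator, not the authors' pipeline code:

```python
import numpy as np

def ppc(phases):
    """Pairwise-phase consistency: the average cosine of all pairwise
    spike-phase differences, computed in closed form. Unbiased by spike
    count; ~0 for random phases, 1 for perfect locking."""
    z = np.exp(1j * np.asarray(phases))
    n = len(z)
    return (abs(z.sum()) ** 2 - n) / (n * (n - 1))

def ppc_effect_size(p):
    """Relative spike-rate increase at the preferred vs. anti-preferred
    phase (Eq. 6)."""
    return (1 + 2 * np.sqrt(p)) / (1 - 2 * np.sqrt(p))
```

As in the text, a PPC of 0.01 maps to a 1.5-fold rate increase at the preferred phase.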
We determined the frequencies at which spike-LFP phase synchronization was significant by finding peaks in the PPC spectrum. A cell was said to synchronize to a particular frequency if the following criteria were met: (1) the peak had to be above a threshold of 0.005, (2) show a minimum prominence of 0.005, and (3) show a significant Rayleigh test (i.e., phase concentration). To test for inter-areal differences in spike-beta synchronization, we extracted the maximal significant/prominent PPC peak in the [10 25] Hz band that showed significant encoding. For those encoding cells that did not show significant PPC peaks, we extracted the frequency of the maximal PPC in this band instead. We tested for differences in synchronization strength using a one-way ANOVA, and report on pairwise comparisons after multiple comparison correction.

Phase-of-firing dependent encoding of outcome, outcome history, and prediction error

To determine if spikes falling on certain phases of the LFP were more informative, we re-ran the (reduced) GLM on phase-binned spikes, using only the previous and current outcomes (see "Latency analysis" above). We first aligned all spike-triggered phases to the circular mean of their distribution. Phases were extracted at the frequency of the corresponding maximal peak in the [10 25] Hz band of the PPC. However, if spikes are phase locked to an LFP, the firing rate around the preferred phase will naturally be higher. Thus, we used non-equal bin sizes, adjusting the bin limits such that all bins had the same spike count. On average, the spike-count-equalized bins had a spike rate of 1.85 Hz (range: 1.80–1.86 Hz across bins). Phase bins with equalized spike counts were on average 7.5 percentage points of the cycle larger for the bin around the non-preferred phase (spanning ~21% of the full cycle) than for the bin around the preferred phase (spanning ~13.5% of the full cycle).
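The equal-spike-count binning can be sketched with quantile-based bin edges after aligning phases to the circular mean (a hypothetical numpy version; the function name and number of bins are illustrative):

```python
import numpy as np

def equal_count_phase_bins(phases, n_bins=6):
    """Align spike phases to their circular mean, then place bin edges at
    quantiles so every bin holds (nearly) the same number of spikes. Bins
    around the preferred (mean) phase are therefore narrower than bins
    around the anti-preferred phase."""
    phases = np.asarray(phases)
    mean_phase = np.angle(np.exp(1j * phases).sum())
    aligned = np.angle(np.exp(1j * (phases - mean_phase)))  # wrap to (-pi, pi]
    edges = np.quantile(aligned, np.linspace(0, 1, n_bins + 1))
    edges[0] -= 1e-9                                        # include the minimum
    bin_idx = np.digitize(aligned, edges) - 1
    return aligned, np.clip(bin_idx, 0, n_bins - 1), edges
```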
We then re-ran the GLM analysis on spikes falling within a particular bin and computed the encoding metrics as described previously. To aid in comparison, we also fit the model using randomly permuted phases (thus preserving the overall rate response structure). We ignored spike-LFP pairs where the GLM could not converge to a solution and threw a warning, or where the beta coefficients were above 20 (however, relaxing or tightening this constraint did not qualitatively change the results). To determine the phase and degree of phase-dependent encoding, we fit a cosine function to the phase-binned encoding values (illustrated in Fig. ) , . One encoding value was selected for each spike-LFP pair, on the basis of the cluster assignment of the spiking neuron. From this fit, we obtain three values: T (phase offset, or phase of the cosine maximum), A (amplitude), and M (overall mean, or offset). The value T is thus the phase at which encoding is maximal, relative to the preferred firing phase. To compare the strength of encoding across functional clusters, we computed the empirical phase-of-firing gain:

(7) $\mathrm{PFG}_E = \dfrac{2A}{M}.$

This quantity represents the difference in encoding between the peak and trough relative to the overall encoding strength. A PFG_E value of 0 implies that phase-of-firing adds no information (corresponding to a pure rate code), whereas PFG_E = 1 means that the peak-to-trough difference in encoding equals 100% of the overall encoding strength. To determine if phase significantly added information above that of a phase-blind rate code, we opted for a randomization approach. For each cell, we first permuted the phase label of each spike, re-ran the GLM, re-fit a cosine, and extracted the encoding phase-of-firing gain. This procedure was repeated 50 times, from which we obtain a distribution PFG_R of randomized encoding gains.
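The cosine fit and Eq. 7 reduce to a small linear least-squares problem, fitting y = M + A·cos(φ − T) via cosine/sine regressors. This is a hypothetical numpy sketch with illustrative names, not the authors' fitting code:

```python
import numpy as np

def fit_cosine(bin_phases, bin_values):
    """Least-squares fit of y = M + A*cos(phase - T) to phase-binned
    encoding values, via the linear model y ~ 1 + cos(phase) + sin(phase)."""
    X = np.column_stack([np.ones_like(bin_phases),
                         np.cos(bin_phases), np.sin(bin_phases)])
    m, bc, bs = np.linalg.lstsq(X, bin_values, rcond=None)[0]
    A = np.hypot(bc, bs)       # amplitude of the cosine
    T = np.arctan2(bs, bc)     # phase of the cosine maximum
    return T, A, m

def phase_of_firing_gain(A, M):
    """Empirical phase-of-firing gain (Eq. 7): peak-minus-trough encoding
    difference (2A) relative to the overall encoding strength (M)."""
    return 2 * A / M
```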
For this procedure, because phase labels were permuted, the distribution of phases remains the same, and thus the bin widths need not be re-calculated. We report on the "excess" PFG, defined as the difference between the empirical and the median of the randomized phase-of-firing gain, which we refer to in the manuscript as the Encoding Phase-of-Firing Gain (EPFG):

(8) $\mathrm{EPFG} = \mathrm{PFG}_E - \mathrm{median}(\mathrm{PFG}_R).$

A positive value implies that encoding is modulated by phase above what would be expected by chance. To assess whether individual units showed significant encoding, we compared PFG_E against the null distribution PFG_R. Units were deemed significant at an alpha level of 0.05. The procedure described above destroys any within-trial correlation between spike phases. Thus, in a related analysis, we determined PFG_R by adding a random phase in the range [0 2π] to all spikes within a single trial, thus preserving their correlation structure. In this case, the phase bin widths were re-calculated for every randomization. The EPFG effectively quantifies the difference in mean firing rates between conditions, as a function of LFP phase. However, this does not necessarily imply that the information is easily decodable by other brain circuits. To address this question, we asked how much variance can be explained by the model fit to data in each phase bin. To this end, for each fit on each phase bin, we extracted the percent deviance explained (analogous to the ANOVA percent variance explained, but modified for a Poisson GLM). The percent deviance explained D² was calculated as :

(9) $D^2 = 1 - \dfrac{\text{Residual Deviance}}{\text{Null Deviance}}.$
The deviance for a Poisson distribution is defined as:

(10) $\mathrm{Deviance} = 2\sum_i^n \left[ Y_i \log\!\left(\dfrac{Y_i}{\lambda_i}\right) - (Y_i - \lambda_i) \right],$

where Y_i is the observed spike count on trial i, and λ_i is the predicted spike count. We then determined how D² varied as a function of phase using the same procedure as described above; namely, we fitted a cosine to the D² of each phase bin, extracted the amplitude and phase, and compared them to a null distribution where phases had been permuted. We call this quantity the Encoding Phase-of-Firing Gain (D²), or EPFG_D². We tested the stability of encoding across phase bins for each neuron (with significant rate encoding) by determining the sign of the encoding metric (i.e., before taking the absolute value). We found that for the vast majority of cell-LFP pairs (~90%), the sign of the encoding metric was the same for all 6/6 phase bins as for the full model. To test the frequency specificity of the EPFG, we extended the above analysis to the larger 6–60 Hz frequency range (Fig. ). We statistically tested the EPFG across frequencies using the Wilcoxon signrank test. To correct for multiple comparisons, we used a cluster-based permutation approach . First, we determined the largest cluster mass of contiguous significant samples (p < 0.05). Next, we shuffled the empirical and randomized PFG_E and PFG_R across cell-LFP pairs to determine a randomized EPFG_R and re-calculated the largest cluster mass. We performed this procedure 200 times. Significant clusters were those whose mass exceeded that of the randomized distribution. We also tested the degree to which our results may be influenced by cue-aligned activity. To this end, we first obtained the average evoked potential for each LFP channel and subtracted this component from individual trials.
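Eqs. 9-10 in code form (a generic numpy sketch of the Poisson deviance and D², not tied to the authors' GLM implementation; the Y·log(Y/λ) term is taken as 0 when Y = 0):

```python
import numpy as np

def poisson_deviance(y, lam):
    """Poisson deviance (Eq. 10); the y*log(y/lam) term is set to 0 for
    zero counts."""
    y, lam = np.asarray(y, float), np.asarray(lam, float)
    term = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / lam), 0.0)
    return 2 * np.sum(term - (y - lam))

def deviance_explained(y, lam_model):
    """Percent deviance explained D^2 (Eq. 9), relative to an
    intercept-only (mean-rate) null model."""
    null = poisson_deviance(y, np.full_like(np.asarray(y, float), np.mean(y)))
    return 1 - poisson_deviance(y, lam_model) / null
```

A perfect model gives D² = 1; the mean-rate model itself gives D² = 0.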
We then performed all steps of the analysis again to compare the original EPFG with an EPFG free from potential cue-aligned biases. To test whether the preferred firing phase or the relative phase with maximal encoding was concentrated above what would be expected by chance, we used the circular Hodges–Ajne test (Fig. ). To determine whether the phase showing maximal encoding differed from the preferred firing phase in each functional encoding cluster, we performed the Median test to assess whether the phase differed from zero (Fig. ). We tested how the strength of phase synchronization related to the strength of phase-of-firing encoding by performing two analyses. First, we compared encoding in cells that showed significant spike-phase synchronization to those that did not. For non-synchronizing cells, we selected the center frequency with the maximal PPC in the [10 25] Hz range, and computed the EPFG at this frequency. We compared EPFG between locking and non-locking populations using the Kruskal–Wallis test (Fig. ). Second, we asked whether spike-phase synchrony in different trial conditions contained similar information to that of the phase-of-firing. To this end, for each encoding cell, we compared trials that were predicted to have the maximal firing rate differences. For Outcome encoding, we compared correct versus error trials. For Reward Prediction Error encoding, we compared correct trials following errors versus error trials following correct trials. For Outcome History cells, we compared errors following errors versus correct outcomes following correct outcomes. We took the absolute difference of the PPC between the two conditions and correlated it with the EPFG of the respective cell using the Spearman rank correlation. In a similar vein, we also tested whether the mean phase differed between the conditions outlined above.
After extracting the mean phase per condition for each cell-LFP pair, we performed a bootstrap test to assess whether the difference in phase between conditions differed from zero . We also tested whether the phase gain depended on the number of bins used to fit the cosine function. We performed the analysis for 4, 6, 8, and 10 bins. We used the Spearman rank correlation to determine if the EPFG was related to the number of bins, and a circular–linear correlation to associate the phase of maximal encoding with the number of bins . We tested for the presence of feature-specific phase-of-firing encoding in those cells clustered as RPE encoding. We calculated the PFG_E for each cell-LFP pair twice, using only trials from blocks where either color 1 or color 2 was rewarded. We analyzed pairs with a minimum of 30 trials, and where the PFG_E was well-defined for both colors. A total of 102 pairs were thus selected. The color 1 analysis used an average of 136 ± 5.4 trials, from an average of 3.15 ± 0.13 blocks. The color 2 analysis used 129 ± 4.85 trials from 3.07 ± 0.12 blocks. We then asked, for each color, whether the PFG_E was above chance (described above). PFG_E could be significant for neither color, for one color (defined as feature-specific encoding), or for both colors (non-feature-specific encoding). We tested whether the relative frequencies of non-encoding, feature-specific encoding, and non-feature-specific encoding differed from chance using a χ² test. We used the same test to determine if the proportion of feature-specific encoding differed between areas.

Cell-type classification and analysis

To determine if phase-modulated encoding of information differed based on cell type, we focused the following analysis on highly isolated single units that showed encoding of learning-relevant variables and significant, prominent spike-beta locking. Detailed information is provided in ref. .
In brief, to distinguish putative interneurons (narrow-spiking) and putative pyramidal cells (broad-spiking) in LPFC and ACC, we analyzed the peak-to-trough duration and the time for repolarization of each neuron. After applying principal component analysis (PCA) to both measures, we used the first principal component to discriminate between narrow- and broad-spiking cells. This allowed for better discrimination than using either measure alone. We confirmed that a two-Gaussian model fit the data better than a one-Gaussian model using the Akaike and Bayesian Information Criteria (AIC, BIC). We then used the two-Gaussian model to define the narrow- and broad-spiking populations. A similar analysis was applied to striatal units to distinguish putative interneurons from medium spiny projection neurons (MSN). Here, we used the peak width and the Initial Slope of Valley Decay (ISVD) :

(11) $\mathrm{ISVD} = \dfrac{V_T - V_{0.26}}{A_{PT}},$

where V_T is the most negative value (trough) of the spike waveform, V_0.26 is the voltage at 0.26 ms after V_T, and A_PT is the peak-to-trough amplitude. After PCA and two-Gaussian modeling (as described above), we defined two cut-off points. The first cutoff was the point at which the likelihood of narrow-spiking cells was three times larger than the likelihood of broad-spiking cells, and vice versa for the second cutoff. We compared differences in Encoding Phase-of-Firing Gain between narrow- and broad-spiking neurons using the Kruskal–Wallis test, independently for each area. To clarify, we analyzed spike-LFP pairs here; thus, the same neuron may be included more than once.

Assessing encoding linked to the temporal evolution of LFP

We assessed how the sites we analyzed were related to the temporal evolution of the LFP in two ways: first, by assessing how the LFP power and phase changed with stimulus onset; and second, by assessing how encoding changed as a function of periods of particularly high or low beta power.
We determined how power and phase were distributed relative to the stimulus or reward onset. As for the spike-aligned analysis, we decomposed the LFP via the Fourier transform after Hanning tapering. We determined the spectral content in the 6–60 Hz frequency window, from [−2 2] s, stepped every 5 ms. Power was taken as the squared magnitude of the spectral representation. Power was normalized for 1/f noise. To determine the spectral peak across sites and epochs, we z-score normalized the power across all time points and epochs for each LFP individually. We report on the median of this normalized quantity. To determine if there was evidence of phase resetting, we performed, for each LFP, a Rayleigh test at every point in time for every frequency, and extracted the Z statistic. We report on the median Rayleigh Z value, with higher values related to a greater phase consistency across trials. Finally, we assessed how encoding varied during burst periods , . We took an approach conceptually similar to Lundqvist and colleagues . For each LFP channel, we first normalized the power for 1/f noise. Next, we averaged this signal in the same [10 25] Hz window we used for the spike-aligned analysis above. Following this, we z-scored beta power within each trial individually. Bursts were defined as periods where the normalized power exceeded 1.5 SD for a minimum of 3 cycles (=45 ms). The burst proportion was defined as the mean across trials at each point in time. To assess how encoding varied as a function of burst periods, we separately selected spikes that occurred either within burst periods or outside of burst periods, before calculating the EPFG as before. We only analyzed cell-LFP pairs with a minimum of 30 spikes after this selection.

Data were collected from two adult, 9 and 7-year-old, male rhesus monkeys (Macaca mulatta) following procedures described in ref. .
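Returning to the burst analysis above, the single-trial burst rule (within-trial z-scored beta power above 1.5 SD for at least 45 ms) can be sketched as follows; a hypothetical numpy version with illustrative names, operating on one trial's band-power trace:

```python
import numpy as np

def detect_bursts(power, fs, min_dur=0.045, z_thresh=1.5):
    """Boolean mask of burst samples: z-score the single-trial band power,
    then keep only suprathreshold runs lasting at least `min_dur` seconds
    (45 ms, i.e. the '3 cycles' criterion in the text)."""
    z = (power - power.mean()) / power.std()
    above = z > z_thresh
    min_len = int(round(min_dur * fs))
    mask = np.zeros_like(above)
    i = 0
    while i < len(above):
        if above[i]:
            j = i
            while j < len(above) and above[j]:
                j += 1                      # end of this suprathreshold run
            if j - i >= min_len:
                mask[i:j] = True            # long enough: mark as burst
            i = j
        else:
            i += 1
    return mask
```

Averaging such masks across trials gives the burst proportion at each time point.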
All animal care and experimental protocols were approved by the York University Council on Animal Care and were in accordance with the Canadian Council on Animal Care guidelines. Monkeys performed a feature-based reversal-learning task that required covert attention to one of two stimuli based on the reward associated with the color of the stimuli. Which stimulus color was rewarded remained identical for ≥30 trials and reversed without explicit cue. The reward reversal required monkeys to utilize trial outcomes to adjust to the new color-reward rule. Details of the task have been described before . Each trial started when subjects foveated a central cue. After 0.5–0.9 s, two black and white gratings appeared. After another 0.4 s, the stimuli either began to move within their aperture in opposite directions (up-/downwards) or were colored with opposite colors (red/green or blue/yellow). After another 0.5–0.9 s, they gained the color when the first feature was motion, or they gained motion when the first feature had been color. After 0.4–0.1 s, the stimuli could transiently dim. The dimming occurred either in both stimuli simultaneously, or separated in time by 0.55 s. Dimming represented the go-cue to make a saccade in the direction of the motion when it occurred in the stimulus with the reward associated color. The dimming acted as a no-go cue when it occurred in the stimulus with the non-rewarded color. A saccadic response was only rewarded when it was made in the direction of motion of the stimulus with the rewarded color. Motion direction and location of the individual colors were randomized within a block. Thus, the only feature predictive of reward within a block was color. Color-reward associations were constant for a minimum of 30 trials. Block changes occurred when 90% performance was reached over the last 12 trials, or 100 trials were completed without reaching criterion. The block change was uncued. Rewards were deterministic. 
Extra-cellular recordings were made with 1–12 tungsten electrodes (impedance 1.2–2.2 MOhm; FHC, Bowdoinham, ME) in the ACC (area 24), prefrontal cortex (LPFC; areas 46, 8, 8a), or anterior striatum (STR; caudate nucleus (CD) and ventral striatum (VS)) through a rectangular recording chamber (20 by 25 mm) implanted over the right hemisphere (Supplementary Fig. ). Electrodes were lowered daily through guide tubes using software-controlled precision micro-drives (NAN Instruments Ltd., Israel, and Neuronitek, Ontario, Canada). Data amplification, filtering, and acquisition were done with a multichannel acquisition system (Neuralynx). Spiking activity was obtained following a 300–8000 Hz passband filter, further amplification, and digitization at a 40 kHz sampling rate. Sorting and isolation of single-unit activity was performed offline with the Plexon Offline Sorter, based on analysis of the first two principal components of the spike waveforms. Experiments were performed in a custom-made sound-attenuating isolation chamber. Monkeys sat in a custom-made primate chair viewing visual stimuli on a computer monitor running at a 60 Hz refresh rate. Eye positions were monitored using a video-based eye-tracking system (EyeLink, SRS Systems) calibrated prior to each experiment on a nine-point fixation pattern. Eye fixation was controlled within a 1.4°–2.0° radius window. During the experiments, stimulus presentation, monitored eye positions, and reward delivery were controlled via MonkeyLogic . The liquid reward was delivered by a custom-made, air-compression-controlled mechanical valve system. Recording locations were aligned and plotted onto representative atlas slices . The analysis was performed with custom Matlab code (Matlab 2019a), using functions from the fieldtrip toolbox . For elastic-net regression, the glmnet package in R was used . Only correct and error responses were analyzed.
Error responses included those where responses were made to the incorrect target, or in the incorrect response window. We included all trials from learned blocks, with a minimum of two blocks, unless otherwise indicated. The trial immediately following a reversal event was not included in the analysis. Learned blocks were defined as ones where animals reached 90% correct responses within the last 10 trials of the block. Standard errors of the median were estimated via bootstrapping (200 repetitions, unless otherwise indicated). Data analyses proceeded through multiple steps. First, we quantified how outcomes (reward/no-reward for correct/error outcomes) affected monkeys' choices. After showing that outcomes are integrated over recent trials, we next asked how this is reflected in the firing rate activity of individual neurons during a post-outcome period using a penalized GLM. We used a data-driven clustering approach to assign functional labels to cells exhibiting similar sensitivities to experienced outcomes in their rate. On the basis of these functional labels, we extracted a corresponding encoding metric for neurons in each functional cluster. We then analyzed how the encoding metrics depend on time, or on the phase of oscillatory activity in the LFP. For the latter analysis, we used standard spectral decomposition techniques and spike-phase consistency measures to characterize how spikes and phases between distal electrodes are related. We quantified and compared differences in phase-dependent encoding in terms of (1) the degree of phase-dependent modulation of encoding, and (2) the phase at which encoding is maximal. To determine the timescale over which past outcomes are integrated, we used a binomial GLM:

(1) $Y = \sum_i^5 \beta_{t-i} X_{t-i},$

where Y is the current outcome and β_{t−i} is the influence of outcome X_{t−i} on trial t−i.
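Eq. 1 corresponds to a logistic (binomial) regression of the current outcome on the five preceding outcomes. The sketch below is hypothetical and not the authors' glmnet code: it simulates outcomes that depend only on the previous trial (an assumed true lag-1 weight of 1.5 on ±1-coded outcomes, plus an intercept of 0.2) and recovers the lagged weights with a plain gradient-ascent logistic fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a session in which P(correct) depends only on the previous outcome.
n, lags = 5000, 5
y = np.zeros(n, dtype=int)
for t in range(1, n):
    p = 1.0 / (1.0 + np.exp(-(0.2 + 1.5 * (2 * y[t - 1] - 1))))
    y[t] = rng.random() < p

# Design matrix of lagged outcomes X_{t-1} .. X_{t-5}, coded as +/-1.
X = np.column_stack([2 * y[lags - i:n - i] - 1 for i in range(1, lags + 1)])
Y = y[lags:]

# Logistic GLM fit by gradient ascent on the mean log-likelihood.
Xd = np.column_stack([np.ones(len(Y)), X])      # prepend an intercept column
beta = np.zeros(lags + 1)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xd @ beta)))
    beta += 0.1 * Xd.T @ (Y - p) / len(Y)
```

Only the lag-1 weight should come out large here; deeper lags carry no conditional information in this simulation.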
The outcome for trial t−5 was defined as a nuisance variable that accounted for all responses occurring over very long time-scales (similar to ref. ). To test how individual units integrated outcome history, we used a Poisson GLM:

(2) $\log(\lambda) = \sum_i^5 \beta_{t-i} X_{t-i},$

where λ is the conditional intensity (spike count) and β_{t−i} is the influence of outcome X_{t−i} on trial t−i. Firing counts on each trial were determined in a [0.1 0.7] s window after outcome onset . Neurons were included in the analysis if they were isolated for more than 25 (learned) trials across at least two blocks, and if they showed an overall firing rate of >1 Hz. With these criteria, we analyzed a total of 1460 neurons, with an average of 230.56 ± 3.44 trials and 5.75 ± 0.082 blocks. To mitigate issues of multi-collinearity, and to extract only the most predictive regressors, we employed elastic-net regularization using the R package glmnet . This procedure shrinks small coefficients to zero, and smoothly interpolates between ridge and lasso regularization by controlling a parameter alpha (with alpha = 0 corresponding to ridge regression, and alpha = 1 to lasso regression) .
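The bootstrap significance rule for these penalized-GLM fits, at least one relevant regressor (outcome t−4 to t−0) non-zero in more than 95% of resamples, amounts to the following check (an illustrative numpy sketch; the actual fits used glmnet in R):

```python
import numpy as np

def significant_fit(boot_betas, relevant_idx, threshold=0.95):
    """boot_betas: (n_boot, n_coef) array of elastic-net coefficients, one
    row per bootstrap resample of trials. The fit counts as 'significant'
    if any relevant regressor survives shrinkage (is non-zero) in more
    than `threshold` of the resamples."""
    nonzero_frac = (boot_betas[:, relevant_idx] != 0).mean(axis=0)
    return bool((nonzero_frac > threshold).any())
```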
Our ultimate goal is to describe how encoding varies as a function of phase (and time). However, encoding cells showed variability in how they responded to experienced outcome (e.g., Fig. ; Supplementary Fig. ). Thus, in order to properly evaluate changes in encoding in time and phase, we must first define populations of cells that encode similar types of information. To determine the putative function of significantly encoding units, we used a clustering approach via bootstrapped K-means. We clustered cells on the basis of their mean beta weights as determined by the penalized regression model (see above). As a preprocessing step, for units where the current outcome was negatively encoded (i.e., encodes errors), we flipped the sign of every coefficient in that model. This has the effect of erasing the directionality of any functional association, and thus collapses neurons with similar functions (for example, Error or Correct encoding units become Outcome encoding units). Cells were independently clustered for each area. We clustered cells on the basis of their clustering stability . We opted for this method because k-means clustering can be sensitive to initial conditions . This involved three steps: (1) choosing the optimal number of clusters Nc , (2) measuring clustering stability, and (3) performing the final clustering. For steps 1–2, we used k-mean clustering with a cosine distance metric, which is insensitive to the magnitude of the vector and is instead concerned with the direction, unlike, for example, the Euclidian distance. In other words, we clustered based on the relative pattern of beta weights of each cell, irrespective of differences in magnitude between cells. To determine the optimal number of clusters, we extracted the Silhouette metric over many bootstrap iterations. In brief, cells were sampled with replacement 1000 times and for each iteration, the optimal number of clusters was extracted where the silhouette was maximal. 
The overall optimal number of clusters Nc was the mode over all bootstrap iterations. Next, we assessed the clustering stability of pairs of cells. To do so, we built a similarity matrix S via a bootstrap approach, where similarity was defined as the proportion of times that pairs of cells were clustered together. First, we resampled with replacement individual cells. Next, we ran K-means with cosine distance and Nc clusters. For units that were clustered together, their respective cell in the similarity matrix was incremented by one. Because bootstrapping could sample the same units twice, these pairs were ignored. Bootstrapping was run 100,000 times. To compute the final cluster assignment, we first formed a dissimilarity matrix D = 1− S , before performing agglomerative clustering with Euclidian distance and Nc clusters. We quantified the degree of encoding of Outcome ( E outcome ), Outcome History ( E history ), and Reward Prediction Error ( E RPE ) on the basis of the GLM weights for trials −1 and 0: 3 [12pt]{minimal} $$E_{} = {}( {B_0} ),$$ E outcome = abs B 0 , 4 [12pt]{minimal} $$E_{} = {}( {B_{ - 1} + B_0} ),$$ E history = abs B − 1 + B 0 , 5 [12pt]{minimal} $$E_{} = {}(B_0 - B_{ - 1})$$ E RPE = abs ( B 0 − B − 1 ) We refer to these generically as Encoding Metrics. To determine the latency of encoding for each functional cluster, we performed a time-resolved analysis (Supplementary Fig. ). On the basis of our previous results showing that the outcome on trial 0 and −1 were most predictive (Fig. ), we used a simpler GLM of just the current and previous outcome. For the response variable, we calculated the spike density using a sliding Gaussian window, with a 200 ms window and 50 ms standard deviation. We performed this analysis [−0.4 0.7] around outcome onset. We thus obtained a time-resolved estimate of encoding. 
To determine the latency of significant encoding, we looked at time points in the post-outcome period that were significantly different from the pre-outcome period. We thus determined, for each cell, when encoding exceed a threshold criterion in a time-of-interest. First, we z-score normalized each individual cell’s encoding metric to the pre-outcome period ([−0.4 0] s). Next, we asked, for each time point, whether the population response was significantly different from zero via a Wilcoxon signrank test. We then extracted the largest cluster mass of contiguous significant time points (at an alpha = 0.05, e.g., ) to find a time-of-interest. Finally, we extracted, for each individual cell, the time point where the area under the curve of the encoding metric in this time-of-interest reached 10% of the total. Thus, we obtain for each encoding cluster a distribution of latencies of when they started to show significant encoding of Outcome, Outcome History, or Prediction Error. To determine how encoding varied as a function of phase, we extracted the estimate of phase at the time of spikes, for frequencies from 6 to 60 Hz. We first characterized the degree of spike-phase synchronization, described below. We focused spike-phase analysis on pairs of distally recorded sites, thus obviating any concerns of spike energy bleeding into the LFP . For frequencies from 6 to 30 Hz, the resolution was 1 Hz, and above that it was 2 Hz. For every frequency F , we determined the spike-LFP phase by extracting an LFP segment centered on the spike of length 5/F (i.e., 5 cycles), as is standard to balance temporal and spectral resolution. Spectral decomposition was done via an FFT after applying a Hanning taper. This procedure was applied separately to the pre-outcome period [−1 0]s, and the post-outcome period [0.1 1]s. The strength of spike-LFP synchronization was quantified using the pairwise-phase consistency (PPC), which is unbiased by the number of spikes . 
The PPC is quantified on the basis of pairwise differences between spike-phases. If spikes tend to fire on specific phases, phase differences will be concentrated, and thus the PPC will take on a high value, whereas if spikes are distributed randomly relative to the LFP phase, phase differences will be random and the PPC will tend towards zero. The PPC effect size was determined as previously reported ,

$$\mathrm{Effect\ size} = \frac{1 + 2\sqrt{\mathrm{PPC}}}{1 - 2\sqrt{\mathrm{PPC}}}.$$ (6)

This effect size can be interpreted as the relative increase in spike rate at the cell's preferred (mean) phase over its anti-preferred (opposite) firing phase. For example, a PPC value of 0.01 corresponds to a 1.5 times greater spike rate at the preferred phase. We determined the frequency at which spike-LFP phase synchronization was significant by determining peaks in the PPC spectrum. A cell was said to synchronize to a particular frequency if the following criteria were met: (1) peaks had to be above a threshold of 0.005, (2) show a minimum prominence of 0.005, and (3) show a significant Rayleigh test (i.e., significant phase concentration). To test for inter-areal differences in spike-beta synchronization, we extracted the maximal significant/prominent PPC peak in the [10 25] Hz band for cells that showed significant encoding. For those encoding cells that did not show significant PPC peaks, we extracted the frequency of the maximal PPC in this band instead. We tested for differences in synchronization strength using a one-way ANOVA, and report on pairwise comparisons after multiple comparison correction. To determine if spikes falling on certain phases of the LFP were more informative, we re-ran the (reduced) GLM on phase-binned spikes, using only the previous and current outcomes (see “Latency analysis” above). We first aligned all spike-triggered phases to the circular mean of their distribution.
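The PPC effect size in Eq. 6 can be checked numerically; a sketch (valid for PPC < 0.25, where the denominator stays positive):

```python
import numpy as np

def ppc_effect_size(ppc):
    """Relative spike-rate increase at the preferred phase over the
    anti-preferred phase, from the pairwise phase consistency (Eq. 6)."""
    return (1 + 2 * np.sqrt(ppc)) / (1 - 2 * np.sqrt(ppc))
```

For PPC = 0.01 this gives (1 + 0.2)/(1 − 0.2) = 1.5, matching the worked example in the text.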
Phases were extracted from the frequency of the corresponding maximal peak in the [10 25] Hz band in the PPC. However, if spikes are phase locked to an LFP, the firing rate around the preferred phase will naturally be higher. Thus, we used non-equal bin sizes, adjusting the bin limits such that they had the same spike count. On average, the spike-count-equalized bins had a spike rate of 1.85 Hz (range: 1.80–1.86 Hz across bins). Phase bins with equalized spike counts were on average 7.5% larger for the bin around the non-preferred phase (spanning ~21% of the full cycle) than for the bin around the preferred phase (spanning ~13.5% of the full cycle). We then re-ran the GLM analysis on spikes falling within a particular bin and computed the encoding metrics as described previously. To aid in comparison, we also fit the model using randomly permuted phases (thus preserving the overall rate response structure). We ignored spike-LFP pairs where the GLM could not converge to a solution and threw a warning, or where the beta coefficients were above 20 (however, relaxing or tightening this constraint did not qualitatively change the results). To determine the phase and degree of phase-dependent encoding, we fit a cosine function to the phase-binned encoding values (illustrated in Fig. ) , . One encoding value was selected for each spike-LFP pair, on the basis of the cluster assignment of the spiking neuron. From this fit, we obtained three values: T (phase offset, or phase of the cosine maximum), A (amplitude), and M (overall mean, or offset). The value T is thus the phase at which encoding is maximal, relative to the preferred firing phase. To compare the strength of encoding across functional clusters, we computed the empirical phase-of-firing gain:

$$\mathrm{PFG}_E = \frac{2A}{M}.$$ (7)

This quantity represents the difference in encoding between the peak and trough relative to the overall encoding strength.
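The cosine fit and phase-of-firing gain (Eq. 7) can be sketched with a linear least-squares fit of M + A·cos(φ − T) via its cos/sin expansion (our parameterization; the authors' exact fitting routine is not specified):

```python
import numpy as np

def phase_of_firing_gain(bin_phases, enc_values):
    """Fit enc = M + A*cos(phase - T) to phase-binned encoding values and
    return (PFG_E, T), where PFG_E = 2A/M as in Eq. 7."""
    # Expand A*cos(phi - T) = a*cos(phi) + b*sin(phi), a = A*cos(T), b = A*sin(T)
    X = np.column_stack([np.ones_like(bin_phases),
                         np.cos(bin_phases),
                         np.sin(bin_phases)])
    m, a, b = np.linalg.lstsq(X, enc_values, rcond=None)[0]
    A = np.hypot(a, b)       # amplitude of the fitted cosine
    T = np.arctan2(b, a)     # phase of the cosine maximum
    return 2 * A / m, T
```

With six equally spaced phase bins, the three-parameter fit is heavily overdetermined, so noise in individual bins is averaged out.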
A PFG_E value of 0 implies that phase-of-firing adds no information (corresponding to a pure rate code), whereas PFG_E = 1 means that encoding between the peak and trough is 100% stronger compared to the overall encoding strength. To determine if phase significantly added information above that of a phase-blind rate code, we opted for a randomization approach. For each cell, we first permuted the phase label of each spike, re-ran the GLM, re-fit a cosine, and extracted the encoding phase-of-firing gain. This procedure was repeated 50 times, from which we obtained a distribution PFG_R of randomized encoding gains. For this procedure, because phase labels were permuted, the distribution of phases remains the same, and thus the bin widths need not be re-calculated. We report on the “excess” PFG, defined as the difference between the empirical and the median of the randomized phase-of-firing gain, which we refer to in the manuscript as the Encoding Phase-of-Firing Gain (EPFG) :

$$\mathrm{EPFG} = \mathrm{PFG}_E - \mathrm{median}(\mathrm{PFG}_R).$$ (8)

A positive value implies that encoding is modulated by phase above what would be expected by chance. To assess whether individual units showed significant encoding, we compared PFG_E against the null distribution PFG_R. Units were deemed significant at an alpha level of 0.05. The procedure described above destroys any within-trial correlation between spike phases. Thus, in a related analysis, we determined PFG_R by adding a random phase in the range [0 2π] to all spikes within a single trial, thus preserving their correlation structure. In this case, the phase bin widths were re-calculated for every randomization. The EPFG effectively quantifies the difference in mean firing rates between conditions, as a function of LFP phase. However, this does not necessarily imply that the information is easily decodable by other brain circuits.
To address this question, we asked how much variance can be explained by the model fit to data in each phase bin. To this end, for each fit on each phase bin, we extracted the percent deviance explained (analogous to the ANOVA percent variance explained but modified for a Poisson GLM). The percent deviance explained D² was calculated as :

$$D^2 = 1 - \frac{\mathrm{Residual\ Deviance}}{\mathrm{Null\ Deviance}}.$$ (9)

The deviance for a Poisson distribution is defined as:

$$\mathrm{Deviance} = 2\sum_i^n \left[ Y_i \log\!\left(\frac{Y_i}{\lambda_i}\right) - (Y_i - \lambda_i) \right],$$ (10)

where Y_i is the observed spike count on trial i, and λ_i is the predicted spike count. We then determined how D² varied as a function of phase using the same procedure as described above; namely, we fitted a cosine to the D² of each phase bin, extracted the amplitude and phase, and compared it to a null distribution where phases had been permuted. We call this quantity the Encoding Phase-of-Firing Gain (D²), or EPFG_D². We tested the stability of encoding across phase bins for each neuron (with significant rate encoding) by determining the sign of the encoding metric (i.e., before taking the absolute value). We found that for the vast majority of cell-LFP pairs (~90%), the sign of the encoding metric was the same for all 6/6 phase bins as for the full model. To test the frequency specificity of the EPFG, we extended the above analysis to the larger 6–60 Hz frequency range (Fig. ). We statistically tested the EPFG across frequencies using the Wilcoxon signed-rank test. To correct for multiple comparisons, we used a cluster-based permutation approach . First, we determined the largest cluster mass of contiguous significant samples ( p < 0.05). Next, we shuffled the empirical and randomized PFG_E and PFG_R across cell-LFP pairs to determine a randomized EPFG_R and re-calculated the largest cluster mass.
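The deviance computation in Eqs. 9–10 above can be sketched directly (using the observed mean count as the null model's prediction, which is an assumption on our part):

```python
import numpy as np

def poisson_deviance(y, mu):
    """Poisson deviance (Eq. 10), with the 0*log(0) = 0 convention."""
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    with np.errstate(divide='ignore', invalid='ignore'):
        term = y * np.log(y / mu)
    term = np.where(y > 0, term, 0.0)  # zero counts contribute only -(y - mu)
    return 2 * np.sum(term - (y - mu))

def percent_deviance_explained(y, mu):
    """D^2 = 1 - residual deviance / null deviance (Eq. 9)."""
    null_mu = np.full_like(np.asarray(y, float), np.mean(y))
    return 1 - poisson_deviance(y, mu) / poisson_deviance(y, null_mu)
```

A perfect model (predictions equal to the observed counts) gives D² = 1, and a constant-rate model gives D² = 0, mirroring the behavior of variance explained.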
We performed this procedure 200 times. Significant clusters were those whose mass exceeded that of the randomized distribution. We also tested the degree to which our results may be influenced by cue-aligned activity. To this end, we first obtained the average evoked potential for each LFP channel and subtracted this component from individual trials. We then performed all steps of the analysis again to compare the original EPFG with the EPFG free from potential cue-aligned biases. To test whether the preferred firing phase or relative phase with maximal encoding was concentrated above what would be expected by chance, we used the circular Hodges–Ajne test (Fig. ). To determine whether the phase showing maximal encoding differed from the preferred firing phase in each functional encoding cluster, we performed the Median test to test if the phase differed from zero (Fig. ). We tested how the strength of phase synchronization related to the strength of phase-of-firing encoding by performing two analyses. First, we compared encoding in cells that showed significant spike-phase synchronization to those that did not. For non-synchronizing cells, we selected the center frequency with the maximal PPC in the [10 25] Hz range, and computed the EPFG at this frequency. We compared EPFG between locking and non-locking populations using the Kruskal–Wallis test (Fig. ). Second, we asked whether spike-phase synchrony in different trial conditions contained similar information to that of the phase-of-firing. To this end, for each encoding cell, we compared trials that were predicted to have the maximal firing rate differences. For Outcome encoding, we compared correct versus error trials. For Reward Prediction Error encoding, we compared correct trials following errors versus error trials following correct trials. For Outcome History cells, we compared errors followed by errors versus correct outcomes followed by correct outcomes.
We took the absolute difference of the PPC between the two conditions and correlated it with the EPFG of the respective cell using the Spearman rank correlation. In a similar vein, we also tested whether the mean phase differed between the conditions outlined above. After extracting the mean phase per condition for each cell-LFP pair, we performed a bootstrap test to assess if the difference in phase between conditions differed from zero . We also tested whether phase gain depended on the number of bins used to fit the cosine function. We performed the analysis for 4, 6, 8, and 10 bins. We used Spearman rank correlation to determine if EPFG was related to the number of bins, and circular–linear correlation to associate the phase of maximal encoding with the number of bins . We tested for the presence of feature-specific phase-of-firing encoding for those cells clustered as RPE encoding. We calculated the PFG_E for each cell-LFP pair twice, using only trials from blocks where either color 1 or 2 was rewarded. We analyzed pairs with a minimum of 30 trials, and where the PFG_E was well-defined for both colors. A total of 102 pairs were thus selected. The average number of trials for color 1 was 136 ± 5.4, from an average of 3.15 ± 0.13 blocks. Color 2 analysis used 129 ± 4.85 trials from 3.07 ± 0.12 blocks. We then asked, for each color, whether the PFG_E was above chance (described above). PFG_E could be significant for neither color, one color (defined as feature-specific encoding), or both colors (non-feature-specific encoding). We tested whether the relative frequencies of non-encoding, feature-specific encoding, and non-feature-specific encoding differed from chance using a χ² test. We used the same test to determine if the proportion of feature-specific encoding differed between areas.
To determine if phase-modulated encoding of information differed based on cell type, we focused the following analysis on highly isolated single units that showed encoding of learning-relevant variables and significant, prominent spike-beta locking. Detailed information is provided in ref. . In brief, to distinguish putative interneurons (narrow-spiking) and putative pyramidal cells (broad-spiking) in LPFC and ACC, we analyzed the peak-to-trough duration and the time for repolarization for each neuron. After applying principal component analysis (PCA) using both measures, we used the first principal component to discriminate between narrow- and broad-spiking cells. This allowed for better discrimination than using either measure alone. We confirmed that a two-Gaussian model fit the data better than a one-Gaussian model using the Akaike and Bayesian Information Criterion (AIC, BIC). We then used the two-Gaussian model to define narrow- and broad-spiking populations. A similar analysis was applied to striatal units to distinguish putative interneurons from medium spiny projection neurons (MSN). Here, we used the peak width and the Initial Slope of Valley Decay (ISVD) :

$$\mathrm{ISVD} = \frac{V_T - V_{0.26}}{A_{PT}},$$ (11)

where V_T is the most negative value (trough) of the spike waveform, V_0.26 is the voltage at 0.26 ms after V_T, and A_PT is the peak-to-trough amplitude. After PCA and two-Gaussian modeling (as described above), we defined two cut-off points. The first cutoff was the point at which the likelihood of narrow-spiking cells was three times larger than the likelihood of broad-spiking cells, and vice-versa for the second cutoff. We compared differences in Encoding Phase-of-Firing Gain between narrow- and broad-spiking neurons using the Kruskal–Wallis test, independently for each area. To clarify, we analyzed spike-LFP pairs here; thus, the same neuron may be included more than once.
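The ISVD of Eq. 11 can be sketched as follows (a simplified implementation on a single mean waveform; nearest-sample lookup rather than interpolation is an assumption on our part):

```python
import numpy as np

def isvd(waveform, t, dt_post=0.26e-3):
    """Initial Slope of Valley Decay (Eq. 11): (V_T - V_0.26) / A_PT,
    where V_T is the trough voltage, V_0.26 the voltage 0.26 ms later,
    and A_PT the peak-to-trough amplitude. `t` is the time axis in seconds."""
    i_trough = np.argmin(waveform)
    v_t = waveform[i_trough]
    # nearest sample 0.26 ms after the trough
    i_post = np.argmin(np.abs(t - (t[i_trough] + dt_post)))
    v_026 = waveform[i_post]
    a_pt = np.max(waveform) - v_t
    return (v_t - v_026) / a_pt
```

Because V_0.26 lies on the repolarizing flank above the trough, the numerator is negative, with faster-repolarizing waveforms giving more strongly negative values.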
We assessed how the sites we analyzed were related to the temporal evolution of the LFP in two ways: first, by assessing how the LFP power and phase changed with stimulus onset; and second, by assessing how encoding changed as a function of periods of particularly high or low beta power. We determined how power and phase were distributed relative to the stimulus or reward onset. As for the spike-aligned analysis, we decomposed the LFP via the Fourier transform after Hanning tapering. We determined the spectral content in the 6–60 Hz frequency window, over [−2 2] s, stepped every 5 ms. Power was taken as the squared magnitude of the spectral representation. Power was normalized for 1/f noise. To determine the spectral peak across sites and epochs, we z-score normalized the power across all time points and epochs for each LFP individually. We report on the median of this normalized quantity. To determine if there was evidence of phase resetting, we performed, for each LFP, a Rayleigh test at every point in time for every frequency, and extracted the Z statistic. We report on the median Rayleigh Z value, with higher values related to a greater phase consistency across trials. Finally, we assessed how encoding varied during burst periods , . We took an approach conceptually similar to Lundqvist and colleagues . For each LFP channel, we first normalized the power for 1/f noise. Next, we averaged this signal in the same [10 25] Hz window we used for the spike-aligned analysis above. Following this, we z-scored beta power within each trial individually. Bursts were defined as periods where the normalized power exceeded 1.5 SD for a minimum of 3 cycles (=45 ms). The burst proportion was defined as the mean across trials at each point in time. To assess how encoding varied as a function of burst periods, we separately selected spikes that either occurred within burst periods or outside of burst periods, before calculating the EPFG as before.
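The burst criterion above (within-trial z-scored beta power exceeding 1.5 SD for at least 45 ms) can be sketched for a single trial; the threshold-run logic is our own minimal implementation:

```python
import numpy as np

def burst_mask(power, fs, z_thresh=1.5, min_dur_s=0.045):
    """Boolean mask of burst samples: z-score band power within the trial,
    then keep only supra-threshold runs lasting at least `min_dur_s`."""
    z = (power - power.mean()) / power.std()
    above = z > z_thresh
    mask = np.zeros_like(above)
    min_len = int(round(min_dur_s * fs))
    i, n = 0, len(above)
    while i < n:
        if above[i]:
            j = i
            while j < n and above[j]:
                j += 1                  # extend to the end of the run
            if j - i >= min_len:        # run long enough to count as a burst
                mask[i:j] = True
            i = j
        else:
            i += 1
    return mask
```

Spikes can then be split into within-burst and outside-burst sets by indexing their sample times into this mask before recomputing the EPFG.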
We only analyzed cell-LFP pairs with a minimum of 30 spikes after this selection.
Pharmacogenomics of Clozapine-induced agranulocytosis: a systematic review and meta-analysis
Schizophrenia is a debilitating condition that affects as many as 20 million people worldwide . Approximately 20–30% of these individuals experience treatment-resistant schizophrenia (TRS), which is characterized by ongoing psychotic symptoms and functional impairments despite adequate trials with different antipsychotic medications . At present, clozapine remains the standard treatment of choice for TRS recommended by international guidelines due to its superior efficacy compared to other existing antipsychotics . Despite the abundance of robust evidence supporting the effectiveness of clozapine in improving outcomes for TRS patients, clozapine is underutilized due to concerns about tolerability and monitoring , and its initiation is commonly delayed for several years in many countries worldwide, including in the USA and Canada . Studies have even suggested that the utilization of clozapine earlier in treatment, rather than waiting for multiple drug failures and subsequent severe TRS, results in better response . Further, initiation of clozapine has been shown to reduce healthcare costs by decreasing the number of hospitalizations and shifting care from inpatient to outpatient . The reasons for underuse and delay in clozapine initiation could be attributed to several factors, including highly variable and difficult to predict clinical outcomes. For example, roughly 40–70% of patients on clozapine experience persistent symptoms and remain treatment-resistant . Further, side effects in patients taking clozapine vary greatly, ranging from none or mild to life-threatening side effects . Particularly of concern is the development of clozapine-induced agranulocytosis (CIA), which is defined as an absolute neutrophil count (ANC) < 500 cells/mm 3 . CIA is a severe and potentially fatal neutropenia with an overall prevalence of 0.4% (95% CI: 0.3%, 0.6%) and fatality rate of 0.05% (95% CI: 0.03%, 0.09%) . 
The World Health Organization’s (WHO) Pharmacovigilance global database, VigiBase, containing more than 140,000 clinician reports of clozapine adverse drug reactions (ADRs) classified in over 5,000 ADR categories, showed that the “broad agranulocytosis” category is the third major cause of fatal outcomes after “broad pneumonia” and “sudden death and cardiac arrests” . Although CIA is a rare hematological condition that represents only 2% of reported fatal outcomes within the VigiBase database , the U.S. Food and Drug Administration (FDA) along with the majority of global health authorities have mandated that patients taking clozapine receive regular blood draws to monitor neutrophil count. These authorities also require enrollment in the Clozapine Risk Evaluation and Mitigation Strategy (REMS) Program in order to reduce the risk of clozapine-induced neutropenia. Existing strategies for regular long-term hematological monitoring in patients taking clozapine have been previously criticized for not being cost-effective, especially given that roughly 80% of CIA cases occur within 18 weeks of clozapine initiation, and after one year of clozapine treatment, incidence of CIA decreases to 0.07% or less . One study reported that frequent and long-term monitoring of white blood cell counts increased quality-adjusted survival by less than one day per patient and was more costly compared to no monitoring . The additional physical burden of regular blood monitoring and its related costs further discourages both patients and clinicians from choosing clozapine and may, in part, account for the suboptimal use of clozapine in clinical practice. To broaden the usage of clozapine and improve outcomes for patients with TRS, researchers have focused on identifying predictive biomarkers for CIA that could be used to identify individuals most at risk. Although the pathophysiology of CIA remains unclear, twin studies have indicated a genetic component contributing to its development . 
Numerous genetic association studies have been performed to identify genetic factors that increase the susceptibility to CIA. This is because the discovery of reliable genetic markers for CIA could contribute to the development of a predictive pharmacogenomic test to stratify patients based on risk. Patients identified as low risk will be less susceptible to developing CIA and can safely use clozapine with relaxed hematological monitoring, whereas those identified as high risk for developing CIA can undergo close, routine hematological monitoring or be considered for alternate treatments . The development of such a clinical decision-making tool could minimize the risk and incidence of CIA, reduce costs associated with frequent hematological monitoring, and optimize treatment outcomes for TRS patients. Therefore, the purpose of the current review and meta-analysis is to review existing pharmacogenomic studies for CIA in patients with TRS , conduct meta-analyses on alleles reported to be associated with CIA, and discuss the development of a predictive pharmacogenomic test based on alleles significantly associated with CIA for use within clinical practice.

Search strategy

Using PRISMA guidelines, a systematic literature search was performed using PubMed from database inception date to April 2021 . The following Boolean search string was used: (clozapine AND agranulocytosis). Only peer-reviewed articles published in English and on human participants were considered. As such, two PubMed filters, “Species: Humans” and “Languages: English”, were applied.

Eligibility criteria and study selection

Included studies were those that compared genetic distributions (reported as variant carrier status) among patients who developed CIA (ANC < 500/μL; cases) against patients who were tolerant to clozapine (i.e., did not develop CIA; controls).
Not all studies specified the duration of clozapine treatment or dosage in both cases and controls; therefore, there were no restrictions on duration or dose of clozapine treatment. Further, no age restrictions were implemented. Studies with case-control pairings as described above were included. Case studies, conference proceedings, letters to the editor, narratives, meta-analyses, posters, and systematic reviews were not considered for quantitative analysis, but were mentioned in the text if applicable. Studies which did not report genotypes or carrier status were not considered.

Data collection process

Data items collected included the ethnicity or ancestry of subjects, the genetic variant(s) studied, the number of cases who carried at least one copy of the genetic variant, the number of cases who carried no copies of the genetic variant, the number of controls who carried at least one copy of the genetic variant, and the number of controls who carried no copies of the genetic variant. All data items were collected as displayed in each article or its supplementary information.

Statistical analysis

Principal summary measures included odds ratio (OR), 95% confidence interval (CI), z-score, sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), number needed to genotype, and p value. NPV was corrected for the prevalence of CIA (prevalence = 0.91%). Haldane correction was used as necessary for the calculation of OR summary measures . Meta-analyses were performed using Review Manager 5.3 (RevMan 5.3, The Cochrane Collaboration), using dichotomous Mantel-Haenszel OR measures with random effects. A random effects model was used under the assumption that the various ancestral populations in the included studies would introduce heterogeneity due to varying patterns of linkage disequilibrium (LD). LD analysis was performed using haplotype and allele frequency data extracted from http://www.allelefrequencies.net .
The calculation of LD statistics was performed as described by Slatkin . Using a bone marrow registry for a Polish population ( http://www.allelefrequencies.net/pop6001c.asp?pop_id=3670 ), LD statistics were calculated for HLA-DRB1 *04:02 and HLA-DQB1 *05:02. Bias was not assessed in individual studies; however, findings were corrected for multiple testing using Bonferroni correction and meta-analyses were assessed for heterogeneity using I 2 .
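The carrier-based odds ratio with Haldane's correction described in the Methods can be sketched as follows (the add-0.5-to-every-cell form of the correction and the Wald-type 95% CI are standard choices, assumed here rather than taken from the paper):

```python
import math

def carrier_odds_ratio(case_carrier, case_non, ctrl_carrier, ctrl_non):
    """2x2 carrier-status odds ratio with a 95% Wald CI; Haldane's
    correction (add 0.5 to every cell) is applied when any cell is zero."""
    a, b, c, d = case_carrier, case_non, ctrl_carrier, ctrl_non
    if 0 in (a, b, c, d):
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

Without the correction, a study in which no control carries the variant would yield an undefined (infinite) odds ratio; adding 0.5 to each cell keeps the estimate finite at the cost of a small bias toward the null.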
A total of 686 studies were identified, with 661 excluded following the screening of titles and abstracts.
After screening and removal of duplicates, 21 studies met the selection criteria for being included into the literature review, and 13 of the 21 studies qualified for inclusion into the meta-analysis. The PRISMA flowchart with details of the search yield is shown in Fig. . The characteristics of the included studies in the meta-analysis are summarized in Table . All of the studies included individuals receiving clozapine treatment who demonstrated CIA defined as an ANC < 500/mm 3 (i.e., <0.5 × 10 9 /L or <500/μL), and comparison or control participants who had not developed any hematotoxic reactions to clozapine. The mean daily dosage of clozapine was 417.0 ± 144.8 mg/d and 482.4 ± 159.0 mg/d for the CIA and comparison groups, respectively, for studies that reported these data. Of the 13 studies included in the meta-analysis, two were genome-wide association studies (GWAS) and the rest were candidate gene studies conducted in different populations, including Ashkenazi Jewish , Europeans , Japanese , and others . Of the 13 studies, eight (61.5%) included only non-Jewish European, three (23.1%) included only Jewish, one (7.7%) included a mix of non-Jewish and Jewish Europeans, and one (7.7%) included Japanese samples. One study included participants with diagnoses other than schizophrenia or schizoaffective disorder . Tables and summarize the findings for fifty-three alleles and seven haplotypes, respectively, of individual studies for which no previous replication was found (i.e., these studies reported on allelic markers which have not been investigated in other independent studies). Twelve additional alleles and one additional haplotype were evaluated in at least two studies and were analyzed via meta-analysis shown in Table . Therefore, Bonferroni correction for multiple testing (m = 73) was applied to each of the 73-total analyses. 
After correction for multiple testing, four of the non-replicated alleles remained significant predictors of CIA, including TNFb5 (OR = 0.08; 95% CI 0.04, 0.20; p c = 1.64 × 10 −6 ), HLA-B *59:01 (OR = 7.21; 95% CI 3.56, 14.61; p c = 3.06 × 10 −6 ), TNFb4 (OR = 7.69; 95% CI 3.55, 16.65; p c = 1.71 × 10 −5 ), and TNFd3 (OR = 4.61; 95% CI 2.17, 9.82; p c = 5.23 × 10 −3 ). None of the non-replicated haplotypes were significant predictors of CIA after Bonferroni correction. After correction for multiple testing (m = 73), one of the meta-analyzed alleles (Table ) remained a significant predictor of CIA, HLA-DRB1 *04:02 (OR = 5.89; 95% CI 2.20, 15.80; p c = 0.03). The sensitivity and the specificity values of the HLA-DRB1 *04:02 allele for prediction of CIA were 26.0% and 94.0%, respectively. The PPV and NPV were estimated to be 3.8% and 99.3%, respectively. The number of new clozapine users needed to genotype to prevent one case of CIA is three in individuals of European ancestry, which may vary in other ancestral groups. Forest plots for each meta-analysis are available (Fig. ). Additional alleles have reached genome-wide significance, such as rs149104283 , rs3129891 , rs41549217 , and HLA-B 158 T . However, HLA-B 158 T was not found to be significantly associated with CIA in a second study . We systematically summarized and quantified available evidence on genetic variants contributing to CIA and conducted several meta-analyses. We found that one genetic variant within the human leukocyte antigen ( HLA ) locus (major histocompatibility complex [MHC] in humans) was significantly associated with CIA after correction for multiple testing. Specifically, individuals carrying the HLA-DRB1 *04:02 allele had nearly sixfold (95% CI 2.20, 15.80) increased odds of CIA. For this variant, the probability that CIA was not present in individuals without the HLA-DRB1 *04:02 allele (i.e., NPV) was 99.3%, corrected for the prevalence of CIA in the USA.
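The prevalence-corrected predictive values reported here follow from the standard Bayes formulas; a sketch (assuming these are the formulas used for the correction):

```python
def predictive_values(sens, spec, prev):
    """Prevalence-adjusted positive and negative predictive values,
    computed from sensitivity, specificity, and disease prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv
```

With a sensitivity of 26%, specificity of 94%, and a CIA prevalence of 0.91%, this reproduces the reported PPV of ~3.8% and NPV of ~99.3%.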
A high NPV indicates potential clinical utility of the HLA-DRB1 *04:02 allele in stratifying patients based on risk of developing CIA, with those that are low risk (i.e., non-carriers of the variant) being suitable candidates for clozapine treatment with a relaxed hematological monitoring schedule and those that are not low risk (i.e., carriers of the variant) monitored more closely while on clozapine or considered for alternative treatment options. HLA-DRB1 *04:02 genotyping prior to the initiation of clozapine, if clinically implemented, would not be the first HLA predictive test for assessing risk of drug-related adverse reactions. Currently, the U.S. FDA recommends prospective screening for specific HLA alleles that are strongly associated with hypersensitivity reactions to carbamazepine, abacavir, and allopurinol prior to their initiation in populations where the allele is common . In comparison to these existing predictive tests, the NPV of HLA-DRB1 *04:02 (99.3%) is higher than the NPV of HLA-B *15:11 and HLA-B*57:01 genotyping for carbamazepine (98.9%) and abacavir (82%) hypersensitivity reactions, respectively, and about the same as the HLA-B*58:01 test (99.0%) for allopurinol-induced severe cutaneous adverse reactions (SCARs) . Although the PPV of HLA-DRB1 *04:02 was low (3.8%), indicating a weak ability to correctly identify individuals who did indeed develop CIA and to avoid false positives, it is higher than the PPV of HLA-B *15:11 (1.0%) genotyping for carbamazepine-induced SCARs . Given that CIA is potentially fatal, the low PPV of HLA-DRB1 *04:02 genotyping is greatly outweighed by the very high NPV. Furthermore, the sensitivity of HLA-DRB1 *04:02 for the prediction of CIA is higher than that of the HLA-A *31:01 (23%) and HLA-B *15:11 (14%) alleles for the prediction of carbamazepine-induced SCARs, but lower than that of the HLA-B *58:01 (100%) and HLA-B*57:01 (51%) alleles for the prediction of allopurinol- and abacavir-induced hypersensitivity, respectively.
The specificity of HLA-DRB1*04:02 (94%) was slightly lower than that of HLA-A*31:01 (95%) and HLA-B*15:02 (99%) for carbamazepine and of HLA-B*57:01 (96%) for abacavir hypersensitivity, but higher than that of the HLA-B*58:01 allele (88%) for allopurinol-induced SCARs. Therefore, the predictive value and validity of an HLA-DRB1*04:02 screening test for assessing patient risk of CIA are comparable to those of existing HLA predictive tests currently used clinically. Although HLA-DRB1*04:02 appears to be a promising biomarker for a predictive pharmacogenomic test for CIA based on our results, it is important to note that the clinical utility of a SNP-based predictive test will differ across populations, since allele frequencies and patterns of LD in associated regions vary substantially between ancestral groups. Given that the two studies included in the meta-analysis of HLA-DRB1*04:02 were conducted in small non-Jewish German (CIA = 30, controls = 77) and Ashkenazi Jewish (CIA = 12, controls = 18) samples, little is known about the predictive value of the HLA-DRB1*04:02 allele in other European and non-European populations. The HLA-DRB1*04 allele of the HLA-DRB1 gene, encoding the polymorphic beta chain of the HLA-DR antigen-binding cell surface receptor, has been reported to be less frequent in Sub-Saharan Africans (0.022) compared to Europeans (0.177), Native Americans of North America (0.496), Oceanians (0.087), and Southeast Asians (0.129). HLA-DRB1*04:02 represents the second HLA-DRB1*04 allele within the serologically defined HLA-DR4 antigen family. Data on the allele frequency of HLA-DRB1*04:02 across populations are currently lacking. Furthermore, certain alleles within the HLA region are inherited in tight clusters as conserved haplotypes, which often vary among population groups. 
This means that the HLA-DRB1*04:02 allele may be in LD with other variants in a pattern specific to the study population, and may therefore not show an association with CIA in other ancestral groups. The association between the HLA-DRB1*04:02 allele and CIA susceptibility therefore warrants investigation in different ancestral groups in future studies for an HLA-DRB1*04:02-based predictive test for CIA to be relevant across populations. Our search identified three GWAS on CIA. Of these, two included the same sample of patients from the Clozapine-Induced Agranulocytosis Consortium (CIAC) (n = 161 CIA patients). Given that the statistical analysis assumes independent samples, only the more recent of the two was included in the meta-analysis, to avoid inflated Type 1 error and biased effect estimates. The GWAS by Goldstein et al. (2014) showed a significant association of two MHC loci with CIA, HLA-DQB1 (126Q) (OR = 0.19, 95% CI 0.12–0.29) and HLA-B (158T) (OR = 3.3, 95% CI 2.3–4.9). HLA-DQB1 (126Q) is in strong LD with HLA-DQB1*05:02, the most common HLA high-risk allele for CIA, and with HLA-DQB1 6672 G > C, also previously reported to be associated with risk of CIA. Results from our meta-analysis showed that HLA-DQB1*05:02 (OR = 7.12, 95% CI 1.91–26.51) was significantly associated with CIA before Bonferroni corrections were applied (Table ). Furthermore, Legge et al. (2017) and Konte et al. (2021) provided independent replications of the association between HLA-DQB1 6672 G > C and CIA risk in individuals of European ancestry. These results implicate HLA-DQB1 in the pathophysiology of CIA and indicate that polymorphisms within this gene may be associated with risk of CIA in European populations. In the GWAS by Legge et al. 
(2017), an association of the chromosome 12p12.2 locus with CIA was reported in a sample of European patients, with the top SNP being rs149104283 (OR = 4.32, p = 1.79 × 10⁻⁸), which is intronic to transcripts of the solute carrier organic anion transporter genes SLCO1B3 and SLCO1B7. A replication analysis was conducted by Saito et al. (2017) in a Japanese sample as part of the Clozapine Pharmacogenomics Consortium of Japan, which found no significant association of 12p12.2 with CIA. Instead, in their GWAS, Saito et al. (2016) identified HLA-B*59:01 (OR = 10.7, 95% CI 4.8–22.4) as a risk factor for CIA in a sample of Japanese patients with schizophrenia (CIA = 50, controls = 2905). A combined GWAS meta-analysis in a sample of patients of Chinese ancestry identified a nominal association between rs11753309 near HLA-B and clozapine-induced neutropenia; however, this GWAS was not included in the meta-analysis as its results are not specific to CIA. Taken together, these GWAS findings demonstrate that risk alleles for CIA may vary by ancestral group, given that some variants lie in variation-rich genomic regions with large differences in allele frequencies across populations, and that population-specific recombination sites contribute to the high diversity of haplotypes further influencing CIA risk. Several well-known alleles and genetic variants are localized within the MHC region and show LD; this makes it difficult to determine whether an association between a specific HLA allele and CIA represents a true genetic association or whether the allele is merely in LD with, or located close to, the true causative gene. Goldstein et al. (2014) found that HLA-DRB1*04:02 and HLA-DQB1*05:02 were not in strong LD according to r², yet the D' between these two alleles may be quite high. 
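The distinction between r² and D' drawn above can be made concrete with a small worked example; the allele and haplotype frequencies below are purely illustrative and not taken from any of the cited datasets:

```python
# Sketch: LD statistics (D, D', r^2) for two biallelic markers, computed
# from allele and haplotype frequencies. Input frequencies are illustrative.

def ld_stats(p_a: float, p_b: float, p_ab: float):
    """p_a, p_b: frequencies of alleles A and B; p_ab: frequency of the A-B haplotype."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = d / d_max
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

# Two rare alleles that always co-occur on one haplotype:
d_prime, r2 = ld_stats(p_a=0.02, p_b=0.05, p_ab=0.02)
print(f"D' = {d_prime:.2f}, r^2 = {r2:.2f}")
```

With these inputs D' is 1.0 while r² is only about 0.39: the alleles are far from strong LD by r², yet D' is maximal, which is the pattern described for HLA-DRB1*04:02 and HLA-DQB1*05:02.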
Limited haplotype frequency data are available for HLA-DRB1*04:02 and HLA-DQB1*05:02; one exception is a bone marrow registry for a Polish population, from which LD statistics were calculated. Given a sufficiently high D', the association between HLA-DRB1*04:02 and CIA may not be independent of that of HLA-DQB1*05:02, making it difficult to determine the respective contribution of each allele to predisposition to CIA. Further analysis is therefore required to confirm this in other populations, especially considering the rarity of this haplotype in the Polish population. Previously unreplicated alleles TNFb5 (OR = 0.08; 95% CI 0.04–0.20), TNFb4 (OR = 7.69; 95% CI 3.55–16.65), and TNFd3 (OR = 4.61; 95% CI 2.17–9.82) showed significant associations with CIA after correction for multiple testing. The TNF microsatellites d3 and b4 were associated with increased susceptibility to CIA, while microsatellite b5 showed a protective effect in both Jewish and non-Jewish individuals with schizophrenia. The TNF genes are immune-regulating non-HLA genes in the MHC region, located between the complement cluster region and the HLA-B gene (a segment more recently demarcated as the Class IV region), and have shown LD with HLA-B and HLA-DR alleles. As a result, for the studies noted above reporting associations of TNF or HLA-B alleles with CIA, the strong LD between these genes complicates unraveling the relative contributions of genetic variation at the TNF locus versus the HLA-B locus. The limitations of the present study include the lack of a formal publication bias analysis, which may lead to overestimation of the true effect sizes, since studies with larger effects are more likely to be published and thus included in the meta-analysis. To reduce the effects of publication bias, we performed a comprehensive search to identify all relevant published literature. An additional limitation is that we cannot confirm with certainty that the sample in the GWAS conducted by Legge et al. 
(2016) does not overlap with the samples in the other studies included in this meta-analysis. The GWAS by Saito et al. (2016) is the only study conducted in patients of Japanese ancestry, so it can safely be assumed not to overlap with the other studies, which were conducted in patients of European ancestry (Table ). Finally, translating findings from pharmacogenomic investigations such as the present study into clinical practice may not necessarily increase the use of clozapine to treat TRS patients; it may instead deter its use, as was the case with carbamazepine, which was prescribed substantially less often to patients with epilepsy and bipolar disorder following the introduction of HLA-B*15:02 screening in Taiwan to prevent carbamazepine-related SCARs. Currently, there is no predictive test for CIA. For a predictive test to be useful in clinical practice, it should reliably identify non-carriers of the risk allele as individuals at low risk of developing CIA, such that hematological monitoring is either unnecessary or reduced in frequency. The NPV, the proportion of patients with a negative test (i.e., non-carriers of the risk allele) who truly do not have the condition, is a useful indicator of the clinical usefulness of a predictive test. Test results with a high NPV are valuable to clinicians when considering treatments that can be unnecessary, costly, or even risky, such as clozapine pharmacotherapy. Based on the results of the meta-analyses, the HLA-DRB1*04:02 variant demonstrates potential for pharmacogenomic prediction of CIA in clinical practice, with a high NPV (99.3%). Estimates of the NPV, PPV, sensitivity, and specificity of HLA-DRB1*04:02 genotyping for CIA risk assessment are comparable to those of existing HLA screening tests for drug-induced hypersensitivity reactions that are used clinically. 
However, since allele frequencies and haplotypes vary substantially across ancestral groups, further research is needed to investigate the association between the HLA-DRB1*04:02 allele and CIA susceptibility in different populations. Furthermore, the results of the meta-analysis indicate that immunogenetic variation, particularly within the MHC genomic region, may be involved in the pathophysiology of CIA; this region is therefore a potential target for identifying further genetic markers of CIA. Pharmacogenomic investigations to date suggest the involvement of multiple genetic variants with varying levels of impact on CIA. Further research is therefore necessary to identify reliable and reproducible genetic variants with large effects on CIA in diverse populations that can be incorporated into a predictive pharmacogenomic test. Clinical application of predictive pharmacogenomic tests with a high NPV may increase the safe utilization of clozapine, decrease the costs associated with regular long-term hematologic monitoring, and, most importantly, improve patient outcomes.
Remote supervision for simulated cataract surgery
f8e83a9d-c89b-41e8-866f-6a0b87b03e60
8236745
Ophthalmology[mh]
Simulation in cataract surgery is established as a validated tool in developing surgical ability and reducing complications. Supervision of trainee surgeons is fundamental to ensure the development of correct techniques and to prevent bad habits from being formed. This principle extends to simulated cataract surgery using the Eyesi surgical simulator (VRmagic Holding AG, Mannheim, Germany). The use of Eyesi surgical simulators has been associated with a significant decrease in the rate of posterior capsular rupture amongst trainee surgeons. It has been reported that access to Eyesi surgical simulators is highly varied, and trainees often need to travel to other hospitals in order to use a simulator. We present a novel method of interfacing teleconferencing software with the Eyesi surgical simulator that allows trainees to be remotely supervised by their trainers from anywhere in the world (Fig. ). The technical configuration consists of a short series of widely available and low-cost adapters. The analogue video from the Eyesi is converted to a digital signal and then split into two feeds. One feed goes back towards the Eyesi after being converted back into an analogue signal. The other feed is output as a digital video signal through a USB connection, which is recognised by computers as a webcam. This gives users the flexibility to stream the audio and video from the Eyesi surgical simulator in real time via a teleconferencing application of their choice in order to receive remote supervision. Video output and touch screen controls of the Eyesi's monitor are retained as normal. Configuration is required once only, after which the cables, except for the USB connector, can be discreetly hidden away with ease. A detailed overview of this system, including how to set it up in one's own department and what equipment is required, is given in Supplementary Video 1. 
In our department we chose to further enhance remote supervision by joining teleconferencing calls with an additional mobile phone whose camera was pointed towards the trainee and their hands. This was mounted on top of the Eyesi's monitor. It allowed the supervisor to give additional feedback about positioning and posture, as well as allowing the trainee to converse more naturally with their trainer. As a result of the COVID-19 pandemic and the restrictions related to social distancing, trainees are unlikely to be able to have face-to-face supervision when using the Eyesi surgical simulator. Our method of remote supervision offers trainees the option of being supervised by their own local trainers even if they need to travel to another hospital to use a simulator. Remote and internet-based surgical supervision democratises training by connecting experts with junior surgeons, potentially from all over the world, to ensure the highest standard of training. This will ultimately lead to the best possible outcomes for patients.
Typical emergencies in otorhinolaryngology – a single-center evaluation of the seasonal course
ef2f5fc7-0ab8-4395-bf76-843fe4781ce3
9164187
Otolaryngology[mh]
The incidence of acute diagnoses in otorhinolaryngology (ENT) is influenced by multiple parameters. These include individual patient behavior, comorbidities, and general circumstances such as temporal or local risk factors. One such circumstance is the season, which, through temperature changes and correspondingly adapted leisure activities, leads to an increased occurrence of certain diagnoses. Epistaxis, otitis externa, otitis media, and trauma are among the most frequent emergency diagnoses in ENT medicine. Many acute ENT diagnoses are attributed to the cold season, since, for example, viral and bacterial infections of the upper airways not only prompt acute medical presentations in their own right but can also lead secondarily to otitis media or epistaxis. Previous studies have also discussed an association between abscesses of the oral cavity and pharynx and meteorological factors. A retrospective Brazilian study, for example, postulated that peritonsillar abscesses occur more frequently in the warmer months. Likewise, a German study found evidence of an association between the incidence of odontogenic abscesses and outdoor temperature. By contrast, two retrospective German studies found no statistical association between odontogenic abscesses and meteorological parameters such as temperature, atmospheric pressure, and relative humidity. The influence of individual diagnoses on one another is a further point of discussion. One example is the relationship between acute tonsillitis and peritonsillar abscess: the hypothesis that a peritonsillar abscess arises as a complication of acute tonsillitis is contrasted with that of abscess formation due to obstruction of the duct of Weber's glands at the upper tonsillar pole. 
Acute viral rhinosinusitis is also among the most frequent ENT emergency presentations and influences many other ENT diagnoses. Despite its seemingly banal nature, the condition should not be underestimated. A Swedish questionnaire study with more than 1000 evaluated forms showed a substantial societal impact of acute rhinitis, with an average of 5.1 illness-related absence days per citizen per year. Although the course is usually mild, complications in the form of intraorbital or intracranial spread of infection can occur. Further frequent complications are otitis media, caused by pathogens ascending through the Eustachian tube, and its sequelae. Epistaxis is also considered a possible complication of an upper respiratory tract infection. A season-dependent increase in epistaxis is supported by many studies, although some investigations found no statistically significant association between the occurrence of epistaxis and meteorological influences. The diagnosis of acute otitis externa, by contrast, is clearly associated with warmth and high humidity; its incidence is markedly higher in tropical regions than in more temperate climate zones. Predisposing factors range from regular swimming and underlying dermatological conditions to local trauma. Another frequent ENT emergency diagnosis is fracture of the nasal pyramid, which occurs significantly more often in men aged 19–29 years. The most common causes include traffic accidents, physical altercations, and falls, with the respective frequencies differing between continents and age groups. Strictly speaking, a medical emergency is defined as an acute, life-threatening clinical condition. 
Since in everyday clinical practice the term "emergency" often denotes patients who, because of acute complaints, present to a medical facility outside regular opening hours or without an actual appointment, the term "emergency" is also used in this article for the diagnoses examined. Case numbers in emergency departments are rising continuously. A position paper on reforming emergency care in German emergency departments speaks of annual increases in case numbers of 4–8%. Reasons cited include the increasing multimorbidity of the population and the reduction of medical care structures. One aim of our retrospective study was to investigate further general factors at a university hospital in southern Germany that lead to an increased incidence of individual acute diagnoses. A particular focus was the seasonal course of emergency presentations and the relationship of individual diagnoses to one another. The following hypotheses regarding the emergency diagnoses were examined in the present study:
- Otitis externa occurs more frequently in association with the local summer holidays.
- Emergency diagnoses of the inner nose, the nasopharyngeal system, and the middle ear (otitis media, acute rhinosinusitis, and epistaxis) occur more frequently in the colder half of the year.
- Inflammatory emergencies of the tonsils (acute tonsillitis, peritonsillar abscess) occur more frequently in the colder season.
- The occurrence of the diagnoses peritonsillar abscess and acute tonsillitis correlates.
- The diagnosis of acute nasal pyramid fracture occurs more frequently in weeks containing a public holiday.
A digital evaluation of emergency diagnoses in otorhinolaryngology was performed. 
The evaluation was based on ICD-10 ("International Statistical Classification of Diseases and Related Health Problems") diagnosis codes using the SAP-based hospital information system i.s.h.med® (Cerner Corporation, North Kansas City, MO, USA). The evaluated period covered six years (January 1, 2013 – December 31, 2018). All outpatient and inpatient cases documented in i.s.h.med® with the diagnoses listed below were evaluated. Frequent emergency diagnoses were deliberately selected that were of interest with regard to the stated hypotheses and their seasonal course, and that can be diagnosed correctly on the clinical picture alone, without additional instrument-based diagnostics, which are not always available in the emergency service. Accordingly, the frequencies can only be compared among themselves and on the basis of absolute numbers, not in a general comparison with an analysis of all emergency diagnoses of an ENT department. The data come from a university hospital in southern Germany with a catchment area of approximately 100 km radius. The following diagnoses were included:
- H61.2 Impacted cerumen
- H60.3 Otitis externa
- H66.0 Acute otitis media
- J03.- Acute tonsillitis
- J36 Peritonsillar abscess
- R04.0 Epistaxis
- J01.- Acute sinusitis
- S02.2 Nasal pyramid fracture
The main diagnosis recorded at the time of each patient presentation was evaluated. A primarily descriptive analysis was performed. Presentations were assigned to the individual calendar weeks of the year, and sums, medians, and means were calculated. Graphs were created using Microsoft Excel and GraphPad Prism, and statistical analysis was performed with GraphPad Prism. Correlation analyses regarding temporal factors such as school holidays and public holidays were performed using Spearman's nonparametric correlation analysis. 
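The assignment of cases to calendar weeks and the seasonal split used later in the analysis can be sketched as follows; the ICD-10 codes match the study's list, but the case dates are invented illustrative data:

```python
# Sketch: binning emergency presentations into ISO calendar weeks, as in
# the analysis described above. Dates are invented illustrative examples.
from collections import Counter
from datetime import date

cases = [
    ("R04.0", date(2013, 1, 3)),   # epistaxis
    ("H60.3", date(2013, 7, 30)),  # otitis externa
    ("H60.3", date(2013, 8, 1)),
    ("S02.2", date(2013, 12, 27)), # nasal pyramid fracture
]

# Count cases per (ICD-10 code, ISO calendar week)
weekly = Counter((icd, d.isocalendar()[1]) for icd, d in cases)

# Seasonal split used in the study: weeks 1-14 and 41-52 = colder half-year
def is_cold_season(week: int) -> bool:
    return week <= 14 or week >= 41

print(weekly)
```

Note that ISO week numbering (as returned by `isocalendar()`) can assign the last days of December to week 1 of the following year; any real analysis has to decide how to treat such boundary dates.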
Significance analyses were performed using the two-sided Mann-Whitney U test. In general, a statistically significant association was assumed at a p-value of ≤ 0.05. A total of 32,968 cases were evaluated. The results are listed below by frequency in absolute numbers; the percentage distribution is shown in Fig. :
- Epistaxis (8082 cases)
- Otitis media (5918 cases)
- Otitis externa (4781 cases)
- Acute tonsillitis (3998 cases)
- Nasal pyramid fracture (3313 cases)
- Impacted cerumen (2451 cases)
- Peritonsillar abscess (2411 cases)
- Acute rhinosinusitis (2014 cases)
All cases were summed as absolute numbers and assigned to the respective calendar weeks, yielding the course of cases over the year (Fig. ). In sum, calendar week 38 had the fewest emergency presentations for the diagnoses listed, and calendar week 52 the most.

Emergencies of the external ear
The diagnoses impacted cerumen and otitis externa were likewise analyzed by calendar week over the year. Otitis externa showed a marked increase in cases during the summer months. Significance testing with the two-sided Mann-Whitney U test compared the occurrence of these diagnoses during the summer holidays (Bavaria and Baden-Württemberg) with the remaining weeks of the year. A significant association was found between the occurrence of otitis externa and the summer holidays (p < 0.01; Fig. a), but no significant association between impacted cerumen and the summer holidays (p = 0.1986; Fig. b).

Emergencies of the inner nose, nasopharyngeal system, and middle ear
The diagnoses epistaxis, acute rhinosinusitis, and acute otitis media were evaluated as emergency diagnoses of the inner nose and the nasopharyngeal system. 
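The study ran its statistics in GraphPad Prism; as a minimal sketch of the two nonparametric procedures named in the methods above, the following pure-Python versions compute a Spearman rank correlation between the weekly counts of two diagnoses and a Mann-Whitney U statistic for a cold- versus warm-season comparison. All counts are invented illustrative data:

```python
# Sketch: minimal pure-Python versions of the two nonparametric procedures
# used in the analysis above. Weekly counts are invented illustrative data.

def ranks(values):
    """Midranks (1-based), handling ties by averaging."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = mid
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman rank correlation coefficient (Pearson r on the ranks)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def mann_whitney_u(x, y):
    """U statistic for sample x versus sample y (ties count as 1/2)."""
    u = 0.0
    for a in x:
        for b in y:
            u += 1.0 if a > b else 0.5 if a == b else 0.0
    return u

tonsillitis = [12, 15, 9, 14, 10, 13]   # weekly counts, diagnosis 1
abscess     = [ 7,  6, 8,  5,  9,  6]   # weekly counts, diagnosis 2
rho = spearman_rho(tonsillitis, abscess)

cold = [20, 25, 22, 27]                 # weekly counts, colder season
warm = [15, 14, 18, 16]                 # weekly counts, warmer season
u = mann_whitney_u(cold, warm)
print(rho, u)
```

A real analysis would additionally convert the U statistic and rho into p-values (as Prism does); the sketch only shows how the test statistics themselves arise from the weekly counts.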
In addition to the summed evaluation across the calendar weeks of the year, significance testing was performed with regard to occurrence in the colder versus the warmer season. For this purpose, calendar weeks 1–14 and 41–52 were defined as the colder season and calendar weeks 15–40 as the warm season. All three diagnoses showed a significant association with the cold half of the year (otitis media: p = 0.0022; Fig. a; acute rhinosinusitis: p = 0.005; Fig. b; epistaxis: p = 0.0043; Fig. c; Mann-Whitney U test).

Emergencies of the oropharynx
For tonsillar emergencies, acute tonsillitis and peritonsillar abscess were evaluated. Here too, in addition to the absolute values over the course of the year, significance testing was performed with regard to occurrence in the colder versus the warmer season (Fig. ), with calendar weeks 1–14 and 41–52 defined as the colder season and weeks 15–40 as the warm season, as above. Neither diagnosis showed a significant association (acute tonsillitis: p = 0.1488; peritonsillar abscess: p = 1.00; Mann-Whitney U test). In addition, a correlation analysis examined whether the temporal occurrence of the diagnosis acute tonsillitis correlated with the occurrence of the diagnosis peritonsillar abscess. The nonparametric Spearman correlation analysis (r_s = 0.05536) likewise gave no indication of a correlation (p = 0.3927).

Nasal pyramid fracture
The emergency diagnosis of nasal pyramid fracture was evaluated in absolute numbers over the course of the year (Fig. a). To analyze risk factors, the frequency of this diagnosis in weeks containing a public holiday was additionally evaluated, taking into account the public holidays of the southern German federal states. 
No significant association was found (p = 0.4704; Mann-Whitney U test, Fig. c). In this retrospective study of acute ENT emergency diagnoses at a university hospital, comprising 32,968 cases in total, five predefined hypotheses were examined and a descriptive evaluation of the individual diagnoses over the course of the year was performed. At 24.5% (8082 cases), epistaxis was the most frequent emergency diagnosis in the present study. This matches the retrospective investigation by Kuhr et al., in which epistaxis was likewise the most frequent ENT emergency diagnosis. Bolz et al. conducted a retrospective evaluation in the ENT emergency outpatient clinic of the University Hospital of Cologne in 2004; there, otitis media followed by otitis externa were the most frequent emergency diagnoses. The frequency of the individual diagnoses is influenced by, among other things, the catchment area, the size of the city, and the availability of other points of contact, such as ENT on-call services of the Association of Statutory Health Insurance Physicians (Kassenärztliche Vereinigung, KV). 
A first limitation to be noted here is therefore the single-center design of the study, which in the present case covers an ENT university hospital, without another nearby university hospital, serving a catchment area of approximately 100 km radius. These factors have a relevant influence on emergency presentations and, in the case of the present hospital, also show how high the number of presentations for trivial diagnoses is, which tie up corresponding resources at a specialized university hospital. The treatment spectrum of co-involved disciplines such as pediatrics or oral and maxillofacial surgery also relevantly influences the on-call patient population and, because it is difficult to account for in such a single-center evaluation, represents a further limitation. The results regarding the hypotheses formulated above are discussed below. In the present study, the incidence of otitis externa was significantly associated with the period of the local summer holidays. This supports the assumption that individual behavior, such as increased bathing and swimming, has a relevant influence on the incidence of the disease. Otitis externa is usually an uncomplicated condition; valid data on its general prognosis are difficult to find. Since a moist environment is a known risk factor for otitis externa, simple measures such as consistently blow-drying the ears after bathing and general ear canal care, particularly in patients with additional risk factors (e.g., hearing aid users), could reduce the incidence of otitis externa and thus relieve emergency departments. This would require patient education in everyday practice, primarily through ENT, pediatric, or general practitioner practices. 
Via the nasopharynx there is a close anatomical and functional connection between infectious emergency diagnoses such as otitis media and acute rhinosinusitis, which explains the positive correlation between the two conditions in our evaluation. To improve ventilation of the middle ear in acute rhinosinusitis, and thus to prevent acute otitis media as a complication as well as a complicated course of sinusitis, temporary use of topical decongestants would be an easily implemented treatment option; despite the frequency of these conditions, however, the available evidence remains insufficient. The positive correlation between epistaxis and acute rhinosinusitis in our evaluation, on the other hand, can be discussed in different ways. On the one hand, both conditions share similar risk factors, such as dry ambient air and digital manipulation. On the other hand, mucosa altered in the course of rhinosinusitis bleeds more readily, suggesting a causal relationship. Here too, simple preventive measures consist of consistent nasal care and the use of mucosa-moisturizing measures. Initially unexpected in the present evaluation was the finding that acute tonsillitis showed no significant seasonal correlation, contrary to some previously published data; a recent Chinese study even found an association between the occurrence of acute tonsillitis and higher temperatures. Regarding the association of acute tonsillitis with peritonsillar abscess, it was initially assumed that a peritonsillar abscess arises from acute tonsillitis. 
The present finding that the occurrence of acute tonsillitis does not correlate with the occurrence of the diagnosis "acute peritonsillar abscess" contradicts this assumption and points to the theory of abscess formation of other origin, such as from the extratonsillar Weber glands . This topic remains much debated; for example, the publication by Klug et al. juxtaposes arguments from both sides . A fracture of the nasal pyramid is the most common fracture of the face . Isolated nasal pyramid fractures in particular occur predominantly through interpersonal altercations . The hypothesis that the incidence of nasal pyramid fractures increases in weeks containing public holidays, for example through increased alcohol consumption, could not be confirmed in the present study. Since the study was conducted before the current coronavirus pandemic, the figures are not influenced by mask wearing or other pandemic-specific factors. Many national and international studies have since shown that emergency presentations declined during the coronavirus pandemic for a variety of reasons. However, in an analysis from an ENT department in southern Germany, the most frequent diagnoses before and during the COVID-19 pandemic did not change significantly. Epistaxis remained the most frequent reason for acute presentation to the ENT emergency service during the pandemic, albeit with lower absolute numbers . The already cited retrospective study by Kuhr et al., which also examined the referral source of emergency presentations at a German university ENT department, showed that emergency patients presented predominantly on their own initiative. Referrals by general practitioners were the second most frequent, and referrals by ENT specialists ranked third .
This shows that information on the potential prevention of ENT diseases, and on simple self-care measures in the event of symptoms, should be addressed primarily to patients themselves and to general practitioners. One limitation of the present study is its retrospective setting, which, on the other hand, made the large patient number possible. A further limitation lies in unknown factors that influenced the present figures without being quantifiable. These include, for example, the opening hours and staffing of surrounding ENT practices, which can lead to a general rise in the absolute numbers of emergency patients at the university hospital on public holidays and weekends. Moreover, only selected diagnoses were analyzed in the present work. Specific characteristics such as patient age and sex, as well as individual risk factors or disease courses, were not part of the analysis. In addition, only the current principal diagnosis was recorded. It is therefore possible that patients diagnosed with acute rhinosinusitis simultaneously had mild otitis media that had no influence on the analysis. It must also be taken into account that the analysis was conducted before the current coronavirus pandemic; the data were therefore not influenced by the mask mandates and/or quarantine regulations introduced in the meantime. Patient numbers in emergency departments are continuously increasing, and staffing problems in medical institutions are a more widely discussed topic than ever. The present figures illustrate that the ENT emergency service is frequented particularly in the colder season. The main reason lies in the higher incidence of some of the most common ENT emergency diagnoses, such as epistaxis, acute otitis media, and acute rhinosinusitis. In this context, optimized staffing in the colder half of the year can be discussed.
Prophylactic measures could additionally reduce the incidence of most emergency presentations and thus decrease the burden on emergency departments. A reduction in the incidence of acute otitis externa could be achieved through better patient education, including keeping the external auditory canals as dry as possible, especially during summer vacations. Addressing easily avoidable predisposing factors for individual emergency diagnoses represents a relevant starting point for reducing patient numbers in emergency departments. The ENT emergency service is frequented comparatively more often in the colder season, driven by a significant increase in the incidences of emergency diagnoses involving the inner nose and the nasopharyngeal system. There is no significant correlation between seasonal factors and the diagnoses of acute tonsillitis and peritonsillar abscess, nor between these two diagnoses themselves. A clear association between acute rhinosinusitis and acute otitis media, as well as epistaxis, was demonstrated.
A case of large uterine cystic adenomyosis outside the uterus after laparoscopic myomectomy: a case report and literature review
135b3e33-e675-4297-88c4-279921b4d8af
11705667
Laparoscopy[mh]
Uterine cystic adenomyosis is an infrequent subtype of focal adenomyosis that predominantly affects younger women . It manifests as one or more fluid-filled cavities within the myometrium containing old bloody fluid and lined with epithelium, endometrial-like glands, and stromal components . The shedding of endometrial-like tissue from these cavities during the menstrual cycle leads to hemorrhagic infarction in adjacent smooth muscle tissue and the accumulation of blood within the cysts. Enlarged cysts can cause severe dysmenorrhea, chronic pelvic pain, excessive menstrual bleeding, and infertility . Based on patient age, this condition is classified into juvenile cystic adenomyosis (JCA) and adult cystic adenomyosis; however, reports on the latter are relatively rare . In this report, we describe a case of large adult uterine cystic adenomyosis located at the original incision site following laparoscopic myomectomy. The patient, a 36-year-old Chinese woman (gravida 1, para 1), presented to our hospital with complaints of menorrhagia and intermittent right lower abdominal pain persisting for one year. Six years earlier, the patient had undergone laparoscopic surgery at our hospital for the excision of a 4 cm diameter uterine fibroid (Type 6 under the International Federation of Gynecology and Obstetrics 2018 leiomyoma subclassification system) with cystic degeneration (Fig. A) located in the right posterior uterine wall near the isthmus of the cervix. During that operation, the fibroid lay low and close to the uterine cavity, but the cavity was not opened. Upon admission, pelvic ultrasound revealed a solid-cystic mass measuring 7.5 × 5.5 × 4.7 cm within the right adnexal area (Fig. E), while both ovaries appeared normal. Ultrasound also suggested a normal endometrium and uterine cavity. Pelvic computed tomography (CT) revealed a multilocular cystic mass situated on the posterior aspect of the uterus (Fig. B).
The density within some cysts was slightly greater and unevenly distributed, with no enhancement observed following contrast agent administration. Notably, there seemed to be communication between this mass and the uterine cavity (Fig. C). No abnormalities were detected in the left adnexal region during CT imaging. The radiological findings suggested that this mass could represent an endometriotic cyst; however, all tumor marker levels were within normal limits. Given the uncertainty regarding its origin prior to surgery but assuming that it was more likely benign in nature, laparoscopic exploration was performed under general anesthesia. During the surgical procedure, we were initially perplexed by the presence of a well-defined cyst measuring approximately 7.5 × 6 cm in the right posterior wall of the uterus (Fig. A), while the bilateral adnexa appeared normal. A local adhesion band was observed behind the uterus, especially in the Douglas cavity, but the cyst was not surrounded by adhesions. Upon removal of the cyst, its root was found to be deeply embedded within the myometrium (Fig. B) and the cyst contained a characteristic chocolate-like fluid (Fig. C). It was evident that this cyst represented uterine cystic adenomyosis communicating with the uterine cavity (Fig. D), which was subsequently confirmed by postoperative pathology examination. The complete excision of the root of the cyst exposed normal myometrial tissue; subsequently, double-layer suturing of both the uterine muscle layer and serous muscle layer was performed using absorbable suture material for optimal healing of the uterine incision (Fig. E and F). Finally, the retrieval bag technique was employed to remove the cyst from the abdominal cavity. Postoperative pathology revealed ectopic endometrial-like tissue in the cyst, and immunohistochemistry revealed CD10 positivity. Both operations were performed by the same surgeon. 
On review of the patient’s CT images after surgery, the scan from the second admission revealed that the cyst was located at the same anatomical site as the uterine fibroid identified on the CT scan at the initial admission (Fig. D); the uterine artery was used as a reference. Adenomyosis is a chronic disease, and many patients suffer from dysmenorrhea and chronic pelvic pain during adolescence or at a young age . Dysmenorrhea, a risk factor for endometriosis according to some authors, plays an important role in increasing the detection rate of endometriosis and adenomyosis . These authors reported that the percentage of patients with signs of endometriosis on ultrasound increased to 21% in the presence of this symptom. At the same time, other authors observed that the prevalence of endometriosis in young women with dysmenorrhea and chronic pelvic pain ranged between 25% and 73% . However, this painful symptom is often considered a normal and transient symptom in young women. Several other authors also reported that the ultrasound-based detection rate of pelvic endometriosis was approximately one-third in young patients with severe dysmenorrhea. Adenomyosis and endometriosis are known to have significant detrimental effects on the health and reproductive capabilities of affected individuals. Recent research suggests that young patients with dysmenorrhea should be referred to an expert sonographer to minimize the delay between the onset of symptoms and diagnosis. It is also important to note the coexistence of adenomyosis and deep infiltrating endometriosis (DIE) in patients with painful pelvic symptoms. Several previous studies [ , , ] reported the coexistence of adenomyosis and DIE in approximately 45–50% of patients. Investigating the presence of adenomyosis in adolescent DIE patients is of utmost importance for achieving appropriate management. Uterine cystic adenomyosis, a rare subtype of uterine adenomyosis, often remains undiagnosed until surgery.
In this case, the diagnosis was clearly established intraoperatively. The typical clinical manifestations include severe dysmenorrhea, pelvic pain, and irregular menstruation. Uterine cystic adenomyosis should be considered when imaging examinations reveal a well-defined cystic lesion filled with hemorrhagic fluid within the myometrium . However, it is important to note that the reported cystic mass was distinctly located outside the contour of the uterus. Consequently, these cystic masses can be easily misdiagnosed as adnexal-derived tumors. Nonetheless, certain preoperative details can still provide clues suggesting this disease as a possibility; for example, preoperative ultrasound examination indicated normal fallopian tubes and ovaries while CT scans suggested a close association between the cyst and the uterine wall. Given the young age of most patients afflicted with this disease, minimally invasive surgery is considered the preferred approach. Postsurgery, there is a significant improvement in associated dysmenorrhea and a notable increase in the likelihood of successful pregnancy . Undoubtedly, complete removal of the lesion is highly important. In this case, the root of the cyst was deeply embedded within the myometrium, necessitating meticulous attention to ensure its thorough extraction and exposure of the normal uterine myometrium. Moreover, our proficient suturing technique was employed to facilitate uterine repair. Subsequently, we administered a gonadotropin-releasing hormone (GnRH) analog to the patient for six cycles without any observed recurrence. Goserelin was used to downregulate the function of the pituitary gland to cause low estrogen levels and to inhibit the activity of the ectopic endometrium. Six cycles were administered to achieve the optimal therapeutic effect and reduce recurrence. Currently, the etiology of uterine cystic adenomyosis remains unknown. 
JCA is believed to be a congenital malformation resulting from developmental defects in the Müllerian duct , while most researchers accept the endometrial injury invagination theory as the pathogenesis for adult-onset cases . Miscarriage, curettage, and surgery are known to increase the risk of both endometrial and myometrial injuries, which can lead to significant uterine adenomyosis and occasionally uterine cystic adenomyosis . Given the unique location of previous uterine fibroids in our patient’s case, their removal along with subsequent suturing posed considerable challenges that may have caused some damage at the junction between the endometrium and myometrium. This history combined with subsequent endometrial invagination could explain the development of uterine cystic adenomyosis observed in this particular patient. In addition, although there are no reports related to power morcellation and the formation of uterine cystic adenomyosis, power morcellation of uterine masses is a widely reported common pathogenetic mechanism for extrauterine adenomyoma , and morcellation should be performed only within tissue-containing bags. Clinically, the resection of uterine fibroids at specific sites and subsequent suturing of the uterus pose significant challenges for clinicians, as any defect may result in subsequent adverse events such as uterine cystic adenomyosis, as described in this study. Herein, we present a rare case of large adult uterine cystic adenomyosis located outside the uterus occurring after laparoscopic myomectomy, which further supports the invagination theory of endometrial injury as a potential etiology for adult uterine cystic adenomyosis.
An Active Learning Model for Promoting Healthy Cooking and Dietary Strategies Among South Asian Children: A Proof-of-Concept Study
ca99c3a5-3779-4d06-9265-91d1f8856a24
11820363
Health Promotion[mh]
In Canada, South Asians bear a greater burden of cardiovascular disease (CVD) compared to other ethnic groups [ , , ]. Although CVD is generally diagnosed in adulthood, the atherosclerotic process itself begins early in childhood . Not only is obesity a risk factor for CVD, but obesity during childhood predicts future development of CVD . South Asian children living in Canada are at particularly high risk for CVD as they have a higher prevalence of overweight compared to their non-South Asian counterparts . Fortunately, there is evidence that CVD risk in adulthood can be reduced by addressing elevated BMI in childhood . Consequently, to reduce the burden of CVD in adulthood, prevention efforts need to start in childhood, particularly in the South Asian community. Poor diet is a preventable risk factor for developing CVD in the South Asian community . Not only is the traditional South Asian diet high in salt, fat, and carbohydrates , but this group also reports an elevated intake of full-fat dairy and ghee . Moreover, dietary patterns worsen as a function of acculturation with increased consumption of red meat, sugar-sweetened beverages, and fast foods [ , , ]. In addition to the quality and nutritional value of foods consumed, the methods in which meals are prepared are equally important to dietary health. For example, in South Asian culture, deep-frying, high-heat cooking, and preparing foods with ghee, partially hydrogenated fats, and reheated oils are common practices that may increase this community’s susceptibility to developing CVD . Finally, culture can exert a strong influence on dietary patterns. The belief that non-traditional foods lack flavor or that healthy cooking techniques compromise taste may reflect culturally misinformed nutrition education . Thus, interventions designed in a cultural context and involving South Asian role models are more likely to be successful . 
Increasingly, nutrition interventions targeting youth have involved teaching children how to adopt healthier cooking methods as well as engaging learners in “hands-on” meal preparation activities. In fact, a systematic review of 42 school-based nutrition programs identified common themes of successful programs, one of which was interventions involving cooking classes. It should be noted that none of the interventions in this review were with South Asian children. However, in a systematic review and meta-analysis of 29 lifestyle interventions for South Asians, Brown and colleagues found that five studies examined dietary interventions for children. Of these, none included a “hands-on” component. Clearly, additional research is needed to explore interactive cooking models to improve diet-related outcomes among South Asian children, a group at high risk for developing CVD in adulthood. For this reason, the goals of our study were to examine the impact of It’s a Family Affair, a family-focused, “hands-on” cooking workshop, on three cooking and dietary strategies: using healthy cooking techniques, practicing portion control, and making healthy substitutions. We hypothesized that following participation in this intervention, participants would increase the frequency with which they used these three strategies. 2.1. Participants This proof-of-concept study consisted of a 90 min cooking workshop starting with a didactic component followed by a participatory interactive cooking segment. The multi-site, single-arm, pre–post study was approved by the Behavioral Research Ethics Board (H18-02441) at the University of British Columbia and the School District in Surrey, British Columbia (BC). We recruited children in grades 3 to 6 of South Asian descent attending four elementary schools in Surrey, BC.
To be eligible to participate, children had to (i) self-identify as being of South Asian descent; (ii) communicate in English, Hindi, Punjabi, or Urdu; (iii) be enrolled in one of the four target schools; (iv) be in grades 3 to 6; (v) provide parent/legal guardian consent to participate in this study. For parents, to be eligible to participate, they had to (i) have a child of South Asian descent; (ii) be able to communicate in English, Hindi, Punjabi, or Urdu; (iii) reside in the greater Vancouver area. Recruitment strategies included providing recruitment flyers coupled with permission slips to teachers of children in grades 3 to 6. Teachers instructed students to take the flyers home for their parents to review and return a signed permission slip if interested. Research assistants then contacted interested families and enrolled them in the cooking workshop. 2.2. Instrumentation On the day of the cooking workshop, parents and children completed informed consent/assent documents, administered in Punjabi, Hindi, Urdu, and English by our multilingual research staff. Assessments were conducted at baseline (pre-workshop) and at 1 month post-workshop. Surveys for parents consisted of sociodemographic information, a cooking and dietary strategies questionnaire, and a culturally tailored food screener. Children also completed all surveys but with an abbreviated demographic questionnaire. Sociodemographic variables included the child’s age and gender, country of birth, parents’ age and gender, ethnic background, highest education level achieved, employment status, and total household income. Additionally, we also collected household information, including total household members, family history of type 2 diabetes, preferred cuisine (Western versus South Asian), time for the evening meal, and time when turning in for sleep. Cooking and dietary strategies were assessed using a 20-item survey adapted from Raber et al.’s evidence-based conceptual framework for healthy cooking. 
Subsequently, these authors developed a measure that demonstrated construct validity and questionnaire comprehension among a sample of 267 adults . Items tapped into the following constructs: frequency (e.g., preparing meals at home, eating at fast food restaurants), technique/methods (e.g., steaming, baking, deep-frying, stir-frying), minimal usage (eating foods high in fat, sugar, carbohydrates), and additions/replacements (e.g., usage of 1% or skim milk instead of heavy cream, whole wheat flour instead of white flour). We also added items pertaining to portion control (e.g., use of measuring cups and spoons, following suggested serving sizes). Children and parents were asked how many days in the past week they engaged in these practices. Responses were categorized as never (0 days), seldom (1–2 days), sometimes (3–4 days), often (5 days), or always (6–7 days). Parents assisted their child with their responses. Food screeners were adapted from Block and colleagues and were found to be correlated with their original full-length food frequency questionnaire among 208 adults. Screeners assessed the frequency of consumption of the following food categories: fruits and vegetables (9 items), breads/wheats/grains (9 items), proteins and fats (17 items), dairy (3 items), and snacks (7 items). This survey was culturally tailored to add foods common to the traditional South Asian diet, including Indian sweets (jalebi, gulab jamun, ladoo) and snacks (samosas, pakoras, naan, roti, daals with heavy cream, mutton, etc.). 2.3. Procedure The intervention was based on the social learning theory, which, in this context, posits that children learn by observing, modeling, and imitating those around them (e.g., parents, peers, instructors) and will perform the same behavior, especially if reinforced . 
At the start of the 90 min workshop, each parent–child pair was equipped with a ‘toolkit’ that included the following items: (1) hand sanitizer, (2) measuring cups and spoons, (3) portion plate, (4) steamer, (5) non-stick frying pan, (6) low-calorie cooking spray, (7) a recipe card for “Much Better Butter Chicken”, and (8) a child-size chef’s hat. Each tool was discussed and/or utilized during the cooking workshop. The workshop was delivered once at each of the four participating elementary schools (four workshops total). The cooking workshop began with an educational component (20 min) followed by a live, “hands-on”, interactive cooking demonstration (60 min) and ended with an orientation to healthy lifestyle resources tailored for Vancouver’s South Asian community (e.g., spaceforsouthasians.com, projectbhangra.com). The education component introduced three dietary strategies: healthy substitutions, healthy cooking techniques, and portion control. The materials for the education component were created in consultation with a dietician. Healthy substitutions: The traditional South Asian diet consists of foods that are high in fat, carbohydrates, sugar, and sodium. The dietician taught the parent–child pairs how to substitute for healthier ingredients. For example, 1% or skim milk instead of whole milk; canola oil instead of ghee (i.e., clarified butter); and whole wheat instead of white flour. Healthy cooking techniques: Traditional South Asian meals are often made using less healthy cooking techniques, including deep-frying, braising in fat, and searing in excess oil. The dietician discussed alternative methods such as steaming, baking, grilling, or broiling. Portion control: Exercising portion control can be difficult for many people regardless of their ethnic background. The dietician introduced the concept of ‘portion control’ and serving sizes using tools such as the portion plate, measuring cups and spoons, and the hand method for measurement. 
The cooking demonstration led by a chef of South Asian descent taught parent–child pairs how to prepare a healthy version of butter chicken. During the cooking demonstration, participants applied strategies presented during the health education segment, such as substituting low-fat buttermilk for heavy cream. By observing the cooking process, the parent–child pairs learned new cooking skills and techniques, in addition to learning about the ingredients used for substitutions and their nutritional value. Moreover, the instructor provided positive reinforcement and feedback to the families, which encouraged the participants to continue practicing healthy cooking techniques. By repeatedly observing and modeling healthy substitutions, healthy cooking techniques, and portion control throughout the workshop, the families were set up to develop the habit of cooking healthy meals, which in turn will lead to better health outcomes in the long term. 2.4. Data Analysis Sociodemographic characteristics were summarized using descriptive statistics for children, parents, and the total group combined. Categorical variables were reported by counts and percentages, while continuous variables were summarized as mean ± standard deviation (SD) or median and interquartile range (IQR: Q1-25th and Q3-75th). To account for the hierarchical data structure of this study [with pre- and post-workshop surveys taken on an individual participant (repeated measures nested within individuals), and child and his/her parent nested within the family], linear mixed effects models with three levels were conducted to examine the effect of the workshop on the outcome variables. In particular, individual pre- and post-workshop scores were the dependent variable; time (pre- or post-workshop) and participant status (parent or child) were independent variables. 
An interaction term between time and participant status was included in the model to test whether the workshop effect differed between children and parents and estimate the effect for children and parents separately. Confounding variables adjusted in the analysis were baseline score, gender, vegetarianism, and food preference. All the analyses were performed using the SAS software version 9.4. A two-sided p -value of 0.05 or less was defined as statistically significant.
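The survey scoring and pre–post comparison described above can be illustrated with a small sketch. This is a deliberately simplified paired comparison in Python on invented toy data, not the three-level linear mixed effects model fit in SAS for the actual analysis; mapping the response categories to day counts via range midpoints is an assumption made here for illustration.

```python
# Hypothetical sketch: code the survey's response categories as midpoint
# days/week, average across items, and compare pre- vs post-workshop scores.
# This simplified paired comparison stands in for (and is NOT) the
# three-level mixed model actually fit in SAS.
from math import sqrt
from statistics import mean, stdev

# Category -> midpoint of the days-per-week range used by the survey
DAYS = {"never": 0.0, "seldom": 1.5, "sometimes": 3.5, "often": 5.0, "always": 6.5}

def score(responses):
    """Average days/week across one participant's strategy items."""
    return mean(DAYS[r] for r in responses)

def paired_t(pre, post):
    """Paired t statistic for post-minus-pre change scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Toy data: three participants, four strategy items each
pre = [score(["never", "seldom", "seldom", "sometimes"]),
       score(["seldom", "seldom", "never", "never"]),
       score(["sometimes", "often", "seldom", "seldom"])]
post = [score(["sometimes", "sometimes", "often", "often"]),
        score(["seldom", "sometimes", "seldom", "seldom"]),
        score(["often", "often", "sometimes", "sometimes"])]
```

A positive t statistic here corresponds to participants reporting the strategies on more days per week after the workshop; the mixed model used in the study additionally accounts for children and parents being nested within families.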
On the day of the cooking workshop, parents and children completed informed consent/assent documents, administered in Punjabi, Hindi, Urdu, and English by our multilingual research staff. Assessments were conducted at baseline (pre-workshop) and at 1 month post-workshop. Surveys for parents consisted of sociodemographic information, a cooking and dietary strategies questionnaire, and a culturally tailored food screener. Children also completed all surveys but with an abbreviated demographic questionnaire. Sociodemographic variables included the child’s age and gender, country of birth, parents’ age and gender, ethnic background, highest education level achieved, employment status, and total household income. Additionally, we also collected household information, including total household members, family history of type 2 diabetes, preferred cuisine (Western versus South Asian), time for the evening meal, and time when turning in for sleep. Cooking and dietary strategies were assessed using a 20-item survey adapted from Raber et al.’s evidence-based conceptual framework for healthy cooking. Subsequently, these authors developed a measure that demonstrated construct validity and questionnaire comprehension among a sample of 267 adults . Items tapped into the following constructs: frequency (e.g., preparing meals at home, eating at fast food restaurants), technique/methods (e.g., steaming, baking, deep-frying, stir-frying), minimal usage (eating foods high in fat, sugar, carbohydrates), and additions/replacements (e.g., usage of 1% or skim milk instead of heavy cream, whole wheat flour instead of white flour). We also added items pertaining to portion control (e.g., use of measuring cups and spoons, following suggested serving sizes). Children and parents were asked how many days in the past week they engaged in these practices. Responses were categorized as never (0 days), seldom (1–2 days), sometimes (3–4 days), often (5 days), or always (6–7 days). 
Parents assisted their child with their responses. Food screeners were adapted from Block and colleagues and were found to be correlated with their original full-length food frequency questionnaire among 208 adults. Screeners assessed the frequency of consumption of the following food categories: fruits and vegetables (9 items), breads/wheats/grains (9 items), proteins and fats (17 items), dairy (3 items), and snacks (7 items). This survey was culturally tailored to add foods common to the traditional South Asian diet, including Indian sweets (jalebi, gulab jamun, ladoo) and snacks (samosas, pakoras, naan, roti, daals with heavy cream, mutton, etc.). The intervention was based on the social learning theory, which, in this context, posits that children learn by observing, modeling, and imitating those around them (e.g., parents, peers, instructors) and will perform the same behavior, especially if reinforced . At the start of the 90 min workshop, each parent–child pair was equipped with a ‘toolkit’ that included the following items: (1) hand sanitizer, (2) measuring cups and spoons, (3) portion plate, (4) steamer, (5) non-stick frying pan, (6) low-calorie cooking spray, (7) a recipe card for “Much Better Butter Chicken”, and (8) a child-size chef’s hat. Each tool was discussed and/or utilized during the cooking workshop. The workshop was delivered once at each of the four participating elementary schools (four workshops total). The cooking workshop began with an educational component (20 min) followed by a live, “hands-on”, interactive cooking demonstration (60 min) and ended with an orientation to healthy lifestyle resources tailored for Vancouver’s South Asian community (e.g., spaceforsouthasians.com, projectbhangra.com). The education component introduced three dietary strategies: healthy substitutions, healthy cooking techniques, and portion control. The materials for the education component were created in consultation with a dietician. 
Healthy substitutions: The traditional South Asian diet consists of foods that are high in fat, carbohydrates, sugar, and sodium. The dietician taught the parent–child pairs how to substitute for healthier ingredients. For example, 1% or skim milk instead of whole milk; canola oil instead of ghee (i.e., clarified butter); and whole wheat instead of white flour. Healthy cooking techniques: Traditional South Asian meals are often made using less healthy cooking techniques, including deep-frying, braising in fat, and searing in excess oil. The dietician discussed alternative methods such as steaming, baking, grilling, or broiling. Portion control: Exercising portion control can be difficult for many people regardless of their ethnic background. The dietician introduced the concept of ‘portion control’ and serving sizes using tools such as the portion plate, measuring cups and spoons, and the hand method for measurement. The cooking demonstration led by a chef of South Asian descent taught parent–child pairs how to prepare a healthy version of butter chicken. During the cooking demonstration, participants applied strategies presented during the health education segment, such as substituting low-fat buttermilk for heavy cream. By observing the cooking process, the parent–child pairs learned new cooking skills and techniques, in addition to learning about the ingredients used for substitutions and their nutritional value. Moreover, the instructor provided positive reinforcement and feedback to the families, which encouraged the participants to continue practicing healthy cooking techniques. By repeatedly observing and modeling healthy substitutions, healthy cooking techniques, and portion control throughout the workshop, the families were set up to develop the habit of cooking healthy meals, which in turn will lead to better health outcomes in the long term. 
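The ingredient substitutions taught in the workshop can be summarized as a simple lookup. The pairs below come from the text above; the function itself is purely illustrative:

```python
# Substitutions taught in the workshop (pairs taken from the text above);
# the lookup function is illustrative, not study code.
HEALTHY_SUBSTITUTIONS = {
    "whole milk": "1% or skim milk",
    "ghee": "canola oil",
    "white flour": "whole wheat flour",
    "heavy cream": "low-fat buttermilk",
}

def suggest_substitute(ingredient: str) -> str:
    """Return the healthier substitute for a traditional ingredient, if known."""
    return HEALTHY_SUBSTITUTIONS.get(ingredient.lower(), ingredient)
```

Unknown ingredients are returned unchanged, so the lookup can be applied safely across a whole recipe.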
Sociodemographic characteristics were summarized using descriptive statistics for children, parents, and the total group combined. Categorical variables were reported as counts and percentages, while continuous variables were summarized as mean ± standard deviation (SD) or median and interquartile range (IQR: Q1, 25th percentile, to Q3, 75th percentile). To account for the hierarchical data structure of this study [with pre- and post-workshop surveys taken on an individual participant (repeated measures nested within individuals), and each child and his/her parent nested within the family], three-level linear mixed-effects models were fitted to examine the effect of the workshop on the outcome variables. In particular, individual pre- and post-workshop scores were the dependent variable; time (pre- or post-workshop) and participant status (parent or child) were independent variables. An interaction term between time and participant status was included in the model to test whether the workshop effect differed between children and parents and to estimate the effect for children and parents separately. Confounding variables adjusted for in the analysis were baseline score, gender, vegetarianism, and food preference. All analyses were performed using SAS software, version 9.4. A two-sided p-value of 0.05 or less was defined as statistically significant. A total of 70 child–parent dyads (n = 140) were enrolled in this study. shows the sociodemographic characteristics of participants. Sixty percent of families had more than five members living in the household; 59% reported a household income over CAD 50,000; the mean age of children was 9.0 ± 0.9 years, with 53% being male and 73% born in Canada. Eighty-nine percent of parents were female, 94% were born outside of Canada, with a median of 13 years of living in Canada, and 64% had a part-time or full-time job. 
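The descriptive summaries described above (mean ± SD; median with IQR) can be sketched with Python's standard library. The study itself used SAS 9.4, so this is only an illustration of the computations, not a reproduction of the analysis:

```python
import statistics

# Minimal sketch of the descriptive summaries: mean +/- sample SD,
# and median with interquartile range (Q1, Q3). Illustrative only.
def summarize(values):
    # "inclusive" treats the data as the whole population of interest
    # when interpolating quartile positions.
    q1, med, q3 = statistics.quantiles(values, n=4, method="inclusive")
    return {
        "mean": statistics.mean(values),
        "sd": statistics.stdev(values),   # sample standard deviation
        "median": med,
        "iqr": (q1, q3),                  # Q1 (25th) and Q3 (75th percentile)
    }
```

For example, `summarize([1, 2, 3, 4, 5])` yields a mean of 3 with an IQR of (2.0, 4.0).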
The majority of participants (77% of children; 60% of parents) identified as non-vegetarians, and 50% of children and 70% of parents preferred South Asian cuisine. 3.1. Cooking and Dietary Strategies Pre- and post-changes for the three primary outcome variables are presented in . The frequency of “using healthy cooking techniques” significantly increased following the workshop for both child and parent participants (children: mean change of 0.14, 95% confidence interval (CI) [0.02, 0.25], p = 0.02; parents: mean change of 0.24, 95% CI [0.13, 0.35], p < 0.001). Similarly, a significantly increased frequency of “practicing portion control” was reported for both groups (children: mean change of 0.70, 95% CI [0.39, 1.02], p < 0.001; parents: mean change of 0.38, 95% CI [0.07, 0.69], p < 0.02). No change was observed in “making healthy substitutions” for children or parents, and no interaction effect between children and parents was found for any of the three outcome measures. The workshop impact for the combined sample of children and parents is presented in . 3.2. Food Screener Increased consumption of “green salad and non-starchy vegetables” post-workshop was observed for both child and parent participants, each with a mean change of 0.33, 95% CI [0.05, 0.61], p = 0.02. Reduced intake of “snacks” was reported by children (mean change of −0.29, 95% CI [−0.42, −0.17], p < 0.001) but not by parents. No changes in the consumption frequency of proteins/fats, dairy, or South Asian breads and rice were found (see ). The only between-group difference in food consumption frequency was for snack intake (interaction term: p = 0.01). 
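The study's confidence intervals for mean changes come from the mixed-effects models. As a purely illustrative sketch of the underlying idea, a generic normal-approximation 95% interval for a set of pre-to-post change scores has the form mean ± 1.96 × SE (function name and data below are hypothetical):

```python
import math
import statistics

# Generic normal-approximation 95% CI for a mean change score.
# The paper's intervals were model-based, so this is a conceptual
# sketch only, not a reproduction of the analysis.
def mean_change_ci(changes, z=1.96):
    n = len(changes)
    mean = statistics.mean(changes)
    se = statistics.stdev(changes) / math.sqrt(n)  # standard error of the mean
    return mean, (mean - z * se, mean + z * se)
```

A positive interval that excludes zero (e.g., [0.02, 0.25]) corresponds to a statistically significant increase.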
To our knowledge, this is the first study to find that participation in a family-based cooking workshop for South Asian children and their parents was significantly associated with positive changes in food preparation and dietary strategies (using healthy cooking techniques and practicing portion control). Moreover, we also observed increases in the consumption of greens and non-starchy vegetables and reductions in snacking behavior. Although this was a “proof-of-concept” investigation, our favorable results could be attributed to three key elements of the intervention: being family-focused, hands-on, and school-based. 
Consistent with a qualitative study examining perspectives of healthy lifestyle behaviors of 13 South Asian children and their parents , we also observed that family dynamics appear to play a role in food preparation and dietary patterns. Specifically, at follow-up, the frequency of adopting healthy cooking techniques and exercising portion control increased for both children and parents. However, neither of the two parties demonstrated improvements in making healthy substitutions. In other words, children model parents’ behavior and vice versa. Considering that changes (or lack thereof) were made in tandem, interventions that target the family unit rather than an individual within the family may prove more effective. In fact, a systematic review of eight randomized controlled trials (RCTs) of family-based interventions provides some evidence that these models can be effective in increasing fruit and vegetable consumption, reducing intake of sugar-sweetened beverages, and reducing the frequency of fast-food dining . For the South Asian community, family approaches can be particularly promising given the longstanding cultural values of family, shared meals, and expectations around hospitality . The educational strategy of “teaching by doing” may be particularly effective for improving food preparation and dietary habits. In fact, family-based lifestyle interventions that encourage children and parents to prepare healthy meals together have been associated with improved dietary behaviors in school-aged children . Our hands-on workshop involved child–parent teams preparing, step-by-step, a healthy version of a traditional Punjabi dish (better butter chicken) under the guidance of a South Asian chef. Each dietary strategy introduced at the start of the workshop was then applied during the active cooking component (e.g., replacing heavy cream with non-fat buttermilk and plating single-serving portions). 
Interestingly, the use of cooking skills can translate to improved dietary behavior, as Asigbee et al. found that children who were consistently involved in family cooking reported greater fruit and vegetable consumption compared to their non-cooking counterparts. Most importantly, findings from qualitative studies reveal that children express a keen interest in learning how to cook and recognize the value of developing culinary skills early on to utilize later in adulthood . Equally important for intervention success is the setting in which programs are delivered. Several studies have demonstrated that schools are optimal locations to adopt and promote healthy dietary habits among children [ , , , ]. For instance, a cluster RCT of an elementary school-based program involving gardening, cooking, and nutrition education found that children in the intervention schools reported increased vegetable consumption compared to those in control schools . School-based settings can be particularly important for specific populations. In fact, in a study evaluating a Bhangra dance exercise intervention with South Asian children living in the greater Surrey, BC area (the same target population as the present study), parents strongly preferred conducting sessions at the elementary schools that their children attended, as they offered a safe and familiar environment at no cost . 4.1. Implications for School Health Policy, Practice, and Equity Schools can play a pivotal role in improving dietary health and preventing chronic disease, particularly among children from high-risk and low-resource communities. Our findings suggest that to optimize dietary interventions for South Asian children at risk for developing CVD in adulthood, not only do we need to engage caregivers/parents and incorporate cultural preferences in programming , but these educational experiences need to be “hands-on” as well as conducted in the safe and low-cost setting of our schools . 
Any lifestyle changes need to be developed, practiced, and reinforced in the environments in which the target population spends the most time. For school-aged children, these critical settings include home, school, and community-based sites. Considering that lifestyle interventions (conducted in community and school-based settings) have been demonstrated to be effective in reducing obesity-related endpoints among children of diverse ethnic backgrounds , this type of interactive workshop model can only add to the arsenal of public health efforts for CVD prevention. 4.2. Limitations This study is not without limitations. Our cooking workshop was not developed using an evidence-based conceptual framework such as the one proposed by Raber et al. , which included four components: minimal usage, flavorings, ingredient substitution, and cooking techniques. Our intervention only addressed the latter two in addition to the added strategy of portion control. Second, this proof-of-concept intervention recruited a small sample size. Thus, results are not generalizable to the larger South Asian population in Canada. Moreover, this study involved a one-time workshop without ongoing sessions or a parallel home-based component. By conducting a large-scale trial as well as lengthening and intensifying the intervention, we may observe greater improvements sustained over time. Any subsequent study should incorporate post-intervention and follow-up assessments to monitor long-term maintenance of behavior change. As well, adding a qualitative component to the assessment framework will allow us to explore why and how certain healthy cooking techniques were and were not adopted. Third, because this study did not include a control condition, we cannot conclude that improvements observed were due to the intervention itself. Fourth, outcomes were measured using self-report instruments and could be subject to social desirability bias. 
Finally, we did not measure anthropometric or other clinical endpoints and therefore could not examine changes in CVD risk factors. 
Childhood is the optimal time to adopt positive lifestyle habits that can lower the risk of developing CVD risk factors later in life. The goal of this study was to examine the impact of a nutrition education model that engaged both children and their parents as well as encouraged “hands-on” application of health-promoting cooking strategies. Following the workshop, children and parents increased their frequency of using healthy cooking techniques and practicing portion control. Nutrition interventions that involve the family as a unit and are conducted in the school setting may be conducive to behavior change. However, this proof-of-concept model needs to be tested in the context of a large-scale randomized controlled trial.
Balloon pulmonary angioplasty for chronic thromboembolic pulmonary hypertension: a clinical consensus statement of the ESC working group on pulmonary circulation and right ventricular function
fff9a9e9-22b9-49eb-a8b4-c2810f9cadc3
10393078
Internal Medicine[mh]
The 2022 European Society of Cardiology (ESC) guidelines on the diagnosis and treatment of pulmonary hypertension (PH), developed together with the European Respiratory Society (ERS), provide recommendations on the optimal management of group 4.1 PH, labelled as chronic thromboembolic pulmonary disease (CTEPD) with [chronic thromboembolic pulmonary hypertension (CTEPH)] or without PH. CTEPH is thought to result from pulmonary thromboembolism and is characterized by organized vascular occlusions of the pulmonary arteries. While pulmonary endarterectomy (PEA) is guideline recommended as the treatment of choice for suitable patients with CTEPH, interventional treatment by balloon pulmonary angioplasty (BPA) is now also guideline recommended in the therapeutic algorithm of CTEPH. Guidelines recommend BPA where PEA is not feasible and medical therapy does not ameliorate symptoms, meaning inoperable CTEPH and PH after PEA. While existing data suggest prognostic benefit, this has yet to be formally demonstrated. On the other hand, this is true also for medical treatments that are used together with BPA today. Our Clinical Consensus Statement is focused on technical, structural, and logistic requirements for performing BPA, patient selection and preparation, treatment goals, procedural details, complications and their management, BPA outcomes and patient follow-up, cost of BPA, and patient needs. It focusses mainly on CTEPH, with less evidence for CTEPD without PH, and not on other pathologies of the pulmonary arteries. While the best available current evidence is summarized in this document, questionnaires were utilized for statements that were not evidence based to arrive at a group opinion by surveying the panel of participating experts. Responses were aggregated and shared with the group who incorporated the results in their individual paragraphs. 
The present document serves as a practical guide to performing BPA in Europe, according to the refinement of the technique by Japanese interventionists. We systematically searched PubMed, OVID, and EBSCO to identify randomized controlled trials and prospective controlled or retrospective observational studies that reported on the efficacy and safety of BPA without any language restrictions. The last search was performed on 21 December 2022, without any publication date limitations. The search string was created for PubMed and modified accordingly for the other databases and is included in the , . The additional search of reference lists of newly identified studies or discussions in the authors’ group did not identify other studies that met the eligibility criteria. Non-original studies (reviews, editorials, and commentaries), conference abstracts, meta-analyses, and case reports were excluded. The literature search criteria underlying this document can be found in the , . Furthermore, ClinicalTrials.gov was manually searched to find ongoing or unpublished clinical trials. For questions without high-quality evidence, a survey was prepared. The survey was distributed to the authors to prepare consensus-based text. The minimum threshold for consensus was 60% because BPA is an emerging procedure, and at the time of the writing process, individual practices, particularly at the technical level and choice of materials, have varied depending on local availability and conditions of reimbursement. We have added filled questionnaires in the , . Some questions were discussed during online meetings and the answers were formulated together, e.g. the etiology of lung injury (LI). Areas of knowledge gaps were identified. After the first published BPA case report in 1988 of a 30-year-old man with PH after pulmonary embolism, Feinstein et al . described 18 patients in 2001 with inaccessible or ‘nonsurgical’ CTEPH who underwent BPA. 
Reperfusion pulmonary edema occurred in 11 patients, and 30-day mortality was 5.5%. Because safety issues were compromising efficacy outcomes in those early days of the treatment, Japanese interventionists refined BPA to bring it to today’s standards, mainly through a more cautious approach with multiple sessions, initial use of smaller balloons, and coronary equipment. BPA is performed in a catheter laboratory, or in a hybrid room, preferably with biplane cineangiography. BPA is a complex procedure requiring a detailed understanding of three-dimensional pulmonary vascular anatomy . Segment labels employ the letter A and numbers, in accordance with labeling of the bronchial tree. Operators need to be aware of anatomical variability. The left upper lobe is particularly prone to variation, with between two and seven pulmonary arterial branches, while the most constant is the left lower lobe, with only four described variants. The left A7 is variable and may come off A8 . In addition, characteristics of lesion types must be emphasized. CTEPH lesions tend to be close to vessel bifurcations. Type A ring-like stenosis lesions result in a concentric stenosis, as if a ring were put on the vessel. Type B web lesions are hazy or abrupt narrowing opacities of the vessel that may appear in various configurations, for example as complex webs or slits. Types C and D represent occlusions: tapered subtotal lesions (type C) appear almost completely occluded but have continuous or discontinuous subtle and slow blood flow distal to the obstruction, whereas type D total occlusion lesions appear as pouches or ostial occlusions. Type E tortuous lesions represent web lesions or occlusions in highly tortuous small vessels distal to subsegmental arteries, surrounded by cotton wool–like stains of capillary arteries . 
The principle of BPA is to dilate intraluminal fibrotic obstructions and to open occlusions by penetrating proximal fibrous intimal caps with wires, stretching the vessel with balloons and compressing organized thrombi. , Once flow is established, distal vascular territories are reperfused and gain cross sectional diameter over the following 4–6 weeks. No restenosis has been observed to date. The 2022 ESC/ERS guidelines recommend that all patients with CTEPH are reviewed by a multidisciplinary CTEPH team capable of multimodality management, to decide the initial treatment after review of all relevant data . A CTEPH team should consist of a PEA surgeon, a BPA interventionist, a PH specialist, a radiologist experienced in thoracic imaging, an intensive care specialist, a nurse specialist, and a data manager. The team should meet regularly to review new referrals, follow-up cases, and complications. Centers should have BPA activities at a minimum of 100 procedures/year as described in the guidelines, ideally concentrating care and expertise in high-volume centers. Centers should be able to demonstrate excellent safety outcomes such as a 30-day mortality of <2% and serious adverse event (SAE) rates of <5% per session. According to a recent discussion of the RACE and MRBPA studies, a BPA–SAE is defined as an event that results in death, is life-threatening, requires hospitalization or extended hospitalization for treatment, causes permanent or marked impairment or dysfunction, and/or requires medical or surgical intervention to prevent one of the above. All task force members believed it was important that any center planning to set up a BPA service be proctored by an expert center. All BPA centers should seek an annual audit to review the achievement of BPA goals and survival. 
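The safety benchmarks for BPA centers stated above (30-day mortality <2%, serious adverse event rate <5% per session, and a minimum of 100 procedures/year) can be expressed as a simple audit check. The thresholds come from the text; the function, its name, and the choice of denominators (patients for mortality, sessions for SAEs) are illustrative assumptions, not part of the consensus statement:

```python
# Illustrative audit check against the center benchmarks named above.
# Denominators are assumptions: mortality per treated patient,
# SAE rate per BPA session. Not a clinical or accreditation tool.
def meets_bpa_benchmarks(procedures_per_year: int,
                         deaths_30day: int,
                         patients: int,
                         sae_count: int,
                         sessions: int) -> bool:
    mortality_rate = deaths_30day / patients   # assumed per-patient
    sae_rate = sae_count / sessions            # assumed per-session
    return (procedures_per_year >= 100
            and mortality_rate < 0.02
            and sae_rate < 0.05)
```

For example, a center with 120 procedures/year, 1 death among 100 patients, and 10 SAEs over 300 sessions would meet all three benchmarks.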
, , , The task force members agreed that onsite cardiothoracic surgery, extra-corporeal membrane oxygenation (ECMO), coils/gelfoam, and covered stents must be available as bail-out options when performing BPA. Patient-related factors influencing patient selection are as follows: To be eligible for BPA, patients must have New York Heart Association (NYHA) class II or greater symptoms, most likely due to CTEPD. Patient cooperation during BPA should be ascertained, because lying flat for the duration of the intervention and the ability to perform a proper breath hold are mandatory. Contrast allergy and renal and thyroid dysfunction are handled according to general guideline recommendations. , Patients who reject PEA despite the recommendation of a CTEPH team have a poor prognosis and are advised to be evaluated for BPA. There are now reports to support BPA as a treatment option in patients with operable disease . Darocha et al. reported the same efficacy and safety of BPA in technically operable patients; however, they excluded patients with large central clots and complete proximal main or lobar branch occlusions. Nishihara et al.’s report includes these lesions. Only case reports exist on rescue BPA in patients who are in right heart failure. Sex differences in the treatment of CTEPH have been evaluated in a few studies. , PEA was performed more frequently in men, while more females were classified as inoperable. Women more frequently have distal, technically inaccessible disease and tend to reject PEA, although mortality was similar between the sexes. , In Japan, female CTEPH patients are elderly, with less deep vein thrombosis, fewer acute embolic episodes, lower arterial oxygen tension, and more peripheral thrombi, and female Japanese CTEPH patients derive less improvement from PEA. Advanced age, frailty, and comorbidities remain important causes of non-operability. 
BPA in elderly patients improves functional class, hemodynamics, and biomarkers, and the rate of procedural complications and peri-procedural mortality is low. , Treatment goals should be individualized in patients with multiple co-morbidities, where standard risk–benefit assessments may not hold. Disease-related factors influencing patient selection are as follows: Screening and imaging for other causes of PH must have been performed in patients undergoing BPA, according to guidelines, including a coronary angiogram in patients at risk for coronary artery disease. , After confirming the diagnosis, imaging is performed for the assessment of lesion distribution and lesion characteristics . Computed tomography pulmonary angiography (CT-PA) with multiplanar reconstruction is most commonly used for this purpose today. With a higher spatial resolution and transarterial contrast enhancement, cone beam CT of the pulmonary arteries shows peripheral lesions in more detail than CT-PA and can be used instead. In addition to cross-sectional imaging, most centers perform diagnostic and planning digital subtraction angiography of the pulmonary arteries (DSA-PA) during deep inspiratory breath holds in orthogonal projections. DSA-PA supplements CT-PA with dynamic information on parenchymal perfusion and allows for assessment of lobar, segmental, or subpleural perfusion defects. Direct segment-by-segment invasive PA contrast injection, including intravascular imaging with intravascular ultrasound (IVUS) and/or optical coherence tomography (OCT), may be appropriate in cases of diagnostic uncertainty or in special instances. The severity of baseline hemodynamics predicts BPA–related complications. , Roughly 40% of patients have significant pulmonary microvasculopathy that can be assessed by the PA occlusion technique. BPA after PEA has been found to be more difficult and less safe. Residual PH after PEA has been reported in 17%–31% of patients. 
, Clinically relevant residual PH conferring worse long-term survival after PEA mainly occurs when the mean pulmonary arterial pressure is >38 mmHg. Selection of candidates for BPA after PEA includes a complete re-assessment of the patient with symptomatic PH 3–6 months after PEA, using high-quality imaging techniques such as CT-PA and DSA-PA, together with right heart catheterization. Recently, single-center series with small numbers of patients have reported improved hemodynamics and functional class at follow-up, suggesting BPA as a complementary therapeutic strategy for residual or recurrent PH after PEA. , , However, the hemodynamic improvement was smaller than that seen with primary BPA. Polish authors describe very hard occlusions after PEA that are challenging to dilate and may result in hemoptysis requiring embolization. Patients with typical CTEPH vascular lesions but without PH at rest have been labeled as CTEPD without PH and are unfortunately less commonly referred to CTEPH centers. Two series have been published: Wiedenroth et al. report improved functional and hemodynamic status in 9/10 treated patients with one to five BPA sessions (average four sessions/patient), and Inami et al. found improved functional status in all 15 patients after BPA, with less need for nasal oxygen after four BPA sessions (see , ). Few complications and no periprocedural deaths were reported. Systematic data regarding prognostic impact and therapeutic goals are lacking. According to the 2022 ESC/ERS guidelines, PEA or BPA should be considered in selected symptomatic patients with CTEPD without PH. For the identification of these patients, results of echocardiography, lung function testing, BNP/NT-proBNP, chest radiography, and cardiopulmonary exercise testing are considered in a multiparametric approach. 
, Patient preparation for BPA is as follows: Pretreatment with vasodilator drugs targets peripheral vascular remodeling, improves hemodynamics, and has been reported to reduce BPA–related complications, which has led to the guideline IIa recommendation of medical therapy prior to BPA. Prospective studies investigating the efficacy and safety of medical treatment before BPA are needed and ongoing (ClinicalTrials.gov Identifier NCT04780932). The use of non-vitamin K antagonist oral anticoagulants (NOACs) vs. vitamin K antagonists in CTEPH may be associated with more clotting, but those data are not from a randomized study. Because bridging anticoagulation is associated with increased bleeding, the writing group members agreed on continuing vitamin K antagonists with titration to an INR of <3.0 prior to BPA. The gold standard of mechanical treatment of CTEPH is PEA, where the goal is to remove the obstructing material from the PA lumen as completely as possible in a single session. In contrast, BPA is delivered as multiple staged interventions, and obstructing material remains in place. , , Within the writing group, there was universal agreement that an ideal BPA treatment should result in symptom relief at rest and during exercise, treating all lesions whenever possible. Further goals are to improve quality of life and to normalize pulmonary hemodynamics and gas exchange, with a normal resting oxygen saturation. To assess individual treatment goals, a multiparametric approach utilizing exercise capacity, hemodynamics including pulmonary vascular resistance (PVR), biomarkers, World Health Organization classification, cardiopulmonary exercise testing, and quality of life assessments is practical. Because of the prognostic threshold of a mean pulmonary artery pressure (mPAP) of 30 mmHg, the minimum hemodynamic goal of BPA is a final mPAP < 30 mmHg. 
Technical aspects (type, localization, and number of accessible lesions), advanced age, comorbidities, and the patient’s expectations may influence individual treatment goals. The writing group rejected a prespecified timeframe to complete BPA outside a study protocol but agreed that the interventional schedule should be adapted to the patient and to logistics. For example, two initial sessions are done within 1–3 days in a single hospital stay, and subsequent sessions are planned after 1–3 months each. BPA is performed in conscious patients under local anesthesia. A right heart catheterization is useful before each BPA session, including thermodilution or Fick cardiac output (CO) and pressure assessments (right atrial pressure, PA pressure, and PA wedge pressure), off supplemental oxygen, and predominantly via the femoral vein with generous availability of ultrasound-guided venous puncture. Arterial pressure and saturation may be measured noninvasively. Standard angiographic projections are anterior–posterior (AP) and left anterior oblique (LAO) 60°. Contrast media are Iodixanol 320 or Iomeprol 50/50 mixed with saline, hand injected. Two strategies of anticoagulation during BPA are practiced: 2000–5000 IU of unfractionated heparin, or full anticoagulation at an activated clotting time of 250 s as in coronary intervention. Patients receive supplemental oxygen to maintain a saturation > 92% during BPA. Oxygen can selectively dilate pulmonary arteries. Each procedure is typically performed by two interventionists working together, with the goal of spending 30–60 min of radiation time in one session. Sheath size is 7F–9F with a telescoped 6F guiding catheter (mother and child technique, ); some prefer 8F/6F or 6F/6F or 8F/8F. Standard access to the lung segments is by multipurpose guiding catheter (MPA-1) or Judkins right (JR). 
Many different 0.014″ wires are used, usually hydrophilic wires, but some prefer hydrophobic silicone-coated wires (Miracle-3) that are shaped according to need. In contrast to Japan (B-pahm), there are no dedicated BPA wires in Europe. Semi-compliant balloons are used at 6 atm on average, clearly below rated burst pressures, including large sizes of 8–10 mm diameter over-the-wire balloons via 0.035″ guidewires. Microcatheters and guide extension catheters are commonly used, while covered stents, coils, and gelfoam need to be available to manage complications. Occasionally, implantation of a stent is required to maintain the patency of a dilated lesion in a vessel with large intimal flaps. , Initially, ring and web lesions are addressed, with a preference for the right basal artery, and CTOs are addressed once hemodynamics have improved. Immediate BPA success is suspected when brisk venous return is observed post BPA. If mPAP ≥ 40 mmHg and PVR ≥ 7 WU, only 2–2.5 mm balloons are safe. Only a minority practices pressure wire–guided angioplasty. Routine endovascular imaging with IVUS or OCT is not standard of care. Complications decrease with operator experience. It is essential to establish consensus on a classification of complications and on their reporting . Complications should be reported both by procedure and by patient. Definition of complications: Non-specific complications such as allergic reaction, access-site complications, kidney injury, and infections do not exceed 1% in the literature. Specific complications or ‘thoracic complications’ are those related to the properties of the BPA substrate, the pulmonary arteries, and the underlying pathology . Grades of injury after BPA are depicted in . BPA-related vascular injury was shown to be an independent predictor of LI after BPA (odds ratio, 20.1) but does not influence the overall outcome of BPA. 
Reperfusion edema usually develops several hours after the procedure, with clinical signs of desaturation and foamy sputum. The group agreed that LI predominantly originates from vascular injury and that LI as pure reperfusion injury may be rare. Sudden cough, a drop in oxygen saturation, and hemoptysis are the most common symptoms of a complication during BPA. Most frequently mild to moderate, hemoptysis can resolve quickly, but it requires immediate attention and a search for the cause. 1.1. Vascular injury is observed during the procedure ( and ). Distal perforation by the angioplasty guidewire is commonly accompanied by hemoptysis, with bleeding beginning a few minutes after the injury. Management depends on its mechanism and on how well acute hypoxemia is tolerated by the patient. The first step in persistent hemoptysis is halting the procedure and starting an angiographic search for the injury. Oxygen therapy must be adapted to the desaturation, and the balloon is inflated for a prolonged period at the suspected angioplasty site, or the guiding catheter is wedged immediately; if hemoptysis persists, heparin anticoagulation is reversed with protamine. Distal embolization can be performed, preferably using resorbable material (absorbable gelatin sponges, e.g. Serescue; Astellas Pharma Inc., Tokyo, Japan or CuraSpon®), or a covered or uncovered stent can be placed as bailout in case of rupture. In addition, continuous positive airway pressure is useful, and sometimes intubation or ECMO may be needed. PA dissection at the angioplasty site or by the tip of the guide catheter is benign in the majority of cases. In the large vessel segments, PA dissection may reflect contrast medium entering deep layers of thrombus. Arterial rupture is uncommon, and thrombosis at the dilated site is rarely reported. 1.2. LI is the main severe complication ( and ). 
It is defined by new ground-glass opacities, consolidation, and pleural effusion in the territory of dilated vessels, with or without hemoptysis, with or without hypoxemia. LI is commonly detected if a routine chest scan is carried out; however, clinical consequences are limited to relatively few cases. Ikeda et al. performed CT scanning within 15 min of completion of 119 BPA sessions, identifying signs of LI in 10% of simple-lesion BPA and in 40% of occlusion-lesion BPA. Most often, LI appears > 3 h after BPA with deterioration of the respiratory state . This is why at least one overnight stay after BPA is felt to be safe. In asymptomatic patients, further imaging is not needed before leaving the hospital. There are different degrees of severity: mild (with a need for oxygen inhalation via prongs), moderate (with a need for high-flow oxygen via face mask), and severe (requiring non-invasive positive pressure ventilation or mechanical ventilation +/− ECMO) . Mortality rate varies from 0% to 10%, with an average of ∼2% (see , ). Deaths are directly attributable to pulmonary injury in more than half of the cases; less commonly, death occurs due to right heart failure or sepsis. 2. Predictors of specific complications: Lesion types predict complications: complication rates are <3% for ring-like stenosis and web lesions, up to 15.5% for subtotal occlusions, and up to 40% for tortuous lesions. As more chronic total occlusions are addressed, new information on procedural complications will be observed, although it does not appear that CTO interventions increase complication rates. , Hemodynamic parameters such as a high mPAP before BPA consistently represent independent risk factors of severe LI. , , , , In the RACE study, piecewise logistic regression identified mean PAP > 45 mmHg as a predictive factor associated with BPA-related adverse events and SAE in 88 patients who had BPA at any time. Therefore, the threshold mPAP increasing the incidence of LI is around 45 mmHg. 
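As a hedged summary, the LI severity grades and the mPAP risk threshold described above can be sketched as a simple lookup. The dictionary keys and function names are my own, not part of the statement:

```python
# Sketch of LI severity grading by the level of respiratory support
# required, plus the ~45 mmHg mPAP threshold above which BPA-related
# adverse events were more frequent in the RACE analysis.

LI_SEVERITY = {
    "oxygen_via_prongs": "mild",
    "high_flow_oxygen_face_mask": "moderate",
    "positive_pressure_or_mechanical_ventilation": "severe",  # +/- ECMO
}

def li_risk_elevated(mpap_mmhg: float) -> bool:
    """mPAP > ~45 mmHg predicted BPA-related (serious) adverse events."""
    return mpap_mmhg > 45.0

severity = LI_SEVERITY["high_flow_oxygen_face_mask"]  # "moderate"
```

This is a didactic classification aid only; clinical grading of LI follows the cited definitions and figures.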
There is a BPA learning curve, with experienced centers observing a significant reduction in adverse event rates with more practice. In the French registry, complication rates fell from 11.2% per session in the first 1006 sessions to 7.7% in the more recent 562 sessions. 3. How to avoid BPA complications: Refinements of the BPA technique have decreased complications. The following practical rules have been developed based on the Japanese experience, particularly in case of mPAP ≥ 40 mmHg: (i) treat simple lesions, rings, and webs first; (ii) verify proper guidewire placement; (iii) undersize balloons (balloon:artery ratio of 0.5–0.8, or 2–2.5 mm balloons for the first session); and (iv) consider using the pressure wire technique to assess distal pressure (should remain <30 mmHg). , IVUS and/or OCT can be helpful to determine exact vessel diameter and identify dissections, thrombus, and intraluminal calcification. Given the data that complications are more severe among patients with very high pulmonary pressures, there is a logic to optimizing medical therapy prior to BPA. In the RACE study, less severe hemodynamic compromise was recorded at the time of BPA in patients pretreated with riociguat than in patients treated first with BPA. Consequently, the incidence of BPA-related SAEs was lower in patients who were pretreated with riociguat [adverse events in 5 (14%) of 36 pretreated patients vs. 22 (42%) of 52 non-pretreated patients]. IMPACT-CTEPH is formally addressing this question by assessing the efficacy of macitentan on top of standard riociguat on pre-BPA PVR % baseline change in patients with inoperable CTEPH (ClinicalTrials.gov Identifier NCT04780932). Despite the relatively high contrast volume used during the entire BPA treatment cycle, BPA can be safely performed in patients with chronic kidney disease, and renal function may improve with an increase in CO. 
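A minimal sketch of the balloon-undersizing guidance above, assuming a balloon:artery ratio of roughly 0.5–0.8 and the small-balloon rule for severe hemodynamics; the function name and rounding are illustrative, not a substitute for the cited protocols:

```python
# Sketch of balloon sizing per the practical rules above: undersize to
# a balloon:artery ratio of ~0.5-0.8, and restrict to 2-2.5 mm balloons
# for the first session when mPAP >= 40 mmHg. Illustrative only.

def suggested_balloon_range_mm(vessel_diameter_mm: float,
                               mpap_mmhg: float,
                               first_session: bool) -> tuple:
    low = 0.5 * vessel_diameter_mm
    high = 0.8 * vessel_diameter_mm
    if first_session and mpap_mmhg >= 40.0:
        # Severe hemodynamics: small balloons only in the first session.
        low, high = min(low, 2.0), min(high, 2.5)
    return (round(low, 1), round(high, 1))

# 4 mm vessel, mPAP 45 mmHg, first session -> capped at (2.0, 2.5)
capped = suggested_balloon_range_mm(4.0, 45.0, True)
```

In practice, IVUS/OCT or pressure-wire data would refine the choice, as noted above.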
, Radiation exposure (Air Kerma, effective dose, fluoroscopy time, and Kerma area product) needs to be rigorously collected for each patient. Long-term follow-up is guideline recommended for CTEPH. BPA leads to a significant decrease of right ventricular afterload (see , , and ). The initial case series from the US reported a 23% decrease in total PVR, a 21% decrease in mPAP, and a 5% increase in CO after BPA. BPA interventionists from Europe and the US reported an approximately 42%–45% decrease in PVR with an 18%–29% decrease in mPAP after refined BPA. , , More recent European series reported a decrease of PVR by 34%–60% and of mPAP by 23%–44%. , , , The Japanese registry reported a 66% decrease in PVR and a 48% decrease in mPAP. The results obtained in Japan tend to be better than those in other countries, , which may be due to the more extensive experience of operators, patient selection in light of less PEA activity, and the different structure of vascular lesions with a less inflammatory thrombotic phenotype. The most recent randomized controlled trials in two expert BPA centers reported a 65% decrease of PVR, a 40% decrease of mPAP, and a 10% increase of CO after BPA. , While vasodilators mainly increase CO, BPA decreases mPAP, resulting in lower PVR. Therefore, BPA success has to be reported together with ongoing vasodilator treatments. Hemodynamic improvement is more pronounced when complete pulmonary revascularization is achieved, including treatment of chronic total occlusions. , For patients with central clots, the decrease in PVR that can be achieved with BPA may be similar to that in patients with distal lesions, but this needs to be confirmed in larger prospective studies. Hemodynamic improvement after BPA correlates with significant improvements in clinical status, right ventricular function, and patient-reported quality of life. 
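The series above report outcomes as relative change from baseline; a trivial helper (with hypothetical names and made-up example values) makes the convention explicit:

```python
# Sketch: reporting BPA outcomes as percent change from baseline, as in
# the registry figures quoted above (e.g. ~66% PVR decrease in Japan).

def percent_change(baseline: float, follow_up: float) -> float:
    """Signed percent change; negative values indicate a decrease."""
    return 100.0 * (follow_up - baseline) / baseline

# Hypothetical course: PVR 9.0 -> 3.1 WU, mPAP 44 -> 23 mmHg
pvr_change = percent_change(9.0, 3.1)     # about -65.6 %
mpap_change = percent_change(44.0, 23.0)  # about -47.7 %
```

As the text notes, such figures must be reported together with ongoing vasodilator treatment, since vasodilators and BPA lower PVR by different mechanisms.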
, In Feinstein’s series, follow-up (mean, 36 months) mPAP, NYHA functional class, and 6-minute walking distance (6MWD) had improved, and all previously dilated vessels were patent at angiographic reassessment. Clinical benefit of BPA was demonstrated by improvements in functional class and 6MWD, an increase in peak oxygen consumption, and a decrease in VE/VCO2 and PETCO2. , A significant reduction in NT-proBNP levels occurs, , , , , , , along with an improvement in right ventricular function by echocardiography and cardiac magnetic resonance imaging, , electrical reverse remodeling by ECG, and an increase in pulmonary vascular compliance. , BPA in elderly patients is equally effective. , 3- and 5-year survival rates were reported as 92%–95% , , and 88%–90%, , respectively. The efficacy of BPA as an adjunctive treatment after PEA has been evaluated in several small studies indicating that BPA is feasible, but its effectiveness appears to be lower, and the risk of complications higher, than in patients undergoing primary BPA. , , Significant symptomatic improvement after BPA was demonstrated in CTEPD patients without PH at rest. , Perfusion scans or DSA perfusion are helpful to identify residual lesions. Regardless of the result of BPA, guidelines recommend that patients should be regularly followed up with functional class, NT-proBNP, echocardiography, and 6MWD, including invasive assessment with right heart catheterization 3–6 months after the last BPA. Post-BPA treatments: After BPA, oral anticoagulation is continued life-long. Existing information on the type of anticoagulation does not suggest a clear advantage of either NOACs or vitamin K antagonists. In antiphospholipid syndrome, vitamin K antagonists are prescribed. 3–6 months after BPA, or at any time if symptoms recur, a right heart catheterization is guideline recommended to assess the need for continuation, termination, or initiation of medical treatments approved for CTEPH. 
For the decision to continue or terminate or initiate medical treatments including riociguat, a multiparametric approach can be used, taking into account patient needs and symptoms during their regular life after BPA, the number of remaining lesions, exercise capacity, hemodynamics, arterial saturation, and response to ongoing or previous medical treatments. 
No cost effectiveness studies have been done so far, and there are no comparative studies with PEA. One study analyzed the treatment cost in France. Authors found that the first BPA session was performed 1.1 years (IQR 0.3–2.92) after the first PH hospitalization. A mean of three stays per patient with BPA sessions was reported. The total hospital cost attributable to BPA was €21 245 ± 12 843 per patient. BPA is a novel treatment and patients wish to hear about the concept, the expected number of procedures, the speed of improvements, the likelihood of an ongoing need for medical treatments, and the risk of hemoptysis or a SAE during the procedures. While more severe hemodynamics at baseline classify a higher risk category, , no systematic BPA risk scores have yet been established. What helps patients gain confidence is that they can watch the procedure on a monitor and see the blood vessels opening up after each step of the treatment. While physicians’ guidance during BPA is valuable, a structured brochure or video informing the patient about important details of BPA beforehand is desirable. Patients report that they improve immediately after the first BPA procedure and that they can breathe better and deeper. Further improvements thereafter were also noted to be significant. Patients want to speak to their physician and set goals together. 
For example, if the patient is an elderly lady, she may be content with fewer procedures, compared with a young man who wants to compete in sports and needs even the smallest pulmonary vessel open. The 2022 ESC/ERS guidelines have upgraded their recommendation for BPA from IIb-C in 2015 to I-B. While this reflects increased understanding of its role in CTEPH treatment over the past 10 years, many open issues remain. For example, there is a need for appropriate training standards in high-volume centers, a standardized technique to be globally adopted, and development of dedicated interventional equipment . Patient selection remains a significant challenge despite the use of multidisciplinary CTEPH teams, with eligibility for BPA decided on the basis of a consensus among team members. With scarce information on long-term survival, treatment goals need clarification, as it is uncertain to which degree mPAP and PVR lowering leads to improved outcomes. Many patients have mixed anatomical lesions with lobar, segmental, and microvascular involvement, but presently, there is no standardized approach to combination therapy with PEA, BPA, and pharmacological therapy . Future research should address whether PH-targeted medication prior to BPA is beneficial, whether combination treatments are successful, whether medication should be continued after completion of BPA, and whether PEA and BPA should be performed simultaneously or sequentially . Unmet needs include risk scores for patients undergoing BPA , identification of critical target lesions and of a hemodynamic threshold necessary to normalize prognosis , and determination of whether post-PEA patients have outcomes equivalent to those of primary BPA candidates. The International BPA Registry (NCT03245268) and the Japan Multicenter Registry for BPA will play important roles in answering some of the open questions. 
Cellular therapeutics and immunotherapies in wound healing – on the pulse of time?
Demographic changes pose significant challenges for healthcare systems across the globe. Advancing age is often accompanied by the prevalence of underlying health conditions, such as diabetes and arteriosclerosis, which contribute to the higher incidence of chronic wound healing disorders . For example, a retrospective analysis of Medicare beneficiaries in the United States revealed that a substantial number of individuals, up to 8.2 million patients (14.5%), were affected by chronic wound disorders . Of particular concern is the finding that 5-year mortality rates of patients suffering from diabetic foot complications are comparable to the overall rates observed in cancer-associated cases (31%) . Ordinarily, a wound heals damaged tissue through 4 sequential, overlapping phases: coagulation or hemostasis, early and late inflammation, proliferation, and tissue remodeling (Fig. a). These steps occur in a timely and highly orchestrated manner that involves the activity and recruitment of myriad cell types and ultimately culminates in the restoration of tissue homeostasis via the regeneration of functional tissue or, as is most often the case, scar tissue. Whereas normal wounds are able to resolve, chronic wounds are defined as wounds that have failed to progress toward healing after 4 weeks of standard of care (SOC) treatment . This occurs because chronic wounds are unable to properly progress through the 4 aforementioned phases, precluding the restoration of a functional barrier and making them susceptible to infection and tissue death. Despite etiologies ranging from vascular insufficiency to pressure or diabetic complications, the underlying physiology is typically unifying across all types: arrest in the inflammatory phase of wound healing, characterized by excessive inflammatory signaling, increased protease activity, abundant reactive oxygen species, and a deficiency of growth factors and stem cell activity . 
Other factors may also contribute to deficient wound healing and chronic wound formation, including attenuated proliferation, inadequate angiogenesis, and microorganism colonization (Fig. b). Non-healing wounds have also been found to have significantly lower oxygen tension levels, ranging from 5 to 20 mmHg, whereas proper wound healing requires a tissue oxygen tension above 25 mmHg . Conditions like diabetes or vascular disorders can further contribute to a reduced peripheral oxygen supply. A considerable proportion (78%) of chronic wounds is affected by the presence of biofilms, structured communities of microorganisms embedded within an extracellular polymeric substance that often forms in wound beds. This state allows microorganisms to become highly tolerant and resistant to the host’s immune system and antimicrobial substances, posing a significant challenge to treatment and recovery . To promote wound healing effectively, evidence-based strategies must be identified. However, there remains a paucity of studies that comprehensively summarize the existing body of evidence on this subject. The current SOC management across all types of chronic wounds includes initial surgical debridement of damaged or necrotic tissue followed by the decision to either allow the wound to heal by secondary intention or to graft tissue onto the wound bed. Either choice is then supported by the use of local or systemic antibiotic treatment and the application of an appropriate wound dressing. It should be noted that different approaches and techniques exist within each of these steps . Exploring this line of research has the potential to customize therapeutic strategies and achieve more targeted effects while minimizing systemic side effects. 
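Two quantitative cut-offs from this overview, the 4-week chronicity definition and the oxygen-tension requirement, can be sketched as simple predicates. The helper names are my own:

```python
# Sketch encoding two thresholds from the text: a wound is considered
# chronic after 4 weeks without progress under standard of care (SOC),
# and healing requires a tissue oxygen tension above 25 mmHg
# (non-healing wounds typically measure only 5-20 mmHg).

def is_chronic_wound(weeks_without_progress_on_soc: float) -> bool:
    return weeks_without_progress_on_soc >= 4.0

def oxygen_supports_healing(tissue_po2_mmhg: float) -> bool:
    return tissue_po2_mmhg > 25.0

# A wound stalled for 6 weeks at 15 mmHg tissue pO2:
chronic = is_chronic_wound(6.0)              # True
adequate_o2 = oxygen_supports_healing(15.0)  # False
```

These thresholds are screening heuristics from the cited literature; actual wound assessment considers etiology, perfusion, and infection status as described above.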
Below, we explore the current literature regarding available or developing cellular therapies that employ regulatory T-cells (T-regs), stem cells, macrophages, fibroblasts, and platelets in addition to immunotherapies for chronic, non-healing wounds (Table ). Moreover, understanding the mechanisms of wound healing and their interactions with underlying diseases is crucial for the development of effective therapeutic interventions. T-regs: T-regs are a heterogeneous subset of the cluster of differentiation (CD)4 + helper T-cells that maintain immune tissue homeostasis by promoting self-tolerance, dampening excessive immune responses, and suppressing autoimmunity [ – ]. They are classically defined by the expression of CD4, CD25, and forkhead box protein 3 [ , , ]. Due to their heterogeneity, T-regs exert their immune tolerance and suppression via multiple mechanisms: secretion of anti-inflammatory factors such as interleukin (IL)-10, transforming growth factor-β (TGF-β), and IL-35; suppression of tumor necrosis factor-α (TNF-α), IL-6, and interferon (IFN)-γ release; and the quelling of T-cell activity and proliferation via cytotoxic T-lymphocyte antigen-4 . Within the context of wound healing, it is theorized that T-regs may aid non-healing wounds in terminating the inflammatory phase and enabling progression to the proliferative phase to continue the wound healing sequence . Lewkowicz et al. indicated that lipopolysaccharide-activated T-regs can directly inhibit the inflammatory activity of polymorphonuclear neutrophils by inducing their production of IL-10, TGF-β, and heme oxygenase-1 while inhibiting the production of IL-6 in vitro. Moreover, T-regs have been shown to directly suppress the inflammatory activity of macrophages in vitro by hampering their ability to secrete TNF-α and IL-6, in addition to reducing their recruitment . The anti-inflammatory activity is likely stimulated by epidermal growth factor (EGF), as Zaiss et al. 
showed that T-reg suppressive activity is markedly enhanced upon exposure to EGF. These positive findings have also been observed in pre-clinical animal models. Nosbaum et al. used an in vivo murine model to highlight the necessity of T-regs for appropriate wound closure. The group found that T-regs were activated and preferentially accumulated in wounded skin during the inflammatory phase of wound healing, while depletion of T-regs significantly diminished wound closure. T-regs mediated this effect by attenuating IFN-γ-secreting effector T-cells and, consequently, the accumulation of pro-inflammatory LyC6 + macrophages in the wound. Similarly, Haertel et al. found that depletion of T-regs caused poor vessel formation, reduced contraction, and obstructed reepithelization in both wild-type and activin-transgenic mice. These findings strongly support the necessary role of T-regs in normal wound healing. Furthermore, these data suggest that the topical administration of T-reg-based cell therapy could be promising for superficial and deep tissue chronic wounds, particularly for those wounds that persist in the inflammatory phase, given T-regs’ role in anti-inflammatory processes. However, although pre-clinical studies are enlightening, significantly more basic science research regarding the precise role of T-regs in wound healing and the mechanism by which they mediate it is required. Knoedler et al. discussed future directions of T-regs, noting that chimeric antigen receptor-equipped T-regs may deliver promising treatment options for wound healing due to the proven track record of chimeric antigen receptor T-regs in controlling alloimmune-mediated rejection in human skin grafts. However, potential side-effects such as T-reg overstimulation or exhaustion have to be further investigated pre-clinically prior to clinical application. 
With all this in mind, the limitations of adoptive T-reg cell therapies include the use of autologous cells from patients with chronic wounds whose T-regs may be dysfunctional, the time-consuming expansion of T-regs in vitro , cell delivery and survival in the inflammatory wound environment, and the lack of standard generation protocols . Consequently, there are currently no registered clinical trials or Food and Drug Administration (FDA)-approved therapies for the use of adoptive T-reg cell therapy in the treatment of chronic, non-healing wounds. Stem cells: A stem cell is defined as an immature, unspecialized cell that can undergo self-propagation and has the ability to differentiate into more than one cell lineage, often to replace aged or damaged tissue . Two fundamental ideas position stem cells as a promising therapeutic prospect for the treatment of chronic wounds. First, the ability of stem cells to spatially and temporally respond to fluctuating microenvironments by secreting growth factors and differentiating into depleted cell types suggests that they may be more effective at restoring homeostasis than other therapies. This is particularly important as the presence of hypoxia, poor perfusion, microbial growth, and inflammation generally precludes many therapies from being effective [ , , ]. Second, the observation that stem cell populations are depleted in non-healing wounds supports the notion that their replacement may be advantageous . Pluripotent stem cells (PSCs) are generally divided into two categories: embryonic stem cells (ESCs) and induced PSCs (iPSCs). ESCs are PSCs derived from the inner cell mass of a blastocyst and have the capacity to differentiate into any tissue of the three germ layers . However, due to the controversial ethical dilemma surrounding ESC research and clinical applicability, iPSCs have become their functional replacement. 
iPSCs, which phenotypically resemble ESCs, are PSCs capable of differentiating into any tissue of the three germ layers. Unlike ESCs, however, they are derived by reprogramming adult somatic cells (most commonly keratinocytes or fibroblasts) with a cocktail of four transcription factors . Because iPSCs are easy to generate and present low immunogenicity when self-sourced, they possess all the benefits of ESCs while avoiding the moral dilemma. Christiano et al. used iPSCs to generate an autologous 3D human skin equivalent for wound healing, surgical reconstruction, or skin diseases, and Takagi et al. developed an artificial 3D allograft from iPSCs which contained functional skin appendages such as hair follicles and sebaceous glands. In fact, cells derived from iPSCs may contribute to extracellular matrix (ECM) deposition, angiogenesis, cell proliferation, and immunoregulation . Sasson et al. investigated TNF-α-preconditioned human iPSCs in terms of cellular viability and secretome when integrated into a 3D collagen scaffold. Their results showed improved cell viability as well as a significant increase in vascular endothelial growth factor (VEGF) expression in the preconditioned human iPSCs, suggesting that augmented cellular viability combined with secretion of paracrine factors could improve wound healing by promoting cell migration (Fig. ). Despite the promising landscape, there are currently no ESC or iPSC therapies that are commercially available for non-healing wounds or, in fact, for any disease. As mentioned previously, ESCs are unlikely to see clinical application due to ethical considerations. And while significant progress has been made in characterizing iPSCs and their application, their safety profile remains uncertain for clinical use, particularly due to their tumorigenic potential . 
Furthermore, determining the optimal platform for delivering iPSCs to chronic wounds and characterizing the ideal microenvironment to ensure their survival is crucial. Innovative approaches to eliminate undifferentiated cells with tumorigenic potential before transplantation are also required. Finally, there are considerations to be made with iPSCs derived from patients with different health profiles. We conclude that primate studies are likely necessary before iPSCs can make the jump to clinical use in humans. Adult stem cells (ASCs), also known as tissue-specific stem cells, are stem cells that reside in and can be isolated from non-fetal tissue. In contrast to PSCs, ASCs are multipotent, meaning their progeny is restricted to the lineages of a single germ layer. One of the principal roles of ASCs, then, is to maintain the integrity of the tissues in which they reside by replacing old or damaged cells. Although many distinct types of ASCs exist, mesenchymal stem cells (MSCs) dominate the stem cell-based therapy field with regard to non-healing wound treatment. MSCs are of mesodermal origin and are found in many tissues throughout the body, including bone marrow, adipose tissue, umbilical cord blood, and Wharton's jelly. Bian et al. present a comprehensive review of the now-established effects that MSCs exert in vitro, including secretion of proliferative and angiogenic growth factors, anti-inflammatory and immunoregulatory activity, and differentiation into mesodermal lineage cell types such as fibroblasts, keratinocytes, and endothelial cells. The two most prevalent types of MSCs in wound healing studies are bone marrow mesenchymal stem cells (BMSCs) and adipose-derived stem cells (ADSCs) (Fig. ). Falanga et al. previously reported that topically applied autologous BMSCs significantly accelerated and improved wound healing in both non-healing human wounds and diabetic murine models.
However, the difficulty in sourcing BMSCs, combined with declining red marrow content as patients age, has made finding an alternative source of ASCs compelling. ADSCs have consequently become an increasingly relevant object of research, as they are abundant in adipose tissue, easily harvested from lipoaspirates, and not subject to the same limitations as BMSCs. Brembilla et al. provide an excellent review of both pre-clinical and clinical data supporting the idea that ADSCs can improve reepithelization, promote angiogenesis, and accelerate wound healing. However, they also highlight the need for further research and well-designed clinical trials before the effectiveness of ADSCs can be conclusively appraised. To date, the only FDA-approved, stem cell-based therapy for chronic wounds is Grafix, a cryopreserved amniotic membrane with a 3D-ECM scaffold containing growth factors, fibroblasts, endothelial cells, and MSCs. Two clinical studies, including a large randomized controlled trial (RCT), demonstrate the safety and efficacy of Grafix treatment (n = 50 and n = 66) in diabetic foot ulcers, even in those that are treatment refractory. More Grafix-treated patients achieved wound closure than SOC-treated patients (62% vs 21.3%), and did so up to 4 weeks faster, while also experiencing fewer complications. Ultimately, stem cell therapies are a promising avenue for chronic wound treatment, with the current literature largely focusing on MSC-based treatments. However, although several registered stem cell clinical trials are underway, the dearth of commercial drugs clearly suggests further research is necessary.

Macrophages

Macrophages are long-lived phagocytic cells of the innate immune system that play specialized roles in immune protection, inflammation, and tissue homeostasis. Tissue-resident macrophages of the skin are broadly divided into two categories: epidermal macrophages (also known as Langerhans cells) and dermal macrophages.
Monocyte-derived macrophages are also recruited during periods of infection, inflammation, or injury, serving an essential role in coordinating both the immune response and the wound healing process. Macrophages are phenotypically diverse and exhibit a high degree of plasticity, which allows them to adapt to the microenvironment in which they reside. However, despite their essential role in tissue homeostasis, dysregulation of macrophage activity may heavily contribute to inflammatory and autoimmune diseases. This is particularly evident in chronic wounds, where inflammatory macrophages play a significant role in the persistence of inflammation and, consequently, in impaired wound healing. In healthy skin, injuries lead to the recruitment of both tissue-resident and monocyte-derived macrophages to the site of injury, where they help orchestrate the three-phase wound healing response. Immediately after an injury occurs, the inflammatory phase begins. Macrophages are polarized to an M1 (inflammatory) phenotype as a means of promoting an anti-pathogenic microenvironment. Following pathogen clearance, macrophages transition towards a spectrum of M2 (inflammation-resolving) phenotypes as the wound becomes ready to enter the proliferative phase and, eventually, the tissue-remodeling phase. Because non-healing wounds tend to be caused by persistence of the inflammatory phase, influencing the behavior of macrophages has been at the forefront of wound healing research. This can be accomplished by either attenuating the activity of M1 macrophages or enhancing the activity of M2 macrophages. Therapies are often grouped into two broad categories: pharmacological therapies and ex vivo transplantation of macrophages. Ashcroft et al. reported improved rates of wound closure after the application of infliximab, a neutralizing antibody against TNF-α. Infliximab was applied topically in 8 patients with 14 distinct chronic wounds.
Seven of the 8 patients enrolled had positive clinical outcomes, with 12 of the 14 individual wounds responding to treatment. Five treatment-refractory ulcers healed within 4 to 8 weeks, and 4 ulcers showed a reduction in size of 75% after 8 weeks. Non-healed wounds included 2 ulcers that showed no reduction after 8 weeks and a single ulcer that did not display any response to treatment. Pharmacological therapies that stimulate the activity of M2 macrophages have also been explored. Agonists of the peroxisome proliferator-activated receptor, such as GQ-11, have been suggested to improve reepithelization and collagen deposition by repolarizing inflammatory M1 macrophages towards an M2 state in a diabetic murine model. Silva et al. demonstrated that the application of GQ-11 resulted in a 30% increase in wound closure 10 d post-wounding in diabetic mice relative to their vehicle control and pioglitazone groups. It should be noted, however, that this improvement occurred only during the later stages of wound closure and that this positive closure effect was not observed in non-diabetic mice. Although several disease models show promising findings for macrophage transplantation, the evidence of benefit in wound healing is unclear. Due to the highly concerted and convoluted M1-to-M2 progression, the successful administration of polarized macrophages depends on appropriate timing within the sequence of wound healing. Gu et al. presented evidence that administration of activated M1 macrophages within 1 d of injury accelerated wound healing. In contrast, Jetten et al. found that administration of IL-4/IL-10-stimulated M2 macrophages during the early inflammatory phase was detrimental. Dreymueller et al. corroborated these findings, with M1 transplantation sparking the wound healing process when delivered early on, but M2 transplantation significantly delaying recovery in a chronic wound murine model.
These groups concluded that, at least in the context of cutaneous healing in mice, attempting to modify the wound environment through the external introduction of M2-polarized macrophages was not an effective therapeutic approach. Instead, therapeutic strategies directly targeting pro-inflammatory factors like IL-1β or TNF-α appear more promising. The data presented above suggest that SOC treatments, including debridement, disinfection, and appropriate dressing, are more beneficial than macrophage transplantation. To date, there are no FDA-approved drugs that pharmacologically target or transplant macrophages specifically for wound healing. We believe further research is warranted to better understand the complex inflammatory microenvironment of chronic wounds and to learn how macrophage transplantation could become therapeutically viable.

Fibroblasts

Fibroblasts are a heterogeneous group of mesenchymal cells responsible for creating and maintaining the ECM. The ECM is essential in establishing the internal milieu, providing structural integrity, serving as a growth factor reservoir, and regulating cell activity. Fibroblasts also play a direct role in tissue homeostasis by responding to signaling cues, secreting growth factors and inflammatory cytokines, and exerting contractile forces. Despite all of these functions, fibroblasts usually remain in a quiescent state until homeostasis is disturbed. The central role that fibroblasts play in tissue maintenance necessitates that they also orchestrate the return to homeostasis after injury. As a wound occurs and enters the inflammatory phase, fibroblasts exit quiescence and become myofibroblasts, an inflammatory, secretory, and highly contractile phenotype that helps coordinate wound closure. Myofibroblasts migrate into the wound bed, generating large amounts of ECM and releasing cytokines and growth factors. Myofibroblast contraction further helps compact the wound.
As with inflammatory macrophages, dysregulation and persistence of myofibroblast activity can maintain wounds in a chronic, non-healing state. Their central role in regulating, or dysregulating, the wound healing process consequently makes them an attractive therapeutic candidate. As with stem cells, the broad depletion of growth factors in chronic wound beds suggests that supplementation could serve as an effective therapy. While several growth factors are known to stimulate fibroblast activity, proliferation, and survival, the following three are the most studied: platelet-derived growth factor (PDGF), the fibroblast growth factor (FGF) family, and TGF-β. As demonstrated by Tassi et al., fibroblast growth factor-binding protein 1 (FGF-BP1) enhances angiogenesis, granulation tissue formation, and both fibroblast and keratinocyte migration in their murine wound healing model. A significantly accelerated wound closure rate was observed in transgenic mice expressing BP1, while FGF2-null mice showed delayed wound closure, suggesting that BP1 plays an important role in FGF activity. By day 6, BP1-expressing wounds were nearly closed, while the equivalent controls did not demonstrate epithelial closure. Despite the challenges of studying TGF-β due to its pleiotropic effects, in vitro and in vivo studies provide overwhelming evidence of its necessity for re-epithelialization, inflammation, angiogenesis, and granulation tissue formation during the wound healing process. Despite these successful pre-clinical outcomes, only a handful of growth factor-based therapies have become FDA-approved and are used clinically for chronic wound treatment. In fact, becaplermin, a recombinant human PDGF approved for diabetic neuropathic ulcers, is the only growth factor that is FDA-approved and in use to date.
Compared with a placebo gel, application of becaplermin gel at 100 µg/g demonstrated a notable improvement in wound closure outcomes (n = 382 patients). Specifically, it increased the rate of complete wound closure by 43% and reduced the time required to achieve complete wound closure by 32%. Adverse events reported during treatment or within a 3-month follow-up period were comparable in type and frequency across all treatment groups. However, there are therapies that are commercially available outside of the United States. These include Fiblast Spray [recombinant human basic fibroblast growth factor (bFGF), available in Japan], Heberprot and Easyef (recombinant human EGF formulations available largely in East Asia), and Regen-D150 (recombinant human EGF gel available in India). It is postulated that the short half-life of growth factors, particularly in the inflammatory microenvironment of chronic wounds, our inability to regulate growth factors temporally, and the relatively limited physiological scope of single-growth-factor therapies have hampered their clinical use. Research is underway to develop biomaterials that can avoid these barriers and deliver growth factors to chronic wounds more efficiently. As with other cellular therapies, the direct transplantation of live fibroblasts allows for the regulated release of growth factors and cytokines based on the cells' ability to sense and respond to the microenvironment. Fibroblasts are often embedded in scaffolds that include growth factors, keratinocytes, or other cellular components that support their survival and ability to improve wound healing. Importantly, some studies have found that both autologous and allogeneic fibroblast-based sheets, whether dry, frozen, or non-frozen, had comparable effects on improving wound healing outcomes.
This is clinically impactful for a few reasons: allogeneic fibroblast sheets are quality-tested and standardized, present considerable time and cost savings, and offer an overall expedited treatment process compared to autologous sheets. Clinically, 8 fibroblast-based cell therapies are in use for wound management to date. Notable examples include Dermagraft, Apligraf, and TheraSkin. A prospective, multicenter RCT found complete wound healing in 34% (n = 64) of Dermagraft-treated patients compared to 31% (n = 56) of SOC-treated patients. The study concluded that Dermagraft is comparable, but not necessarily superior, in outcomes relative to SOC therapies. On the other hand, an international, multicenter RCT found that patients with type 1 or type 2 diabetes undergoing Apligraf treatment showed a significantly higher wound closure rate after 12 weeks (51.5% response rate, n = 33) relative to SOC (26.3% response rate, n = 39). Finally, a multicenter, retrospective, propensity-matched cohort study analyzed the efficacy of TheraSkin (n = 1997) compared to SOC (n = 1997) and found that TheraSkin treatment led to a statistically significant increase in healing rate at 90 d, between 90 and 179 d, and beyond 180 d. These data indicate that combined cellular therapies show great promise for the treatment of chronic, non-healing wounds.

Platelets

Platelets are small, anucleated cell fragments that originate from megakaryocytes in the bone marrow and are essential for the wound healing process. Besides being directly responsible for hemostasis, the first phase of wound healing, via plug formation, they also orchestrate the start of the inflammatory phase by serving as adhesion points for recruited neutrophils and monocytes and by secreting chemokine (C-X-C motif) ligand 4, IL-1β, and other inflammatory mediators.
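For readers weighing trial results like those quoted above, the reported response proportions can be converted into standard effect sizes. The following minimal Python sketch is illustrative only: the function names are our own, and the inputs are the Apligraf response rates reported in the text (51.5% closure with treatment vs 26.3% with SOC).

```python
# Illustrative effect-size arithmetic for dichotomous trial outcomes.
# Inputs are proportions of patients achieving wound closure.

def absolute_risk_increase(p_treat: float, p_control: float) -> float:
    """Extra proportion of patients healed under treatment vs control."""
    return p_treat - p_control

def risk_ratio(p_treat: float, p_control: float) -> float:
    """Fold-change in the probability of healing with treatment."""
    return p_treat / p_control

def number_needed_to_treat(p_treat: float, p_control: float) -> float:
    """Patients treated per one additional healed wound (1 / ARI)."""
    return 1.0 / absolute_risk_increase(p_treat, p_control)

# Apligraf rates as reported in the trial described above:
ari = absolute_risk_increase(0.515, 0.263)   # 0.252 (25.2 percentage points)
rr = risk_ratio(0.515, 0.263)                # ~1.96-fold higher closure rate
nnt = number_needed_to_treat(0.515, 0.263)   # ~3.97, i.e. about 4 patients
```

Under the reported rates, roughly 4 patients would need Apligraf treatment for one additional wound closure relative to SOC; such derived figures are only as reliable as the underlying trial proportions.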
Additionally, and perhaps most importantly, platelets serve as an enormous source of growth factors, notably PDGF, VEGF, FGF, and EGF, which aid in the progression of normal wound healing. Platelet-based therapies, collectively named platelet concentrates, are accordingly based on the knowledge that wounds are depleted of growth factors, that these growth factors can be sourced from platelets, and that the variety of growth factors platelets contain clinically outperforms the use of any single factor by itself. Platelet concentrates are broadly divided into 4 distinct groups: platelet-rich plasma (PRP), platelet-rich fibrin (PRF), platelet lysate (PL), and platelet extracellular vesicles (PEVs). PRP is a first-generation platelet concentrate derived autologously from a patient's blood and is the most widely known of the four. It consists of a plasma fraction with a higher concentration of platelets than whole blood. Its ease of generation, abundance of growth factors, and low immunogenicity make it a popular therapeutic choice in regenerative medicine, particularly in wound healing. Farghali et al. tested subcutaneous PRP infiltration to assess its effect on full-thickness cutaneous wounds in dogs. They found that PRP-treated wounds had higher rates of re-epithelization, increased contractility, more collagen deposition, and accelerated granulation tissue maturation, while also displaying reduced scarring. Clinically, because it is derived from a patient's own blood, PRP is not considered a drug and is therefore not subject to FDA approval. Rather, the use and application of PRP is at the discretion of a patient's healthcare provider. Its effectiveness in treating chronic wounds, however, is still debated. Qu et al. performed a meta-analysis of RCTs and found evidence that autologous PRP may improve healing in diabetic non-healing wounds, but insufficient evidence for venous or pressure ulcers.
PRF is a second-generation platelet concentrate that builds and improves upon the success of PRP. The generation procedure is similar to that of PRP; however, it avoids blood collection tubes coated with anticoagulants and instead allows a fibrin mesh to form in the tube, in which platelets and leukocytes become embedded at elevated concentrations. The entrapment of cells is thought to prolong and help regulate the pace of growth factor release. Like PRP, PRF is not subject to FDA approval if sourced from a patient's own blood. Pinto et al. designed a prospective, self-controlled study [n = 44 patients, with venous leg ulcers (n = 28), diabetic foot ulcers (n = 9), pressure ulcers (n = 5), or complex wounds (n = 2)] and observed that all but 5 of the 44 patients with these treatment-refractory chronic ulcers attained wound closure when treated with PRF. Moreover, the 5 remaining patients in whom PRF treatment did not completely heal the wound had ulcers > 10 cm² and quit treatment before wound closure. However, their wound recovery was similar to that of other large wounds that successfully healed, suggesting full recovery was attainable had therapy not been discontinued. It should be noted that because the study was self-controlled, the strength of this evidence is limited. Nevertheless, the group concluded that PRF is a safe, effective, and cheap therapeutic option for these treatment-refractory patients. A more extensive review by Miron et al. found that out of a total of 31 clinical studies, 18 (58%) reported positive wound-healing outcomes linked to PRF treatment, even when compared to control groups. Moreover, 27 (87%) of the 31 clinical studies supported the utilization of PRF in the context of soft tissue regeneration and wound healing across various medical procedures.
This systematic review underscores the favorable impact of PRF on wound healing following regenerative therapy for managing diverse soft tissue issues, including chronic non-healing wounds, encountered in both medical and dental practice. Finally, PL and PEVs are both relatively new platelet concentrates that are understudied relative to PRP and PRF but have the potential to reach clinical application. PL is generated by freezing and thawing platelets, allowing them to lyse and release their intracellular contents. da Fonseca et al. extensively reviewed the use of PL in different diseases, recommending its use in clinical practice. Additionally, Barsotti et al. broadly characterized the effect of platelet concentrates across several cell types in the wound bed by adding variable PL concentrations. They found that fibroblast migration, keratinocyte epithelization, human umbilical vein endothelial cell viability, proliferation, and angiogenic capacity, and monocyte chemotaxis were all elevated upon exposure to PL. However, it should be noted that Bonferoni et al. compared the effectiveness of PRF to PL, finding that PRF yields better wound healing outcomes. The skew towards PRP/PRF in the literature suggests that these therapies may be more clinically applicable than PL. PEVs are collected by stimulating platelets and harvesting the extracellular vesicles they secrete. Interestingly, PEVs are quite heterogeneous, and their composition reflects the stimuli applied to the platelets. Guo et al. showed that exosomes derived from PRP were able to induce migration and proliferation of endothelial cells and fibroblasts to improve wound healing both in vitro and in vivo in a full-thickness wound model in diabetic rats (n = 36). Studies regarding their clinical use, however, are still early-stage and sparse. In fact, the first and only human clinical trial investigating the safety of allogeneic PEVs in non-healing wounds was recently performed by Johnson et al.
Because the trial focused on safety, no difference in wound healing capacity was found across experimental and control groups; rather, the authors suggest that future studies investigate clinical concentrations that are effective for wound regeneration.

T-regs

T-regs are a heterogeneous subset of the cluster of differentiation (CD)4+ helper T-cells that maintain immune tissue homeostasis by promoting self-tolerance, dampening excessive immune responses, and suppressing autoimmunity. They are classically defined by the expression of CD4, CD25, and forkhead box protein 3. Owing to their heterogeneity, T-regs exert immune tolerance and suppression via multiple mechanisms: secretion of anti-inflammatory factors such as interleukin (IL)-10, transforming growth factor-β (TGF-β), and IL-35; suppression of tumor necrosis factor-α (TNF-α), IL-6, and interferon (IFN)-γ release; and the quelling of T-cell activity and proliferation via cytotoxic T-lymphocyte antigen-4. Within the context of wound healing, it is theorized that T-regs may help non-healing wounds terminate the inflammatory phase and progress to the proliferative phase, allowing the wound healing sequence to continue. Lewkowicz et al. showed that lipopolysaccharide-activated T-regs can directly inhibit the inflammatory activity of polymorphonuclear neutrophils by inducing their production of IL-10, TGF-β, and heme oxygenase-1 while inhibiting their production of IL-6 in vitro. Moreover, T-regs have been shown to directly suppress the inflammatory activity of macrophages in vitro by hampering their ability to secrete TNF-α and IL-6, in addition to reducing their recruitment. This anti-inflammatory activity is likely stimulated by epidermal growth factor (EGF), as Zaiss et al. showed that T-reg suppressive activity is markedly enhanced upon exposure to EGF. These positive findings have also been observed in pre-clinical animal models. Nosbaum et al.
used an in vivo murine model to highlight the necessity of T-regs for appropriate wound closure. The group found that T-regs were activated and preferentially accumulated in wounded skin during the inflammatory phase of wound healing, while depletion of T-regs significantly diminished wound closure. T-regs mediated this effect by attenuating IFN-γ-secreting effector T-cells and, consequently, the accumulation of pro-inflammatory Ly6C+ macrophages in the wound. Similarly, Haertel et al. found that depletion of T-regs caused poor vessel formation, reduced contraction, and obstructed reepithelization in both wild-type and activin-transgenic mice. These findings strongly support the necessary role of T-regs in normal wound healing. Furthermore, these data suggest that topical administration of T-reg-based cell therapy could be promising for superficial and deep tissue chronic wounds, particularly for wounds that persist in the inflammatory phase, given the role of T-regs in resolving inflammation. However, although pre-clinical studies are enlightening, significantly more basic science research is required regarding the precise role of T-regs in wound healing and the mechanisms by which they mediate it. Knoedler et al. discussed future directions for T-regs, noting that chimeric antigen receptor (CAR)-equipped T-regs may deliver promising treatment options for wound healing, given CAR-T-regs' proven track record in controlling alloimmune-mediated rejection in human skin grafts. However, potential side effects such as T-reg overstimulation or exhaustion have to be investigated further pre-clinically prior to clinical application.
With all this in mind, the limitations of adoptive T-reg cell therapies include the use of autologous cells from patients with chronic wounds whose T-regs may be dysfunctional, the time-consuming expansion of T-regs in vitro , cell delivery and survival in the inflammatory wound environment, and lack of standard generation protocols . Consequently, there are currently no registered clinical trials nor Food and Drug Administration (FDA)-approved therapies for the use of adoptive T-reg cell therapy for the treatment of chronic, non-healing wounds. A stem cell is defined as an immature, unspecialized cell that can undergo self-propagation and has the ability to differentiate into more than one cell lineage, often to replace aged or damaged tissue . Two fundamental ideas position stem cells as a promising therapeutic prospect for the treatment of chronic wounds. First, the ability of stem cells to spatially and temporally respond to fluctuating microenvironments by secreting growth factors and differentiating into depleted cell types suggests that they may be more effective at restoring homeostasis than other therapies. This is particularly important as the presence of hypoxia, poor perfusion, microbial growth, and inflammation generally precludes many therapies from being effective [ , , ]. Second, the observation that stem cell populations are depleted in non-healing wounds supports the notion that their replacement may be advantageous . Pluripotent stem cells (PSCs) are generally divided into two categories: embryonic stem cells (ESCs) and induced PSCs (iPSCs). ESCs are PSCs derived from the inner cell mass of a blastocyst and have the capacity to differentiate into any tissue of the three germ layers . However, due to the controversial ethical dilemma surrounding ESC research and clinical applicability, iPSCs have become their functional replacement. 
iPSCs, which phenotypically resemble ESCs, are PSCs capable of differentiating into any tissue of the three germ layers. Unlike ESCs, however, they are derived by reprogramming adult somatic cells (most commonly keratinocytes or fibroblasts) with a cocktail of four transcription factors . Because iPSCs are easy to generate and present low immunogenicity when self-sourced, they possess all the benefits of ESCs while avoiding the moral dilemma. Christiano et al. used iPSCs to generate an autologous 3D human skin equivalent for wound healing, surgical reconstruction, or skin diseases. And Takagi et al. developed an artificial, 3D allograft from iPSCs which contained functional skin appendages such as hair follicles and sebaceous glands. In fact, cells derived from iPSCs may contribute to extracellular matrix (ECM) deposition, angiogenesis, cell proliferation, and immunoregulation . Sasson et al. have investigated TNF-α preconditioned human iPSC in terms of cellular viability and secretome when integrated into a 3D collagen scaffold. Their results showed improved cell viability as well as a significant increase in vascular endothelial growth factor (VEGF) expression in the preconditioned human iPSCs meaning that augmented cellular viability in combination with secretion of paracrine factors could lead to improved wound healing due to migration (Fig. ). Despite the promising landscape, there are currently no ESC or iPSC therapies that are commercially available for non-healing wounds or, in fact, for any disease. As mentioned previously, ESCs are unlikely to see clinical application due to ethical considerations. And while significant progress has been made in characterizing iPSCs and their application, their safety profile remains uncertain for clinical use, particularly due to their tumorigenic potential . 
Furthermore, determining the optimal platform for delivering iPSCs to chronic wounds and characterizing the ideal microenvironment to ensure their survival is crucial. Innovative approaches to eliminate undifferentiated cells that possess tumorigenic potential before cellular transplantation are also required. Finally, there are considerations to be made with iPSCs derived from patients with different health profiles. We conclude that primate studies are likely necessary before iPSCs can make the jump towards clinical use in humans . Adult stem cells (ASCs) also known as tissue-specific stem cells, are stem cells that reside in and can be isolated from non-fetal tissue . In contrast to PSCs, ASCs are multipotent, meaning their progeny is restricted to the lineage of a single layer . One of the principal roles of ASCs, then, is to maintain the integrity of the tissues they reside in by replacing old or damaged cells. Although many distinct types of ASCs exist, mesenchymal stem cells (MSCs) dominate the stem cell-based therapy field with regard to non-healing wound treatment. MSCs are of mesodermal origin and are found in many tissues throughout the body, including bone marrow, adipose tissue, umbilical cord blood, Wharton’s Jelly, and many other tissues. Bian et al. present a comprehensive review of the now-established effects that MSCs exert in vitro, including the secretion of proliferative and angiogenic growth factors, exerting anti-inflammatory and immunoregulatory effects, and differentiating into mesodermal lineage cell types such as fibroblasts, keratinocytes, and endothelial cells. The two most prevalent types of MSCs in wound healing studies are bone marrow mesenchymal stem cells (BMSCs) and adipose-derived stem cells (ADSCs) (Fig. ). Falanga et al. had previously reported that topically-applied autologous BMSCs were able to significantly accelerate and improve wound healing in both non-healing human wounds and diabetic murine models. 
However, the difficulty in sourcing BMSCs, combined with reduced red marrow content as patients age, has made finding an alternative source of ASCs compelling . ADSCs have consequently become an increasingly relevant object of research as they are abundant in adipose tissue, easily harvested from lipoaspirates, and not subject to the same limitations as BMSCs . Brembilla et al. provide an excellent review of both pre-clinical and clinical data which supports the idea that ADSCs can improve reepithelization, promote angiogenesis, and accelerate wound healing. However, they also highlight the need for further research and well-designed clinical trials before their effectiveness can be conclusively appraised. To date, the only FDA-approved, stem cell-based therapy for chronic wounds is Grafix, a cryopreserved amniotic membrane with a 3D-ECM scaffold containing growth factors, fibroblasts, endothelial cells, and MSCs . Two clinical studies, including a large randomized controlled trial (RCT), demonstrate the safety and efficacy of Grafix treatment ( n = 50 and n = 66) in diabetic foot ulcers, even in those that are treatment refractory. Patients who were treated with Grafix achieved wound closure (62%) up to 4 weeks faster than the SOC-treated patients (21.3%) while also experiencing fewer complications . Ultimately, stem cell therapies are a promising avenue for chronic wound treatment, with the current literature largely focusing on MSC-based treatments . However, although several registered stem cell clinical trials are underway, the dearth of commercial drugs clearly suggests further research is necessary . Macrophages are long-lived phagocytic cells of the innate immune system that play specialized roles in immune protection, inflammation, and tissue homeostasis . Tissue-resident macrophages of the skin are broadly divided into two categories: epidermal macrophages (also known as Langerhans cells) and dermal macrophages . 
Monocyte-derived macrophages are also recruited during periods of infection, inflammation, or injury, serving an essential role in coordinating both the immune response and the wound healing process. Macrophages are phenotypically diverse and exhibit a high degree of plasticity that allows them to adapt to the microenvironment in which they reside. However, despite their essential role in tissue homeostasis, dysregulation of macrophage activity may heavily contribute to inflammatory and autoimmune diseases. This is particularly evident in chronic wounds, where inflammatory macrophages play a significant role in the persistence of inflammation and, consequently, in impairing wound healing. In healthy skin, injuries lead to the recruitment of both tissue-resident and monocyte-derived macrophages to the site of injury, where they help orchestrate the three-phase wound healing response. Immediately after an injury occurs, the inflammatory phase begins. Macrophages are polarized to an M1 (inflammatory) phenotype as a means of promoting an anti-pathogenic microenvironment. Following pathogen clearance, macrophages transition towards a spectrum of M2 (inflammation-resolving) phenotypes as the wound becomes ready to enter the proliferative phase and, eventually, the tissue-remodeling phase. Because non-healing wounds tend to be caused by the persistence of the inflammatory phase, influencing the behavior of macrophages has been at the forefront of wound healing research. This can be accomplished by either attenuating the activity of M1 macrophages or enhancing the activity of M2 macrophages. Therapies are often grouped into two broad categories: pharmacological therapies or ex vivo transplantation of macrophages. Ashcroft et al. reported improved rates of wound closure after the application of infliximab, a neutralizing antibody against TNF-α. Infliximab was applied topically in 8 patients with 14 distinct chronic wounds.
Seven of the 8 patients enrolled had positive clinical outcomes, with 12 of the 14 individual wounds responding to treatment. Five treatment-refractory ulcers healed within 4 to 8 weeks, and 4 ulcers showed a reduction in size of 75% after 8 weeks. Non-healed wounds included 2 ulcers which showed no reduction after 8 weeks and a single ulcer that did not display any response to treatment. Pharmacological therapies that stimulate the activity of M2 macrophages have also been explored. Agonists of the peroxisome proliferator-activated receptor, such as GQ-11, have been suggested to improve reepithelization and collagen deposition by repolarizing inflammatory M1 macrophages towards an M2 state in a diabetic murine model. Silva et al. demonstrated that the application of GQ-11 resulted in a 30% increase in wound closure 10 d post-wounding in diabetic mice relative to their vehicle control and pioglitazone groups. It should be noted, however, that this improvement occurred only during the later stages of wound closure and that this positive closure effect was not observed in non-diabetic mice. Although several disease models show promising findings for macrophage transplantation, the evidence of benefit in wound healing is unclear. Due to the highly concerted and convoluted M1-to-M2 progression, the successful administration of polarized macrophages depends on the appropriate temporal application during the sequence of wound healing. Gu et al. presented evidence that the administration of activated M1 macrophages within 1 d of injury accelerated wound healing. Moreover, Jetten et al. found that administration of IL-4/IL-10-stimulated M2 macrophages during the early inflammatory phase was detrimental. Dreymueller et al. corroborated these findings, with M1 transplantation sparking the wound healing process when delivered early on, but M2 transplantation significantly delaying recovery in a chronic wound murine model.
These groups concluded that, at least in the context of cutaneous healing in mice, attempting to modify the wound environment through the external introduction of M2-polarized macrophages was not an effective therapeutic approach. Instead, therapeutic strategies directly targeting pro-inflammatory factors like IL-1β or TNF-α appear more promising. The data presented above suggest that SOC treatments, including debridement, disinfection, and appropriate dressing, are more beneficial than macrophage transplantation. To date, there are no FDA-approved drugs that pharmacologically target macrophages, nor approved macrophage transplantation therapies, specifically for wound healing. We believe further research is warranted to better understand the complex inflammatory microenvironment of chronic wounds and to learn how macrophage transplantation in chronic wounds could become therapeutically viable.

Fibroblasts are a heterogeneous group of mesenchymal cells responsible for creating and maintaining the ECM. The ECM is essential in establishing the internal milieu, providing structural integrity, serving as a growth factor repertoire, and regulating cell activity. Fibroblasts also play a direct role in tissue homeostasis by responding to signaling cues, secreting growth factors and inflammatory cytokines, and exerting contractile force. Despite all of these functions, fibroblasts usually remain in a quiescent state until homeostasis is disturbed. The central role that fibroblasts play in tissue maintenance necessitates that they also orchestrate the return to homeostasis during injury. As a wound occurs and enters the inflammatory phase, fibroblasts exit quiescence and become myofibroblasts, an inflammatory, secretory, and highly contractile phenotype that helps coordinate wound closure. Myofibroblasts migrate into the wound bed, generating large amounts of ECM and releasing cytokines and growth factors. Myofibroblast contraction further helps in wound compaction.
As with inflammatory macrophages, dysregulation and persistence of myofibroblast activity can maintain wounds in a chronic, non-healing state. Their central role in regulating, or dysregulating, the wound healing process consequently makes them an attractive therapeutic candidate. As with stem cells, the broad depletion of growth factors in chronic wound beds suggests that supplementation could serve as an effective therapeutic. While several growth factors are known to stimulate fibroblast activity, proliferation, and survival, the following three are the most studied: platelet-derived growth factor (PDGF), the fibroblast growth factor (FGF) family, and TGF-β. As demonstrated by Tassi et al., fibroblast growth factor-binding protein 1 (FGF-BP1) enhances angiogenesis, granulation tissue formation, and both fibroblast and keratinocyte migration in their murine wound healing model. A significantly accelerated wound closure rate was observed in transgenic mice that expressed BP1, while FGF2-null mice showed delayed wound closure, suggesting that BP1 plays an important role in FGF activity. By day 6, BP1-expressing wounds were nearly closed while the equivalent control did not demonstrate epithelial closure. Although TGF-β is challenging to study due to its pleiotropic effects, in vitro and in vivo studies provide overwhelming evidence of its necessity for re-epithelialization, inflammation, angiogenesis, and granulation tissue formation during the wound healing process. Despite the successful pre-clinical outcomes, only a handful of growth factor-based therapies have become FDA-approved and are used clinically for chronic wound treatment. In fact, becaplermin, a recombinant human PDGF approved for diabetic neuropathic ulcers, is the only growth factor that is FDA-approved and in use to date.
In contrast to a placebo gel, the application of becaplermin gel at 100 µg/g demonstrated a notable improvement in wound closure outcomes (n = 382 patients). Specifically, it increased the rate of complete wound closure by 43% and reduced the time required to achieve complete wound closure by 32%. Adverse events reported during treatment or within a 3-month follow-up period were comparable in terms of type and frequency across all treatment groups. However, there are therapies that are commercially available outside of the United States. These include Fiblast Spray [recombinant human basic fibroblast growth factor (bFGF), available in Japan], Heberprot and Easyef (recombinant human EGF formulations available largely in East Asia), and Regen-D150 (recombinant human EGF gel available in India). It is postulated that the short half-life of growth factors, particularly in the inflammatory microenvironment of chronic wounds, our inability to regulate growth factors temporally, and the relatively limited physiological scope of single growth factor therapies have hampered their clinical use. Research is underway to develop biomaterials that can avoid these barriers and deliver growth factors to chronic wounds more efficiently. As with other cellular therapies, the direct transplantation of live fibroblasts allows for the regulated release of growth factors and cytokines based on the cells' ability to sense and respond to the microenvironment. Fibroblasts are often embedded in scaffolds that include growth factors, keratinocytes, or other cellular components that support their survival and ability to improve wound healing. Importantly, some studies have found that both autologous and allogeneic fibroblast-based sheets, whether dry, frozen, or non-frozen, had comparable effects on improving wound healing outcomes [ – ].
This is clinically impactful for a few reasons: allogeneic fibroblast sheets are quality-tested and normalized, present considerable time and cost savings, and offer an overall expedited treatment process compared to autologous sheets. Clinically, 8 fibroblast-based cell therapies are in use for wound management to date. Notable examples include Dermagraft, Apligraf, and TheraSkin [ – , ]. A prospective, multicenter RCT found complete wound healing in 34% (n = 64) of Dermagraft-treated patients compared to 31% (n = 56) of SOC-treated patients. The study concluded that Dermagraft is comparable but not necessarily superior in outcomes relative to SOC therapies. On the other hand, another international, multicenter RCT found that type 1 and 2 diabetes patients undergoing Apligraf treatment showed a significantly higher wound closure rate after 12 weeks in the treatment group (n = 33, 51.5% response rate) relative to SOC (n = 39, 26.3% response rate). Finally, a multicenter, retrospective, propensity-matched cohort study analyzed the efficacy of TheraSkin (n = 1997) compared to SOC (n = 1997) and found that TheraSkin treatment led to a statistically significant increase in healing rate at 90 d, between 90 – 179 d, and beyond 180 d. These data indicate that combined cellular therapies show great promise for the treatment of chronic, non-healing wounds.

Platelets are small, anucleated cell fragments that originate from megakaryocytes in the bone marrow and are essential for the wound healing process. Besides being directly responsible for hemostasis, the first phase of wound healing, via plug formation, they also orchestrate the start of the inflammatory phase by serving as adhesion points for recruited neutrophils and monocytes and by secreting chemokine (C-X-C motif) ligand 4, IL-1β, and other inflammatory mediators [ – ].
Additionally, and perhaps most importantly, platelets serve as an enormous source of growth factors, notably PDGF, VEGF, FGF, and EGF, which aid in the progression of normal wound healing. Platelet-based therapies, collectively named platelet concentrates, are based on the knowledge that wounds are depleted of growth factors, that these growth factors can be sourced from platelets, and that platelets contain a variety of growth factors that clinically outperform the use of any given factor by itself. Platelet concentrates are broadly divided into 4 distinct groups: platelet-rich plasma (PRP), platelet-rich fibrin (PRF), platelet lysate (PL), and platelet extracellular vesicles (PEVs). PRP is a first-generation plasma concentrate derived autologously from a patient's blood and is the most widely known platelet concentrate. It consists of a plasma fraction with a higher concentration of platelets than whole blood. Its ease of generation, abundance of growth factors, and low immunogenicity make it a popular therapeutic choice in regenerative medicine, particularly in wound healing. Farghali et al. tested subcutaneous PRP infiltration to assess its effect on full-thickness cutaneous wounds. They found that PRP-treated wounds in dogs had higher rates of re-epithelization, increased contractility, more collagen deposition, and accelerated granulation tissue maturation, while also displaying reduced scarring. Clinically, because it is derived from a patient's blood, PRP is not considered a drug and is therefore not subject to FDA approval. Rather, the use and application of PRP is at the discretion of a patient's healthcare provider. Its effectiveness in treating chronic wounds, however, is still debated. Qu et al. performed a meta-analysis of RCTs and found evidence that autologous PRP may improve healing in diabetic non-healing wounds but insufficient evidence for venous or pressure ulcers.
PRF is a second-generation plasma concentrate that builds and improves upon the success of PRP. The generation procedure is similar to that of PRP; however, it avoids blood collection tubes coated with anticoagulants and, instead, allows for the formation of a fibrin mesh in the tube in which platelets and leukocytes become embedded at elevated concentrations. The entrapment of cells is thought to prolong and help regulate the pace of growth factor release. Like PRP, PRF is not subject to FDA approval if sourced from a patient's blood. Pinto et al. designed a prospective, self-controlled study [n = 44 patients, with venous leg ulcers (n = 28), diabetic foot ulcers (n = 9), pressure ulcers (n = 5), or complex wounds (n = 2)] and observed that all except 5 of the 44 patients with these treatment-refractory chronic ulcers were able to attain wound closure when treated with PRF. Moreover, the 5 remaining patients in whom PRF treatment did not completely heal the wound had ulcers > 10 cm² and quit treatment before wound closure. However, their wound recovery was similar to that of other large wounds that successfully healed, suggesting full recovery was attainable had therapy not been discontinued. It should be noted that because the study is self-controlled, the strength of its evidence is limited. Nonetheless, the group concluded that PRF is a safe, effective, and cheap therapeutic option for these treatment-refractory patients. A more extensive review by Miron et al. found that out of a total of 31 clinical studies, 18 studies (58%) reported positive wound-healing outcomes linked to PRF treatment, even when compared to control groups. Moreover, 27 (87%) of the 31 clinical studies supported the utilization of PRF in the context of soft tissue regeneration and wound healing across various medical procedures.
This systematic review underscores the favorable impact of PRF on wound healing following regenerative therapy for managing diverse soft tissue issues, including chronic non-healing wounds, encountered in both medical and dental practices. Finally, PL and PEVs are both relatively new and understudied platelet concentrates, relative to PRP and PRF, that have the potential to reach clinical application. PL is generated by freezing/thawing platelets, allowing them to lyse and release their intracellular contents. da Fonseca et al. extensively reviewed the use of PL in different diseases, recommending its use in clinical practice. Additionally, Barsotti et al. broadly characterized the effect of platelet concentrates across several cell types in the wound bed by adding variable PL concentrations. They found that fibroblast migration, keratinocyte epithelization, human umbilical vein endothelial cell viability, proliferation, angiogenic capacity, and monocyte chemotaxis were elevated when exposed to PL. However, it should be noted that Bonferoni et al. compared the effectiveness of PRF to PL, finding that PRF yields better wound healing outcomes. The skew towards PRP/PRF in the literature suggests that these therapies may be more clinically applicable than PL. PEVs are collected by stimulating platelets and collecting the extracellular vesicles they secrete. Interestingly, PEVs are quite heterogeneous, and their composition reflects the stimuli applied to the platelets. Guo et al. indicated that exosomes derived from PRP were able to induce migration and proliferation of endothelial cells and fibroblasts to improve wound healing both in vitro and in vivo in a full-thickness wound model in diabetic rats (n = 36). Studies regarding their clinical use, however, are still early-stage and sparse. In fact, the first and only human clinical trial to investigate the safety of allogeneic PEVs in non-healing wounds was recently performed by Johnson et al.
Because the group focused on safety, no difference in wound healing capacity was found across experimental and control groups. Rather, they suggest future studies investigate clinical concentrations that are effective for wound regeneration.

Antibodies

Monoclonal antibodies have quickly become a leading drug class, with indications across cancer, autoimmune disease, transplant rejection, and infectious diseases (Fig. ). According to The Antibody Society, however, there are currently no therapeutic antibodies approved or under review by the FDA for chronic wounds. Acute and chronic wounds contain diverse microbiological populations which may limit wound recovery. We briefly comment below on a prominent use of antibodies: the targeting of bacteria within wound beds. Antibodies against a host of bacterial and fungal elements that exhibit anti-biofilm activity in vitro and in vivo have been reviewed by Watson et al., although the authors note that no licensed anti-biofilm antibodies are presently available. They explain that anti-biofilm antibody therapies are enhanced by both targeting multiple biofilm components and pairing their use with antibacterial drugs. Combining these two ideas, antibodies conjugated to nanomaterials or antibiotics are being extensively studied as a potential therapy for biofilm disruption and elimination. Below, we explore antibodies that target pathogenic toxins as well as antibodies conjugated to nanomaterials. Antibodies conjugated to drugs are explored in further detail in the Biofilm section. Antibodies against Staphylococcus aureus (S. aureus) virulence factors have been shown to have wound healing applications [ – ]. By exploiting the heat generated by the interaction between an alternating magnetic field (AMF) and magnetic nanoparticles conjugated with an antibody against S. aureus protein A, Kim et al.
reported an 80% in vitro reduction in colony-forming units with AMF treatment compared to 50% with an alternative treatment control. The reduction of bacterial burden consequently enhanced wound healing rates and outcomes in AMF-treated mice compared to controls. In a similar vein, Chen et al. used magneto-ovoid strain (MO-1) magnetotactic bacteria and coated them with anti-MO-1 polyclonal antibodies. They successfully targeted the magnetotactic bacteria to S. aureus through interactions between S. aureus protein A and the antibodies' Fc fragments. AMF exposure produced magnetic hyperthermia conditions in vitro, leading to a 50% increase in the killing efficiency of S. aureus in suspension compared to approximately 30% efficiency for AMF-treated, uncoated MO-1 cells. Moreover, significant reductions in wound lengths at 7 d post-wounding were observed in mice receiving treatment of antibody-coated MO-1 with AMF compared to groups that received no treatment, AMF only, or antibody-coated MO-1 without AMF. In a diabetic murine model of polymicrobial wounds published by Tkaczyk et al., triple monoclonal antibody treatment against multiple S. aureus virulence factors (alpha toxin, four secreted bicomponent leukotoxins, and the fibrinogen-binding cell-surface adhesin clumping factor A) resulted in full skin re-epithelization within 21 d, whereas control-IgG treatment produced no wound closure. The polymicrobial wounds also contained Pseudomonas aeruginosa (P. aeruginosa) and Streptococcus pyogenes, whose numbers were significantly decreased with the triple antibody treatment compared to control-IgG. The authors argued that, due to a decline over time in NETosis activity and pro-inflammatory mediators in diabetic mouse polymicrobial skin lesions following treatment, other pathogens were more easily targeted. Antibodies against other bacterial pathogens have been investigated as well. Barnea et al. investigated the healing of P.
aeruginosa-infected burn wounds in mice using polyclonal antibodies against the N-terminus of P. aeruginosa type b flagellin. Results of this study showed that infected mice that were systemically treated with different regimens of anti-P. aeruginosa antibodies had a reduced mortality rate of 0 – 17%, which was comparable to the mortality rate in the imipenem-treated group and significantly lower than the control mortality rate of 58 – 83%. Moreover, the IgG-treatment group displayed a significantly faster mean wound healing time (15 d vs. 23 d for the non-IgG control) with improved histopathological regeneration and none of the necrosis, ulceration, or abscess formation observed in control groups. It should be noted that there were no differences in mortality or wound healing time between anti-P. aeruginosa antibody treatment and imipenem antibiotic treatment. Animal studies of wound healing illustrate the promise of bacteria-directed antibodies and their application alongside anti-microbial treatments. We note that no ongoing clinical trials or FDA-approved antibody therapies for chronic, non-healing wounds are available to date. Rapid proteolysis and clearance of antibodies in inflammatory microenvironments, high manufacturing costs, potential side effects, and the lack of effective delivery systems are all significant challenges that these therapies must overcome if they are to be applied in the clinical setting. However, it is important to highlight promising innovations in the delivery-vehicle space, ranging from nanomaterials to physical and chemical penetration enhancers, that may improve delivery, bioavailability, and persistence in the wound [ – ]. Ultimately, we believe the scientific evidence points to the therapeutic potential of antibody-based methods for wound healing.
Checkpoint inhibitors

Since 2011, there have been 11 immune checkpoint inhibitors approved by the FDA (as of June 2023), all of which are monoclonal antibodies indicated for the treatment of malignancies. The immune checkpoints targeted by these approved checkpoint inhibitors include cytotoxic T-lymphocyte antigen-4, programmed cell death 1 (PD-1), programmed cell death-ligand 1 (PD-L1), and lymphocyte activation gene 3. There have not yet been any checkpoint inhibitors approved specifically for wound healing indications. In a study by Afroj et al., mouse and human fibrocytes were shown to express PD-L1, and subsequent antibody blockade of the PD-1/PD-L1 interaction enhanced the antigen-presenting and CD8+ T-cell-stimulatory activities of these cells. Moreover, Wang et al. have delved into the function of PD-L1 in regulating wound inflammation. They found that PD-L1 expression was detected in mouse fibroblast-like cells within wound granulation tissue. PD-L1 knockout mice used in this study were shown to have delayed healing of excisional wounds, both by gross examination and by the relative area of the wound bed that remained over the span of 10 d. Although the most notable disparity in healed area between wild-type and knockout mice occurred during days 3, 5, and 7, wild-type mice had healed on average 85% of the excision compared to 70% in PD-L1 knockout mice by the end of the treatment period. Moreover, PD-L1 knockout mice displayed higher expression levels of the inflammatory cytokines TNF-α (up to 10% more relative to wild-type) and IL-6 (up to 30% more relative to wild-type). The authors propose that PD-L1 expression by fibroblast-like cells in granulation tissue may positively regulate wound healing via PD-L1-mediated formation of an immunosuppressive microenvironment, initiation of inflammation resolution, and regulation of the M1-to-M2 macrophage transition.
Additionally, TGF-β has been found to play a role in inducing PD-L1 expression in fibroblasts and fibroblast-like cells. These findings, when considered together, indicate that inhibition of PD-L1 expressed by fibroblasts may impair wound healing function or reverse the activity of pathological fibroblast phenotypes. Macrophages and platelets are also involved in wound healing and have been reported to express immune checkpoints in the context of malignancy [ – ]. The immune checkpoint CD83 has been identified as a key mediator of the macrophage transition from M1 to M2 phenotypes. Peckert-Maier et al. generated mice with conditional knockout of CD83 expression in macrophages and inflicted full-thickness wounds. Compared to wild-type mice, mice with CD83-deficient macrophages displayed an accelerated initial inflammatory wound healing phase, which resulted in an approximately 25% increase in wound closure by day 3. Interestingly, by day 6, both wild-type and CD83 knockout mice had achieved wound closure. Despite this result, histological and expression analysis of CD83 knockout mice revealed an aberrant wound healing process with expanded epidermis, no hair follicle migration, and a deficient dermis compared to wild-type mice. Moreover, inflammatory markers such as TGF-β (twofold increase), alpha-actin 2 (38% increase), and type I collagen alpha 1 (40% increase) were elevated in CD83 knockout mice, suggesting that these wounds recovered at least in part via fibrosis. Interestingly, studies have also found that systemic and topical administration of checkpoint molecules, rather than their depletion or inhibition, can also be beneficial in wound healing. Royzman et al. reported that CD83 application accelerated healing in mouse full-thickness excisional wounds compared to control treatment.
This group also found that treatment of wounds with macrophages pre-incubated with soluble CD83 improved wound closure compared to phosphate-buffered saline and mock-incubated macrophage treatments, indicating a role for soluble CD83 in modulating macrophage wound healing activity. With respect to PD-L1, Su et al. studied the role of exosomal PD-L1 in vitro and in vivo in mice. In vitro experiments of exosomal PD-L1 administration demonstrated T-cell immunosuppression, which led to enhanced skin cell migration. Topical application of a hydrogel embedded with exosomal PD-L1 on full-thickness excisional wounds in mice resulted in complete gross re-epithelization by day 10 compared to the control, which still maintained a large scab. Moreover, histological and quantitative polymerase chain reaction analyses revealed markedly reduced levels of inflammatory markers such as TNF-α, IL-6, and granzyme B, as well as diminished immune infiltration, in PD-L1 treatment groups relative to control. Following these findings, Kuai et al. studied external PD-L1 application in a murine model of diabetic ulcers. The authors reported that diabetic ulcers underwent PD-L1 downregulation relative to normal wounds and proposed that endogenous PD-L1 deficiency may be a potential contributor to impaired healing in this context. Administration of topical PD-L1 on ulcers improved healing and re-epithelization rates, even outperforming recombinant bovine basic FGF treatment. The benefits of checkpoint immunotherapy, however, are often accompanied by unacceptable immune-related adverse effects (irAEs). irAEs are common and can manifest in numerous ways, including gastrointestinal, rheumatological, hepatic, neurological, endocrine, adrenal, and cutaneous, amongst many others. Unfortunately, cutaneous irAEs in particular are the most common, which may have implications for wound healing.
For example, immune checkpoint inhibitor therapy has been reported to be potentially associated with wound complications in head and neck cancer patients. Notably, the pre-clinical studies listed above did not mention adverse events related to their treatment. We ultimately believe that the pre-clinical evidence warrants further research to evaluate its applicability in human wounds. This should be accompanied by innovation in topical and local delivery systems to avoid systemic irAEs.

Small molecule inhibitors (SMIs)

SMIs are defined as organic compounds with molecular weights < 500 Da that affect a given biological process. Their small size and oral administration allow them to achieve low treatment attrition, attain high bioavailability, and penetrate cell membranes to target specific macromolecules and alter their function. Despite the widespread use of SMIs in cancer, research regarding their applicability in the treatment of non-healing wounds is sparse. Further, there are no FDA-approved SMIs in use for chronic wounds to date. The few emerging SMI therapies in the literature that have focused on wound healing target two prominent pathways of the wound healing response: the Wnt and hypoxia-inducible factor-1α (HIF-1α) pathways.

Wnt pathway

Saraswati et al. have assessed the potential of pyrvinium, a Wnt inhibitor, as a wound repair therapeutic. In their study, a polyvinyl alcohol (PVA) sponge implanted subcutaneously in mice was used as a granulation tissue deposition model. They found that pyrvinium-treated PVA sponges had a general improvement in cellular proliferation (a 150% increase in Ki-67+ expression over control), vascularity (a 120% increase in PECAM-1+ expression over control), and overall granulation tissue architecture. Moreover, in a follow-up study, they demonstrated pyrvinium's potential to enhance MSC proliferation, engraftment, and stemness in an in vivo murine model.
These results combined suggest that pyrvinium may be able to serve as an ancillary wound healing therapy.

HIF-1α pathway

Groups that target HIF-1α with SMIs have found some evidence of improved tissue repair. Considering that HIF-1α is degraded in the presence of oxygen and iron, Thangarajah et al. found that the application of deferoxamine (DFO), an FDA-approved iron chelator, to a humanized diabetic murine wound improves perfusion and wound healing outcomes by inhibiting HIF-1α degradation and, consequently, maintaining elevated levels of VEGF. In fact, by day 7, mice treated with DFO had 13-fold and 2.7-fold increases in CD31+ cell and VEGF expression, respectively, compared to non-treated mice. These outcomes were further validated by Li et al., who identified the cyclometalated iridium(III) metal complex 1a as a disruptor of the von Hippel-Lindau-HIF-1α protein interaction, helping HIF-1α accumulate in cellulo. Moreover, they showed that excisional wounds treated with the iridium(III) complex SMI healed at a significantly accelerated rate in both wild-type mice and three distinct diabetic murine models. In iridium-treated wild-type mice, wound closure was nearly complete by day 8, compared to 75% wound closure in non-treated mice. Across the three diabetic murine models, iridium treatment increased wound closure by up to 25% after 4 d. These changes were accompanied by increased skin thickness, collagen deposition, and, importantly, neovascularization. Finally, Parker et al. provide a narrative review that further explores the successes of DFO in pre-clinical models of wound healing and how those results may promote human trials to evaluate its effectiveness in clinical practice.
According to The Antibody Society, however, there are currently no therapeutic antibodies approved or under review by the FDA for chronic wounds . Acute and chronic wounds contain diverse microbiological populations, which may limit wound recovery . We briefly comment below on a prominent use of antibodies: the targeting of bacteria within wound beds. Antibodies directed against a host of bacterial and fungal elements that exhibit anti-biofilm activity in vitro and in vivo have been reviewed by Watson et al. , although the authors note that no licensed anti-biofilm antibodies are presently available. They explain that anti-biofilm antibody therapies are enhanced both by targeting multiple biofilm components and by pairing their use with antibacterial drugs. Combining these two ideas, antibodies conjugated to nanomaterials or antibiotics are being extensively studied as a potential therapy for biofilm disruption and elimination . Below, we explore antibodies that target pathogenic toxins as well as those conjugated to nanomaterials. Antibodies conjugated to drugs are explored in further detail in the Biofilm section . Antibodies against Staphylococcus aureus ( S. aureus ) virulence factors have been shown to have wound healing applications [ – ]. By exploiting the heat generated by the interaction between an alternating magnetic field (AMF) and magnetic nanoparticles conjugated with an antibody against S. aureus protein A, Kim et al. reported an in vitro inactivation of 80% of colony-forming units with AMF treatment compared to 50% with an alternative treatment control. The reduction of bacterial burden consequently enhanced wound healing rates and outcomes in AMF-treated mice compared to controls. In a similar vein, Chen et al. used magneto-ovoid strain (MO-1) magnetotactic bacteria and coated them with anti-MO-1 polyclonal antibodies. They successfully targeted the magnetotactic bacteria to S. aureus through interactions between S.
aureus protein A and the antibodies’ Fc fragments. AMF exposure produced magnetic hyperthermia conditions in vitro, leading to an S. aureus killing efficiency of 50% in suspension compared to approximately 30% for AMF-treated, uncoated MO-1 cells . Moreover, significant reductions in wound lengths at 7 d post-wounding were observed in mice receiving antibody-coated MO-1 with AMF compared to groups that received no treatment, AMF only, or antibody-coated MO-1 without AMF . In a diabetic murine model of polymicrobial wounds published by Tkaczyk et al. , triple monoclonal antibody treatment against multiple S. aureus virulence factors (alpha toxin, four secreted bicomponent leukotoxins, and the fibrinogen-binding cell-surface adhesin clumping factor A) resulted in full skin re-epithelization within 21 d, compared to control-IgG, which did not see any wound closure. The polymicrobial wounds also contained Pseudomonas aeruginosa ( P. aeruginosa ) and Streptococcus pyogenes, whose numbers were significantly decreased with the triple antibody treatment compared to control-IgG . The authors argued that, due to an over-time decline in NETosis activity and pro-inflammatory mediators in the polymicrobial skin lesions of diabetic mice following treatment, other pathogens were more easily targeted . Antibodies against other bacterial pathogens have been investigated as well. Barnea et al. investigated the healing of P. aeruginosa -infected burn wounds in mice using polyclonal antibodies against the N-terminus of P. aeruginosa type b flagellin. Results of this study showed that infected mice systemically treated with different regimens of anti- P. aeruginosa antibodies had a reduced mortality rate of 0–17%, which was comparable to the mortality rate in the imipenem-treated group and significantly lower than the control mortality rate of 58–83%.
Moreover, the IgG-treatment group displayed a significantly faster mean wound healing time (15 d vs. 23 d for non-IgG control) with improved histopathological regeneration and no apparent necrosis, ulceration, or abscess formation as was observed in control groups. It should be noted that there were no differences in mortality and wound healing time between anti- P. aeruginosa antibody treatment and imipenem antibiotic treatment . Animal studies of wound healing illustrate the promise of bacteria-directed antibodies and their application alongside anti-microbial treatments. We note that no ongoing clinical trials or FDA-approved antibody therapies for chronic, non-healing wounds are available to date. Rapid proteolysis and clearance of antibodies in inflammatory microenvironments, high costs of manufacturing, potential side effects, and lack of effective delivery systems are all significant challenges that these therapies must overcome if they are to be applied in the clinical setting. However, it is important to highlight those promising innovations in the vehicle space, ranging from nanomaterials to physical and chemical penetration enhancers, that may improve delivery, bioavailability, and persistence in the wound [ – ]. Ultimately, we believe scientific evidence points to the therapeutic potential of antibody-based methods for wound healing. Since 2011, there have been 11 immune checkpoint inhibitors approved by the FDA (as of June 2023), all of which are monoclonal antibodies indicated for the treatment of malignancies . The immune checkpoints targeted by these approved checkpoint inhibitors include cytotoxic T-lymphocyte antigen-4, programmed cell death 1 (PD-1), programmed cell death-ligand 1 (PD-L1), and lymphocyte activation gene 3 . There have not yet been any checkpoint inhibitors approved specifically for wound healing indications. In a study by Afroj et al . 
, mouse and human fibrocytes were shown to express PD-L1, and subsequent antibody blockade of the PD-1/PD-L1 interaction enhanced the antigen-presenting and CD8 + T-cell-stimulatory activities of these cells. Moreover, Wang et al. have delved into the function of PD-L1 in regulating wound inflammation. They found that PD-L1 expression was detected in mouse fibroblast-like cells within wound granulation tissue. PD-L1 knockout mice used in this study were shown to have delayed healing of excisional wounds, both by gross examination and the relative area of the wound bed that remained over the span of 10 d. Although the most notable disparity in the healed area between wild-type and intervention occurred during days 3, 5 and 7, wild-type mice had healed on average 85% of the excision compared to 70% in PD-L1 knockout mice by the end of the treatment period. Moreover, PD-L1 knockout mice displayed higher expression levels of inflammatory cytokines TNF-α (up to 10% more relative to wild-type) and IL-6 (up to 30% more relative to wild-type). The authors propose that PD-L1 expression of fibroblast-like cells in granulation tissue may positively regulate wound healing via PD-L1-mediated formation of an immunosuppressive microenvironment, initiation of inflammation resolution, and regulation of M1 to M2 macrophage transition . Additionally, TGF-β has been found to play roles in inducing PD-L1 expression of fibroblast and fibroblast-like cells . These findings, when considered together, indicate that inhibition of PD-L1 expressed by fibroblasts may impair wound healing function or reverse the activity of pathological fibroblast phenotypes. Macrophages and platelets are also involved in wound healing and have been reported to express immune checkpoints in the context of malignancy [ – ]. Immune checkpoint CD83 has been identified as a key mediator of macrophage transition from M1 to M2 phenotypes . Peckert-Maier et al. 
generated mice with conditional knockout of CD83 expression in macrophages and inflicted full thickness wounds. Compared to wild-type mice, mice with CD83-deficient macrophages displayed an accelerated initial inflammatory wound healing phase, which resulted in an approximately 25% increase in wound closure by day 3. Interestingly, by day 6, both wild-type and CD83 knockout mice had achieved wound closure. Despite this result, histological and expression analysis of CD83 knockout mice revealed an aberrant wound healing process with an expanded epidermis, no hair follicle migration, and a deficient dermis compared to wild-type mice. Moreover, inflammatory markers such as TGF-β (twofold increase), alpha-actin 2 (38% increase), and type I collagen alpha 1 (40% increase) were elevated in CD83 knockout mice, suggesting that these wounds recovered at least in part via fibrosis . Interestingly, studies have found that systemic and topical administration of checkpoint molecules, rather than their depletion or inhibition, can also be beneficial in wound healing. Royzman et al. reported that CD83 application accelerated healing in mouse full thickness excisional wounds compared to control treatment. This group also found that treatment of wounds with macrophages pre-incubated with soluble CD83 improved wound closure compared to phosphate-buffered saline and mock-incubated macrophage treatments, indicating a role for soluble CD83 in modulating macrophage wound healing activity . With respect to PD-L1, Su et al. studied the role of exosomal PD-L1 in vitro and in vivo in mice. In vitro experiments of exosomal PD-L1 administration demonstrated T-cell immunosuppression, which led to enhanced skin cell migration. Topical application of a hydrogel embedded with exosomal PD-L1 on full thickness excisional wounds in mice resulted in complete gross re-epithelization by day 10 compared to control, which still maintained a large scab.
Moreover, histological and quantitative polymerase chain reaction analysis revealed markedly reduced levels of inflammatory markers such as TNF-α, IL-6, and granzyme B, as well as diminished immune infiltration, in PD-L1 treatment groups relative to control . Following these findings, Kuai et al. studied external PD-L1 application in a murine model of diabetic ulcers. The authors reported that diabetic ulcers underwent PD-L1 downregulation relative to normal wounds and proposed that endogenous PD-L1 deficiency may be a potential contributor to impaired healing in this context. Administration of topical PD-L1 on ulcers improved healing and re-epithelization rates, even outperforming recombinant bovine basic FGF treatment . The benefits of checkpoint immunotherapy, however, are often accompanied by unacceptable immune-related adverse effects (irAEs). irAEs are common and can manifest in numerous ways, including gastrointestinal, rheumatological, hepatic, neurological, endocrine, adrenal, and cutaneous presentations, amongst many others . Unfortunately, cutaneous irAEs in particular are the most common, which may have implications for wound healing . For example, immune checkpoint inhibitor therapy has been reported to be potentially associated with wound complications in head and neck cancer patients . Notably, the pre-clinical studies listed above did not mention adverse events related to their treatment. We ultimately believe that the pre-clinical evidence warrants further research to evaluate its applicability in human wounds. This should be accompanied by innovation in topical and local delivery systems to avoid systemic irAEs. Small molecule inhibitors (SMIs) SMIs are defined as organic compounds with molecular weights < 500 Da that affect a given biological process. Their small size and oral administration allow them to achieve low treatment attrition, attain high bioavailability, and penetrate cell membranes, targeting specific macromolecules to alter their function .
Despite the widespread use of SMIs in cancer, research regarding their applicability in the treatment of non-healing wounds is sparse. Further, there are no FDA-approved SMIs in use for chronic wounds to date. The few emerging SMI therapies in the literature that have focused on wound healing target the two prominent pathways of the wound healing response: the Wnt and hypoxia inducible factor-1α (HIF-1α) pathways . Wnt pathway Saraswati et al. have assessed the potential of pyrvinium, a Wnt inhibitor, as a wound repair therapeutic. In their study, a polyvinyl alcohol (PVA) sponge implanted subcutaneously in mice was used as a granulation tissue deposition model. They found that pyrvinium-treated PVA sponges had a general improvement in cellular proliferation (a 150% increase in Ki-67 + expression over control), vascularity (a 120% increase in PECAM-1 + expression over control), and overall granulation tissue architecture. Moreover, in a follow-up study, they demonstrated pyrvinium’s potential to enhance MSC proliferation, engraftment, and stemness in an in vivo murine model . These results combined suggest that pyrvinium may be able to serve as an ancillary wound healing therapy. HIF-1α pathway Groups that target HIF-1α with SMIs have found some evidence of improved tissue repair. Because HIF-1α is degraded in the presence of oxygen and iron, Thangarajah et al. found that the application of deferoxamine (DFO), an FDA-approved iron chelator, to a humanized, diabetic, murine wound improves perfusion and wound healing outcomes by inhibiting HIF-1α degradation and, consequently, maintaining elevated levels of VEGF. In fact, by day 7, mice treated with DFO had 13-fold and 2.7-fold increases in CD31 + cell and VEGF expression compared to non-treated mice, respectively. These outcomes were further validated by Li et al.
, who identified the cyclometalated iridium (III) metal complex 1a as a disruptor of the von Hippel-Lindau-HIF-1α protein interaction, helping accumulate HIF-1α in cellulo. Moreover, they showed that excisional wounds in both wild-type mice and three distinct diabetic murine models treated with the iridium (III) complex SMI had a significantly accelerated rate of wound healing. In iridium-treated wild-type mice, wound closure was nearly complete by day 8, compared to 75% wound closure in non-treated mice. Across the three diabetic murine models, iridium treatment increased wound closure by up to 25% after 4 d. These changes were accompanied by increased skin thickness, collagen deposition, and, importantly, neovascularization. Finally, Parker et al. provide a narrative review that further explores the successes of DFO in pre-clinical models of wound healing and how those results may promote human trials to evaluate its effectiveness in clinical practice. Biofilms are complex assemblies of microorganisms embedded in a self-produced ECM that significantly impact wound healing. Primarily composed of bacteria, biofilms create a protective environment for microbial communities, allowing them to resist host immune responses and a variety of antimicrobial treatments . Moreover, while biofilms are present in almost 60% of chronic wounds, they are only present in about 10% of acute wounds . Common biofilm-forming bacteria include S. aureus , P. aeruginosa , and various subspecies of Streptococcus and Enterococcus .
The presence of biofilms in wounds can lead to prolonged inflammation, delayed healing, and increased risk of systemic infection (Fig. ) . While there are indeed articles that elucidate the complex interaction between wound healing and biofilm formation, it is surprising that experimental and in vivo wound models investigating the effects of cellular therapies on wound healing often do not consider biofilm formation and presence. Therefore, future research is warranted to decipher the bio-pathological mechanisms of biofilm formation in wound beds [ – ]. This lack of consideration may in part explain why promising pre-clinical studies often do not successfully translate into meaningful clinical data. Tvilum et al. recently developed an anti- S. aureus antibody conjugated with mitomycin C, which displayed significant bactericidal properties against S. aureus suspensions in vitro. They also assessed its effectiveness against biofilm formations, reporting a 100-fold decrease in colony-forming unit enumeration relative to control and stand-alone vancomycin treatment. However, it did not achieve a significant difference relative to mitomycin C alone, although the group posits that the maximum tolerated dose of the antibody-conjugate was not evaluated. Finally, they reported that the antibody-conjugate formulation of mitomycin C is less toxic in vitro compared to unconjugated mitomycin C. In another study, led by Le et al. , S. aureus biofilms were targeted using an anti- S. aureus antibody conjugated to nanoparticle-encapsulated rifampicin. In comparison to phosphate-buffered saline (control)-treated S. aureus biofilms, the antibody-nanoparticle conjugate was able to reduce non-biofilm bacterial counts by over 3 orders of magnitude in vitro . These findings persisted when applied to a biofilm model, where there was a nearly 140% increase in the relative percentage of live/dead cells compared to control and free-standing rifampicin.
Finally, the antibody-nanoparticle conjugate treatment was evaluated in vivo using a prosthesis-associated S. aureus biofilm infection murine model and a bioluminescent bacterial strain to quantify bacterial death. Starting at a baseline bioluminescent intensity of 100%, the antibody-nanoparticle conjugate treatment was able to reduce bioluminescence to a mere 5%, compared to saline (which saw an increase in bioluminescence to 150%) and free rifampicin (which saw a reduction in bioluminescence to 25%) . Furthermore, Xin et al. have examined the wound healing activity of an ultrasound-activated, antibody-conjugated, perfluoropentane- and meropenem-loaded nanoparticle treatment against P. aeruginosa biofilm. The group used 3D confocal laser scanning microscopy in combination with live/dead staining to assess the bactericidal effect of treatment, reporting a significant amount of bacterial death in the sonication/nanoparticle group compared to the control, which maintained high levels of bacteria embedded in biofilm. The treatment was also effective at disrupting biofilm formation in an in vivo murine model. In P. aeruginosa -infected full-thickness wounds, the combined sonication/nanoparticle treatment significantly restricted the bacterial survival rate to 20% at day 3, compared to nearly 100% with control treatment. Moreover, qualitative histological analysis demonstrated complete granulation, hair follicles, and organized connective tissue in treated mice, compared to varying levels of inflammation and persistent immune infiltration in the other treatment groups, including control. Importantly, they concluded their study by assaying for biocompatibility and safety and found no significant red blood cell lysis or inflammation of the lung, spleen, heart, kidney, or liver. These results suggest that this treatment may be highly translatable to clinical models. Limitations, however, do exist with antibody-based therapies against biofilms.
Chief amongst them are the polymicrobial composition of biofilms, which are often composed of several distinct species; the fact that current studies seem to achieve only similar effectiveness to existing SOCs (although several groups state that the antibody-conjugate titers have yet to be perfected); and the financial burden of antibody synthesis and storage. However, Rembe et al. have tested an in vitro biofilm model that uses human plasma to cultivate biofilms, providing a reliable wound model for the laboratory setting. They demonstrated a significantly lower effect of hypochlorous wound irrigation solutions when confronted with biofilm compared to a non-challenged planktonic approach and suggested that this model may be suitable for various applications in biofilm studies. With all this taken into consideration, we believe antibody-based strategies against biofilms are overall supported by literature evidence, and continued development will likely yield clinically relevant therapies. Traditional wound treatments typically involve the combination of surgical debridement with the application of antiseptics, antibiotics, and dressings to prevent infection and facilitate healing . While effective for managing most wounds, these treatments may have limited efficacy in promoting the healing of chronic and complex wounds. Consequently, we look to innovation in wound therapies to enhance effectiveness in chronic wounds. Cellular therapeutics harness the regenerative potential of living cells, such as stem cells or PRP, to stimulate tissue repair and regeneration. Immunotherapies, on the other hand, modulate the immune response at the wound site using growth factors or cytokines to promote healing and reduce inflammation. Cellular and immune therapeutics demonstrate great potential in overcoming persistent challenges in chronic wound healing disorders. As for currently available therapies, only a few are FDA-approved, and all of them are cell-based.
Results from clinical trials regarding stem cell- and immune cell-based wound dressings have shown promise, but results in this area are often not challenged with the presence or consideration of biofilms. To that extent, pre-clinical results of antibiofilm research are also promising and justify further elucidation. We propose that a multi-pronged approach, in which stem cell delivery and induction provides a matrix for wound remodeling, secretes growth factors, and offers support for other cell types (such as fibroblast proliferation or T-reg recruitment to the wound bed) while directly targeting biofilms, could be a future avenue of treatment to improve patients’ outcomes and quality of life. Recent research beyond the scope of biofilms has suggested leveraging inflammation drivers to re-convert the chronic wound to an acute wound microenvironment with elevated pro-inflammatory cytokine and chemokine levels. For instance, Zhu et al. used pro-inflammatory IFN-γ and TNF-α to enrich and amplify the secretome of MSCs. The resulting supernatant showed promising potential in promoting angiogenesis, constricting collagen deposition, upregulating VEGFC, and accelerating wound closure in a murine cutaneous excision model. Additionally, pro-inflammatory cytokines, including IL-1, IL-17, and TNF-α, have been demonstrated to promote hair follicle neogenesis and epithelialization in chronic wound healing, with IL-1 specifically driving the proliferation and mobilization of stem cells . Overall, these findings highlight the potential positive effects of pro-inflammatory agents on wound healing. Based on these insights, balancing pro-inflammatory vs. anti-inflammatory cytokines and chemokines may represent a promising pathway warranting future research. In summary, our review sheds light on recent trends and findings regarding cellular therapeutics and immunotherapies in wound healing. These lines of research may unlock untapped potential and guide further research work.
With this in mind, the future of wound care may involve integrating cellular therapeutics and immunotherapies with established treatments to harness synergistic effects and optimize outcomes. Advancements in tissue engineering, regenerative medicine, and immunomodulatory techniques hold promise for developing novel therapies that address the underlying mechanisms of wound healing more effectively. To that effect, we have summarized the mechanisms and outcomes of all aforementioned therapies in Table . Moreover, although the intended purpose of this paper was to provide a concise but encompassing review of established and developing therapies, we look to other groups to provide more in-depth analyses of each therapeutic class. Domaszewska-Szostek et al. provide a comprehensive review of the application and limitations of several different cell-based therapies, including keratinocytes, fibroblasts, BMSCs, and ADSCs, in clinical trials, concluding that although cell-based therapies may in time serve as an alternative to surgical-based treatment, more definitive clinical outcomes are still required. Additionally, Berry-Kilgour et al. provide an extensive review of the use of growth factors and cytokines as immunotherapies for the treatment of chronic wounds. Despite the underwhelming results that cytokine and growth factor clinical trials have had so far, the authors note that as our collective understanding of the physiology underlying chronic wounds improves, tailored growth factor/cytokine therapies, in combination with improved biomaterials and delivery systems, will likely yield improved results. In conclusion, while traditional wound treatments remain essential for managing acute wounds and preventing infections, cellular therapeutics and immunotherapies offer exciting opportunities to revolutionize wound care in complex cases.
By promoting faster healing, reducing complications, and improving outcomes, these innovative approaches have the potential to significantly impact the field of wound management and improve patient outcomes.
Dental anxiety and dental care - a comparison between Albania and Germany
Despite constant medical advances, dental anxiety remains a widespread condition in the general population, although it is reversible through desensitization. Nearly 80% of adults in industrialized countries experience discomfort before dental treatment, with 20% expressing a genuine fear of dental procedures, and 5% actively avoiding dental care altogether . The prevalence of dental anxiety is evident across all age groups, with even young children exhibiting avoidance behaviour towards dental treatment, often influenced by parental attitudes . An extreme manifestation of dental anxiety is termed dental phobia, which can be classified using the International Classification of Diseases (ICD). According to ICD-10 Chapter V, F40.0, a phobia falls under the category of anxiety disorders. It is characterized as an irrational fear of a specific, generally non-threatening situation that is either completely avoided or endured with significant distress . This classification is distinct from phobias related to specific stimuli encountered during dental treatment, such as injections . One way to differentiate between the anxiety and phobia stages is by assessing their impact on the individual’s daily routine and life. If it disrupts social life, occupation, and normal functioning, it may be considered a specific (dental) phobia . Dental anxiety proves stressful for both the patient and the dentist, leading to reduced cooperation, prolonged treatment times, and an uncomfortable treatment environment . This can lead to inaccurate diagnosis and inappropriate treatment, including errors in the evaluation of tooth vitality . Patients who entirely avoid dental care often suffer from poor dental and periodontal health . Such individuals typically seek dental attention only when the pain becomes unbearable, requiring complex interventions like root canal therapy or extractions. This perpetuates a negative cycle that undermines the development of a healthy dentist-patient relationship .
Factors such as the dental clinic environment, stress experienced during procedures, cognitive capacities of the individual, and cultural practices are known to influence dental fear and anxiety (DFA) . DFA poses daily challenges for dentists treating both children and adults. In pediatric dentistry, with a prevalence of 9%, DFA significantly complicates patient management . In adults, dental fear and anxiety often reflect past negative dental experiences from childhood or adolescence. Dental fear is an acute, distressing response to perceived threats . Studies from various countries have reported dental fear and anxiety prevalence rates of 12.5% in Canada , 12.6% in Russia , 13.5% in France , 16.1% in Australia , and 30% in China . Research in Saudi Arabia indicates that DFA rates among adults range from 27% to 51% ; among children, the rates range from 43.1% to 47.6% . DFA can hinder the use of dental services, impacting early disease detection and management. Among children in Eastern Europe, significant levels of anxiety were reported, with varying rates across countries. A total of 12.5% of children from Croatia, 26.67% from Macedonia, 10.94% from Bosnia and Herzegovina, 20.31% from Montenegro, 23.08% from Slovenia and 16.10% from Serbia showed a high level of anxiety . An observational study in Albania involving 180 participants aged 15 to 55 found that 70% displayed high dental fear regarding orthodontic treatments and fillings, 59% towards dental implants, and 74% exhibited extreme fear of extractions . 64% of the surveyed participants reported having gingivitis, and 61% indicated they suffered from dental caries, while 53% had undergone tooth extractions. The data analysis revealed that tooth extractions and dental caries significantly affected high blood pressure (P < 0.0001) . Taheri et al.
explored the connections between dental pain perception and its relationship to pain anxiety, dental anxiety, and mental pain, finding significant correlations ( p = .001) between pain perception and dental anxiety ( r = .38), pain anxiety ( r = .45), and mental pain ( r = .25) . The approach to a dentist’s office significantly influences dental fear scores. Patients often view surgical and restorative procedures as unpleasant and intimidating, with past negative experiences potentially exacerbating fears during subsequent visits . Dental anxiety (DA) is more intense and irrational compared to general fear . This type of anxiety leads patients to avoid treatment, reflecting a shortfall in modern dentistry’s evolution toward minimal invasiveness. Although modern anesthetics can minimize pain, the fear of pain often exceeds the actual sensation of pain. Anxiety disorders are widespread, with 25% of general practitioners recognizing symptoms in their patients . Mental health is crucial, recognized by the WHO as a state in which individuals achieve their potential, handle life’s normal stresses, work productively, and contribute to their community . Recently, mental disorders and psychosocial disabilities have gained recognition as significant global development issues . The WHO estimates that one in four individuals will encounter a mental health condition in their lifetime, with around 600 million people worldwide disabled due to mental health issues . The public health significance of mental illnesses underscores their multifaceted causes, primarily rooted in social issues. In Albania, seeking and receiving psychological support often faces significant prejudice. Albanian culture, characterized by extremes, tends either to readily accept or to reject things outright.
In this regard, there is a notable lack of empathy, underscoring the need for an improved attitude among Albanians toward psychological services and mental health. As the stigma surrounding mental health decreases, more individuals are seeking professional help, a trend that is driving the growth of therapy and counseling services in the country . In contrast to the German system, dental treatment in Albania is guaranteed only for specific groups of citizens and only through public dental health services, which excludes all private clinics . The Ministry of Health has approved free fluoridation for all children up to the age of 18, although this service is underutilized. In Albania, health insurance covers only dental emergencies, primarily tooth extractions . A European Commission survey on the quality of life of Albanians found that 41% of Albanians consistently postpone or entirely avoid visiting the doctor in order to save money . Consequently, dental care is not easily accessible: half of those surveyed in Albania stated that they either never visit the dentist or seek dental care only when the pain becomes unbearable . A study in Kosovo of 2,556 schoolchildren found a caries prevalence of 94.4% among 7- to 14-year-olds . The healthcare system in Germany offers a variety of options for dental care. As with all medical services, statutory health insurance covers the cost of treatment only if the patient consults a dentist who is accredited to provide contracted dental care .
Dental services in Germany include: (1) an annual check-up; (2) dental care for children and adolescents from six months to 17 years old; (3) oral health services for individuals with disabilities or those in need of nursing care; (4) general dental treatments, primarily the removal of tartar, fillings, root canal treatments, oral surgery, periodontal services, and treatments for oral mucosal diseases, which are generally free of co-payments for the insured; (5) orthodontic treatment until the age of 18; and (6) contributions toward the costs of dental prostheses . In 2021, the German Dental Association (BZÄK) outlined the oral health goals of Germany's health system for 2030, based on robust epidemiological evidence . The 2030 agenda includes both disease-oriented and health-promotion goals. Key targets are achieving a caries-free rate of 90% among 3-year-olds and 12-year-olds, reducing the prevalence of severe periodontal disease to below 10% in middle-aged adults (35–44 years old), and enhancing oral health-related behaviors . Behavioral objectives aim to increase the frequency of twice-daily toothbrushing to 87.5% among children, 85.3% among adults, and 89.1% among seniors. Additionally, the agenda seeks to increase the proportion of individuals who attend regular dental check-ups annually to 86.9% for children, 75% for adults, and 94.6% for seniors . To our knowledge, this is the first Albanian scientific study in the fields of dentistry and psychosocial medicine; no prior research there had explored dental anxiety. Consequently, we initiated a study to address this gap in the literature. The objective of this study is to investigate potential differences in dental anxiety between individuals from Albania, categorized as a "third country", and Germany, classified as an "industrialized country". Additionally, the study aims to compare the dental care systems of both countries.
Special emphasis is placed on assessing the anxiety levels of dental patients during a single visit to a clinic in Germany and a clinic in Albania, with the overarching goal of identifying and comparing preventive behaviors and oral health status among these groups. The research group, consisting of dentists from both countries, collected data in Plauen, Germany, and Tirana, Albania, over the course of eight months (12.2019–07.2020). The questionnaires, which included the Dental Anxiety Scale (DAS) , the Brief Symptom Inventory-18 , and a set of descriptive questions gathering information about preventive behavior and oral health status, were handed out before treatment to a total of N = 263 patients: 133 patients from a private dental clinic in Plauen, Saxony (Germany), and 130 patients from the dentistry university clinic in Tirana (Albania). The age range of participants was 14 to 80 years. Participation was voluntary. The study population was divided into two groups, Albanian and German patients, who were selected based on their explicit admission, made at the reception, of being afraid of the dentist. They completed the questionnaires in the waiting room of the dental clinic before treatment. The questionnaires were administered by the dentists, who distributed them to the patients. All questions were designed to be easily understandable and free of medical jargon to avoid misunderstandings. The method of questioning was consistent and systematically applied to all participants; a structured and constant procedure ensured that all respondents were treated the same way, which is crucial for the validity and reliability of the results. In Germany, there were four refusals to participate in the study for various reasons, whereas in Albania there were no dropouts. The questionnaires were then examined in 2020 by our research group.
Other inclusion criteria included having sufficient knowledge of the German or Albanian language, possessing the physical and mental ability to complete the questionnaires, being oriented in terms of time and place, and displaying no psychiatric symptoms. This study did not involve a specific screening for psychological problems, and dental phobia or a high level of dental anxiety were not considered exclusion criteria. All patients provided written informed consent, and only those who did so were included as study participants. For individuals younger than 18 years, consent to participate was obtained from their parents or legal guardians. For Albanian patients, the validated German versions of the scales were used and were translated into Albanian by translators where necessary. Statistical procedure All questionnaires were analyzed using the statistical program 'Statistical Package for the Social Sciences' (SPSS). Mean total values were calculated and subsequently analyzed using an independent-samples t-test. Chi-squared tests were employed to ascertain significance between questionnaire categories and sample characteristics. The level of significance was set at p < .05. The required sample size was determined using G*Power 3.1.3 . For comparing two groups with t-tests (two independent means, two-tailed), with a significance level of α = 0.05, an effect size of Cohen's d = 0.5, and a power of 95% (1-β = 0.95), a sample size of at least N = 105 per group (total N = 210) was necessary. Dental anxiety scale The Dental Anxiety Scale (DAS) was introduced in 1969 by Corah and is widely used for assessing dental fear in patients . The total dental anxiety score is calculated by summing the scores of the four questions.
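The G*Power figure quoted above can be reproduced, to within rounding, with the standard normal-approximation formula for a two-sided, two-sample t-test. The sketch below is an approximation rather than G*Power's exact noncentral-t computation, and the function name is ours:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.95):
    """Approximate per-group sample size for a two-sided, two-sample
    t-test with effect size Cohen's d (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # 1.64 for power = .95
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.5))  # 104; G*Power's exact t-based result is 105
```

The small discrepancy (104 vs. 105) arises because G*Power works with the noncentral t-distribution, which requires slightly more participants than the normal approximation suggests.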
The scores range from 4 to 20, and the patient's level of anxiety is quantified as follows: a total score of 4 indicates "no fear", a score between 5 and 8 corresponds to "low fear", a score between 9 and 14 indicates "moderate fear", and a score between 15 and 20 corresponds to "high fear" . These bands help evaluate the level of dental anxiety experienced by the patient. The test-retest reliability of the Dental Anxiety Scale has been reported as r = 0.86 . In this study, Cronbach's alpha was calculated as 0.76 (N = 263). The questionnaire was chosen to assess dental anxiety in this study due to its brevity and scientifically proven reliability. Brief Symptom Inventory-18 The Brief Symptom Inventory-18 (BSI-18) was introduced in 2000 by Derogatis as a further condensed version of the BSI, which originally comprised 53 items from the Symptom Checklist-90-R. Developed to assess psychological distress with only 18 items , the BSI-18 has been applied in various contexts, including with cancer patients, victims of terrorist attacks, individuals with posttraumatic stress, those dealing with alcohol addiction, and other populations. The three scales (depression, anxiety, and somatization) each consist of six items and contribute to the Global Severity Index (GSI). Scores can range from 0 to 90, with each of the 18 items reflecting the respondent's experiences over the last seven days on a scale offering four choices from 'Not at all' to 'Extremely'. The reliability of the three scales was assessed in 2010 in a sample of 638 psychotherapy patients: somatization α = 0.79, depression α = 0.84, anxiety α = 0.84, and GSI α = 0.91 . In our study, the reliability of the BSI-18 scales was as follows: somatization α = 0.78, depression α = 0.72, anxiety α = 0.81, and GSI α = 0.90. Oral health In this study, patients were asked to answer questions regarding their assessment of oral health and dental care.
The questions were as follows: How many times a day do you brush your teeth? (Never, 1x/day, ≥2x/day) How often do you go to the dentist (for example, for prophylaxis)? (Never, 1x/year, ≥2x/year) How often do you have tartar removed? (Never, 1x/year, ≥2x/year) How often do you have a professional teeth cleaning appointment? (Never, 1x/year, ≥2x/year) How much do you think you can do to maintain the health of your teeth? (Nothing at all, little, some, much, very much)
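The DAS banding described in the methods maps a four-item total (range 4 to 20) onto four fear categories. A minimal sketch of that mapping, with a function name of our own choosing:

```python
def das_band(total):
    """Map a total DAS score (sum of four items, range 4-20) onto the
    fear bands used in the text: no / low / moderate / high fear."""
    if not 4 <= total <= 20:
        raise ValueError("DAS totals range from 4 to 20")
    if total == 4:
        return "no fear"
    if total <= 8:
        return "low fear"
    if total <= 14:
        return "moderate fear"
    return "high fear"

print(das_band(13))  # the sample mean of 13.10 falls in the moderate band
```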
The mean score for the patients' current subjective overall health was 2.49 (SD 1.18).
The patients' Dental Anxiety Scale (DAS) score averaged 13.10 (SD 2.74). The psychological distress of the patients, as assessed by the BSI-18, showed mean values of 3.45 (SD 3.95) for the anxiety scale, 2.10 (SD 3.00) for the depression scale, and 2.56 (SD 3.31) for the somatization scale. The Global Severity Index (GSI) had a mean of 8.11 (SD 9.13). Table presents a comparison of the patient groups interviewed in Albania and Germany concerning their psychological well-being. The t-test results indicate a significant difference between the two patient populations across all measures, with effect sizes (Cohen's d) in the medium to high range. Statistical analysis revealed that Albanian patients rated their overall health worse than German patients did. Additionally, significant differences emerged between the two groups in responses to the Dental Anxiety Scale (DAS), with Germans reporting higher levels of dental anxiety. Furthermore, German patients experienced significantly more psychological distress, as observed across the depression, somatization, and anxiety subscales. Table provides a comparison of the oral health status and preventive behavior of the two patient groups. Patients in the Albanian group reported brushing their teeth significantly less often than their German counterparts. Correspondingly, German patients also visited the dentist significantly more frequently than Albanians. In terms of tartar removal and professional teeth cleaning, there was a descriptive difference between German and Albanian participants, with Germans undergoing these treatments more frequently, although this result did not reach statistical significance. Additionally, a significant difference was observed in the two groups' perceptions of their contribution to the health and maintenance of their own teeth.
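The group comparisons above report independent-samples t-tests with Cohen's d as the effect size. A minimal sketch of the pooled-SD form of d; the helper name is ours, and equal-variance pooling is an assumption, since the text does not state which convention was used:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(a, b):
    """Pooled-SD Cohen's d for two independent samples a and b."""
    na, nb = len(a), len(b)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var)
```

By the usual rule of thumb, |d| around 0.5 is a medium effect and |d| around 0.8 or above a large one, which is the sense in which the text describes the observed effects as medium to high.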
The majority of German subjects (75.9%) believed they could contribute a lot or very much to their own oral health, whereas in the Albanian group only 40.8% thought the same, a significant difference. This is the first study to investigate the prevalence of dental anxiety and mental health problems in Albania and compare it with Germany. Owing to the sample size and the study's restriction to one city in Germany (Plauen/Saxony) and the capital of Albania (Tirana), the data, including the study results, may not be fully representative of the entire populations of Germany and Albania. In this study, the mean value for the Dental Anxiety Scale (DAS) in the patient collective was 13.10 (SD 2.74). However, when comparing the expression of dental treatment anxiety, significant differences were observed between the German and Albanian patient groups: the average DAS value was higher for German than for Albanian patients. Notably, the DAS values of the German group exceeded the German average established by Kunzelmann and Dünninger (1990) . Thus, in terms of the expression of dental treatment anxiety, the study participants fall within the German range, and the group difference was significant. The German findings align with those of other industrialized nations, such as France, where an estimated 13.5% of people suffer from moderate to severe dental anxiety , the rest of Europe , North America , and Australia (10–18%) , but are considerably lower than in countries like China, where the rate is 30% . This study emphasizes the need for preventive measures against dental anxiety. Since dental anxiety often begins in childhood, young patients should be the primary focus of prevention efforts . Early education has been shown to positively impact dental anxiety, leading to better long-term dental care .
Despite the strong correlation between dental anxiety and general state anxiety , patients frequently describe dental anxiety as an iatrogenic outcome of dental treatment . This highlights the responsibility of the dental profession and of individual practitioners. Additionally, this study could support the establishment of access centers for individuals with dental fear, particularly in Albania. Addressing dental fear requires a multidisciplinary team and is time-intensive, but training and rehabilitation are feasible in a supportive environment . In Northern Europe , specialized units with multidisciplinary skills and defined protocols provide prevention and treatment for anxious patients. Albania lacks such teams, although there are developments in behavior management and sedation techniques. Furthermore, dental anxiety is often viewed as an inevitability rather than a treatable condition, despite classifications based on DSM-IV psychiatric criteria . Consequently, there is little motivation to develop specialized services. For patients who do access the few centers addressing both dental fear and dental disease, the costs are not covered by social security, exacerbating oral health inequalities for those with dental anxiety in Albania. Beyond dental anxiety, this study revealed a generally elevated level of psychological distress as measured by the Brief Symptom Inventory-18 (BSI-18): significant differences were observed between the two groups on all subscales as well as the Global Severity Index (GSI), with Germans reporting higher levels of psychological distress. In a sample of patients with anxiety disorders most comparable to ours, the following Cronbach's alpha values were found for the BSI subscales: somatization = 0.79, depressiveness = 0.87, and anxiety = 0.81 . In our study, the corresponding values were 0.78, 0.72, and 0.81, respectively, suggesting that the BSI-18 performed nearly identically in our sample.
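The Cronbach's alpha values compared here follow the standard definition: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch of that computation; the helper and its items-by-respondents data layout are our own illustration:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a k-item scale, given one list of respondent
    scores per item (all lists the same length)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent, summed across the k items.
    totals = [sum(item[r] for item in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

With perfectly correlated items the formula returns 1.0; as the items' shared variance shrinks, alpha falls toward 0, which is why values near 0.8 are usually read as good internal consistency.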
However, the average score of both patient groups on the 'Somatization' subscale was higher than the average values reported by Spitzer for a group of psychologically healthy individuals. On the 'Anxiety' subscale, only the average score of the German patients was higher than these values, while the score on the 'Depression' subscale was lower for Albanian than for German patients . The reasons for these findings in the German population are detailed in the following sections: data from the 2015 Health Monitoring of the Robert Koch Institute (RKI) show that in Germany at that time, nearly one in four men (22.0%) and nearly one in three women (33.3%) between the ages of 18 and 79 had experienced fully developed mental disorders at some point. The most common mental disorders were anxiety disorders (15.3%) and depressive disorders (7.7%), followed by somatic disorders (3.5%) . Despite the increasing demand for psychological services, there remains a stigma toward mental illness among Albanians. As a result, individuals often seek the assistance of a psychologist only when the problem has become very serious and, after consultations with various doctors, appears uncontrollable. A comparison of the preventive care behavior of the two patient populations revealed that Albanian patients had a significantly lower preventive care score than German patients. This discrepancy is particularly evident in the frequency of tooth brushing and dentist visits, as well as in the frequency of tartar removal and professional dental cleaning. In the Kosovo study cited above, the most common reason that schoolchildren visited the dentist was a toothache; a regular recall and check-up was rarely reported. The children were usually accompanied by their parents, whose first comments regarding the dental visit were along the lines of 'my child had a terrible toothache all night' and 'we couldn't sleep at all.' Children with toothaches had bad experiences at the dentist and thus refused future visits.
Even though there were dental offices in some of the schools in that study, they were often dysfunctional and poorly equipped, and there were often no dentists specializing in pedodontics . The Kosovo study also showed that the mean DMFT (5.8) of schoolchildren in Kosovo was higher than that of schoolchildren in the following developed countries: the Netherlands (1.1), Finland (1.2), Denmark (1.3), USA (1.4), United Kingdom (1.4), Sweden (1.5), Norway (2.1), Ireland (2.1), Germany (2.6), and Croatia (2.6) (16). The mean DMFT of Kosovo's children (age 12) was similar to the mean values in Latvia (7.7), Poland (5.1), and a group of 12- to 14-year-olds in Sarajevo, Bosnia, and in Albania (7.18) . Surveys of schoolchildren and teachers revealed a lack of knowledge about oral health, making teachers ineffective as an educational resource on the subject . Nevertheless, it is essential to note that the two patient groups in the present study differed significantly from each other. Distinct differences between German and Albanian patients were identified, with 42.9% of German patients never having undergone professional teeth cleaning, compared with a higher figure of 55.4% for Albanian patients. Like other dental treatments, professional teeth cleaning can evoke anxiety in certain patients, as it involves the removal of impurities and tartar from the tooth surface; for many patients, the use of dental tools automatically triggers fear of associated pain. Another contributing factor could be that professional tooth cleaning is often considered a private service, not fully covered by statutory health insurance in Germany. The situation is even more challenging in Albania, where statutory health insurance funds do not contribute to oral health and patients are required to bear the full cost of dental services themselves.
Meanwhile, an increasing number of statutory health insurance companies in Germany have acknowledged the significance of prophylactic services and offer support through subsidies, such as bonus programs. However, additional initiatives should be implemented, particularly in the realm of education and information dissemination about the importance of prophylactic treatments and dental cleanings. This is crucial for preventing periodontal diseases and arresting their progression, given that the development and progression of caries are strongly influenced by individual behavior . Contrary to the hypothesis that individuals interviewed outside 'developed' countries might exhibit higher levels of anxiety and psychological distress due to potential avoidance of dental visits, this study did not confirm such a trend. The positive finding of the oral health-related survey, in which the majority of both patient groups expressed confidence in their ability to maintain healthy teeth, is a significant step forward. Importance of the study The study revealed that patients outside German dental practices did not exhibit increased anxiety levels; however, it underscores the continued relevance of dental anxiety in those settings. Given the potential for dental avoidance behavior to lead to severe dental issues, heightened awareness of dental anxiety among Albanian dentists is recommended. Albanian dentists are advised to familiarize themselves with their patients' oral health and to promptly identify and address dental phobias. Essential to this is comprehensive healthcare and risk assessment by both general practitioners and dentists to effectively inform and advise individuals about the risks associated with neglecting dental treatment and prophylaxis. Implications for research To conduct a more comprehensive investigation into dental treatment anxiety, additional studies should be undertaken with participants from non-European countries.
It is also advisable to include the recording of DMF-T/S values and the PSI for the involved patients. Given that dental anxiety frequently emerges in early childhood, an additional survey focusing on dental anxiety among children and adolescents could be beneficial and pertinent for future research. Limitations Individuals aged 18 and older completed all the questionnaires in this study autonomously, while those below the legal age were included only with explicit parental consent obtained through signed declarations. Some patients may not have been entirely candid in their responses, potentially downplaying the seriousness of their answers to avoid being identified as having dental anxiety. It is also important to recognize that the dataset might not comprehensively reflect the prevalence of dental anxiety in the population, particularly as it could exclude severely phobic patients who actively avoid dental treatment. Furthermore, the questionnaires did not ask what type of treatment participants anticipated after the survey. Those in acute pain might already be psychologically vulnerable, expecting more discomfort and consequently exhibiting greater apprehension toward treatment than those anticipating routine dental check-ups.
The study's conclusion is that individuals interviewed in Albania tend to avoid visiting the dentist not due to anxiety or other psychological distress but because they underestimate the importance of oral health.
In comparison, German patients exhibit higher levels of dental anxiety and other psychological distress, possibly because they visit the dentist more frequently and consequently have had more negative experiences. Nonetheless, both Albanian and German dentists should heighten their awareness of dental anxiety to be better equipped to deal appropriately with patients experiencing increased anxiety. Further studies are needed to reveal other factors related to dental anxiety and psychological distress. The findings of the present study call for the early implementation of preventive dentistry and oral health education, especially in Albanian curricula.
Patient-centered care - evidence in the context of professional health practice
PMC ID: 10561417
Care, in the context of health, is generally associated with carrying out procedures from a biological and medical perspective, reducing individuals' health to its physical and biological aspects and shaping health professionals' training and practice. When based on fixed distinctions between the normal and the pathological, this practice compromises the implementation of approaches that recognize individuals and their health as biopsychosocially constituted ( - ) . In 1969, as the hegemony of the biomedical model came under close scrutiny, patient-centered care (PCC) began to be described as a type of care committed to "understanding the patient as a single human being", one that should be placed at the heart of care, in opposition to the principles and approaches of the biomedical model ( ) . In its report from the Committee on Quality of Health Care in America, the Institute of Medicine (2001) considered PCC an integral part of the six pillars of quality health care. According to the report, health care should be safe, effective, patient-centered, timely, efficient, and equitable. This document defines PCC as "respectful and responsive to the individual preferences, needs and values of the patient, and ensuring that patient values guide all clinical decisions" ( ) . Studies also indicate that PCC has produced important benefits for patients with regard to communication, greater satisfaction, and biomedical outcomes ( , - ) , in addition to contributing to the professional satisfaction of its providers ( - ) . Given the involvement of both the patient and the different health professionals, the benefits of a patient-centered approach reiterate the relevance of its underlying principles, actions, and procedures, as well as the need for timely assessment of its effects, toward improved evidence-based health care quality ( , , - ) .
Studies using the Patient-Practitioner Orientation Scale (PPOS) ( ) have made it possible to analyze patient-centered versus disease-centered attitudes, with scores varying by location, context, and professional education ( - , , - ) . These studies usually indicate that patient-centeredness involves aspects such as greater importance attributed to patient participation in choices and decisions about their health care, as well as the need to create a therapeutic relationship with a balance of power between patients and professionals ( ) . Studies recognize this orientation toward patient-centeredness as a determinant of these relationships and as relevant to better standards of health care quality ( - ) , including Brazilian studies ( - , ) . On the premise that health processes are historically constituted and that individuals should be conceived as the leading actors in this process, studies addressing PCC, including scoping reviews and World Health Organization (WHO) reports, contribute to overcoming technical barriers. Furthermore, these studies allow patients' voices, along with their reported needs, desires, and expectations, to guide their health care ( - ) , opposing functional and organic precepts based on a biomedical logic ( , , ) . Based on the above, this study analyzed the centrality attitudes of nursing, speech therapy, dentistry, and medical professionals in the Brazilian context, considering the caring and sharing dimensions. Previous studies conducted in Brazil ( - , ) focused on medical students, which justifies the development of a new study. Objective To analyze patient-centered attitudes in caring and sharing practices by nursing, speech therapy, dentistry, and medicine professionals.
Ethical aspects

Data collection followed the procedures for research with human beings, and the study was approved by the research ethics committee, under CAAE ( Certificado de Apresentação para Apreciação Ética - Certificate of Presentation for Ethical Consideration).

Study design

The present study was reported according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement ( ) . This is a methodological, cross-sectional study conducted from September to December 2020 throughout Brazil.

Study population and eligibility criteria

A total of 411 health professionals in the areas of nursing, speech therapy, dentistry, and medicine participated in the study. Participants older than 18 years who had completed a degree in nursing, speech therapy, dentistry, or medicine and worked in direct patient care, in public or private care institutions, were included. Sample recruitment was carried out using the snowball technique ( - ) , a non-probabilistic sampling method that uses referral chains. Data were collected individually from September to December 2020 through the online research platform SurveyMonkey Audience ( ) . There was no restriction on respondents' location, as long as they were within Brazilian territory.

Variables

After consenting to participate in the research, each health professional completed a sociodemographic questionnaire to characterize the sample. Subsequently, to analyze centrality attitudes, the translated version of the PPOS ( ) was administered, the EOMP ( ) being its Brazilian Portuguese version. This instrument presents adequate internal consistency (Cronbach's alpha = 0.605) and test-retest reliability (intraclass correlation coefficient = 0.670). The results obtained from this scale indicate whether a health professional has a more patient-centered or disease-centered orientation.
The scale comprises eighteen statements covering two patient-related dimensions: sharing and caring. These statements are rated on a six-point Likert scale, in which 1 corresponds to "completely agree" and 6 to "completely disagree". For all items, higher values represent PCC, while lower values correspond to a physician- or disease-centered orientation. The authors of the original scale divide the total result into three groups: high (score ≥ 5.00, corresponding to a patient-centered orientation), medium (4.57 < score < 5.00), and low (score ≤ 4.57, corresponding to a disease- or health professional-centered orientation). The results of the sharing and caring dimensions are obtained, respectively, from the mean values of the nine items corresponding to each domain ( ) . PCC scores were also analyzed against the sociodemographic questionnaire, considering possible confounding factors such as age, gender, area of expertise, academic level, assisted subjects, level of care, and hospitalization experience. To reduce possible sources of bias, these explanatory variables were investigated through a linear regression model, with the aim of assessing their influence on the scores obtained in the different areas.

Statistical methods

The association between independent variables and changes in PPOS questionnaire scores was assessed for the caring, sharing, and total domains. Scores for each domain were subjected to univariate analysis of variance (ANOVA). When ANOVA showed significance, pairwise comparisons were performed using Tukey's post-hoc test.
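The scoring rules described above (six-point items, nine items per domain, and the high/medium/low total-score cutoffs) can be sketched as follows. This is an illustration only: the item-to-domain assignment used here (first nine items as sharing, last nine as caring) is an assumption, since the actual PPOS key interleaves the two domains and reverse-codes items.

```python
def score_ppos(responses):
    """Score an 18-item PPOS-style questionnaire (1-6 Likert responses).

    Following the description above, higher values represent a more
    patient-centred orientation. Domain assignment is a simplifying
    assumption for illustration, not the real PPOS key.
    """
    if len(responses) != 18 or not all(1 <= r <= 6 for r in responses):
        raise ValueError("expected 18 responses on a 1-6 scale")
    sharing = sum(responses[:9]) / 9   # mean of nine 'sharing' items
    caring = sum(responses[9:]) / 9    # mean of nine 'caring' items
    total = sum(responses) / 18
    # Cutoffs from the original scale authors, as quoted above:
    if total >= 5.00:
        band = "high (patient-centred)"
    elif total > 4.57:
        band = "medium"
    else:
        band = "low (disease/professional-centred)"
    return {"sharing": round(sharing, 2), "caring": round(caring, 2),
            "total": round(total, 2), "band": band}
```

For example, a respondent answering 5 to every item would obtain a total of 5.00 and fall in the high (patient-centred) band, while uniform answers of 4 would fall in the low band.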
The variables that presented statistical significance in the univariate analysis (α = 5%) were included in a multivariate linear regression model to assess their influence on the total score obtained. All analyses were performed using the Statistical Package for the Social Sciences (SPSS) ( ) and Jamovi v.1.6 ( ) , adopting a significance level of 5%.

Results

Data from a total of 503 participants were initially collected; 92 were excluded because they did not meet the eligibility criteria, leaving 411 participants (n = 411). The population's mean age was 42 ± 10.4 years. Regarding gender, 13.7% were male and 86.3% female. Most participants were nurses, followed by speech therapists, dentists, and physicians, according to . Among the participants, 10.2% held an undergraduate degree only, 55.1% had at least one specialization in their area, 18.3% had a master's degree, 13.2% had a doctoral degree, and 3.2% had completed a postdoctoral fellowship. All data on the characteristics of the study population are available in . presents the scores and their standard deviations for each individual question of the PPOS questionnaire. As can be seen in , the full-scale scores follow this order: medicine, speech therapy, nursing, and dentistry. According to , the academic level variable differed significantly for the sharing domain and for the total scores, with lower scores for specialist professionals. The only independent variables that demonstrated statistical significance for the different domains of the PPOS questionnaire were area of expertise and academic level (p < 0.05). Specialist professionals differed statistically from those with master's and doctoral degrees (p < 0.05).
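As a concrete illustration of the analysis approach reported in Methods, the one-way ANOVA step can be computed from first principles. The group scores below are simulated, hypothetical values, not study data, and only the F statistic is shown (the post-hoc Tukey test and the multivariate model would follow in statistical software such as SPSS or Jamovi).

```python
def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of score lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: weighted squared deviations of group means.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Simulated PPOS total scores per professional area (invented for illustration):
medicine = [5.1, 4.9, 5.0, 4.8]
dentistry = [4.2, 4.4, 4.3, 4.1]
nursing = [4.6, 4.7, 4.5, 4.6]

F, df_b, df_w = one_way_anova([medicine, dentistry, nursing])
print(F, df_b, df_w)  # F is about 36.8 here, indicating the group means differ
```

A large F relative to the F distribution with (df_between, df_within) degrees of freedom is what triggers the pairwise post-hoc comparisons described above.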
Physicians presented higher mean scores, reflecting a patient-centered orientation, shared control, and focus on the person, with statistical differences for all domains (p < 0.02). Dentists presented the lowest scores, especially in the sharing domain, with statistical differences in relation to nurses, speech therapists, and physicians (p < 0.05). A higher academic level, from specialization onwards, tends to increase PCC ( (A) and 1 (B)). When the data were taken to a multivariate model, the area of expertise and academic level variables proved to be significant predictors of the variation observed in total PPOS scores (p < 0.05). Moreover, the area of expertise variable was more important in the model than the academic level variable ( ).

Discussion

This research found higher PCC scores for physicians and speech therapists than for nurses and dentists. The findings on PPOS scores thus corroborate previous studies that indicated a self-reported preference for patient-centeredness. Among them, a multicenter study conducted in Portugal, India, and Iran ( ) , with a sample composed of audiologists, demonstrated a significantly increased preference for patient-centered attitudes. The same occurred in investigations conducted in Saudi Arabia ( ) , Sri Lanka ( ) , and Spain ( ) with medical students, physicians, nurses, and patients, respectively. In the present study, despite the statistical significance found for area of expertise, the smaller sample size of the group of physicians and the use of non-probabilistic sampling decrease sample representativeness, requiring careful interpretation of these findings. In Brazil, three other studies were conducted using the same scale to assess centeredness attitudes. One of them addressed only the cultural adaptation and testing of psychometric properties for the validation of the PPOS into Brazilian Portuguese ( ) .
The others, although performed with students ( - ) , corroborate our findings, pointing to more patient-centered attitudes. In contrast to the present study, investigations carried out in Portugal ( ) , with nurses, and in Kazakhstan ( ) , with physicians and nurses, reveal professionals with disease-centered attitudes based on a biomedical model of care. In the same sense, another study conducted before and after medical residency points to a significant decline in patient-centered attitudes in male residents after just one year of residency ( ) . As for the variables that impacted the different assessment domains (caring, sharing), i.e., those with the highest predictive value for patient-centeredness, area of expertise and academic level stand out. It is important to highlight that the variation observed in total PPOS scores points to a greater importance of area of expertise relative to academic level. It is noteworthy that the medicine area presented higher scores in caring, sharing, and total, with the highest score in 9 of the 18 items that make up the scale. These results indicate a trend of change from the dominant (biomedical) paradigm towards a comprehensive view of both care and the subject. Furthermore, aspects related to information sharing and decision-making, as well as interpersonal relationships and professional-patient dialogue, were among the items most highly scored by this area. Likewise, a study conducted in China ( ) between 2019 and 2020 reported that physicians' subscale scores pointed to a preference for patient-centeredness. In that study, subscale scores were higher in the caring domain and lower in the sharing domain. Moreover, the results for PPOS scores are essentially in agreement with our findings, although our scores were higher.
Regarding the area of speech therapy, research conducted in Portugal, Iran, and Iraq corroborates the findings of this study, as these authors report PPOS scores with a self-reported preference for patient-centeredness in caring, sharing, and total. Although demonstrating a trend towards patient-centeredness in speech therapy, the content of the lower-scoring items is consistent with traditionally implemented audiology practices, focused on the application of diagnostic tests ( ) . Nursing and dentistry professionals had the lowest PPOS scores. Regarding dentistry, Madhan reports a score pointing to disease-centeredness (3.38) that gradually increases with academic level ( ) , as in our study. With regard to nurses, our research points to average levels of patient-centered attitudes, with higher scores in caring than in sharing. Consistent with other studies ( , ) , all participants scored lower on the sharing domain than on the caring domain. According to the authors, this result may occur because health professionals hold a strong belief in patients' emotional and psychosocial factors but are less supportive of sharing information and empowering patients in decision-making. Such inconsistency between the two scores may be due to the traditional dominance of the biomedical model, which still prevails in most nursing clinical practice ( ) . The central problem of the biomedical model, in this sense, lies in the fact that it is too restricted in its explanatory power. This can be an obstacle to clinical practice, since it does not answer many questions related to the subject's biopsychosocial aspects and the socioeconomic problems surrounding the disease ( ) . Regarding the academic level variable, the higher the level of education, the greater the impact on the scores, especially from specialization onwards.
Likewise, one study states that this educational progression, related to professional experience, can develop patient-centeredness in health professionals. As they advance academically, professionals seem more predisposed to sharing knowledge and information, which consequently contributes to patient empowerment regarding health-disease processes ( ) . According to the PPOS scores, the patient-centered attitudes found here are at a medium level: none of the scores reached a value greater than or equal to 5.00, and none fell below 4.30, the lowest value, presented by dentistry. It is also noteworthy that the average score per explanatory variable was never lower than 3.76, demonstrating a tendency of these health professionals to favor a focus on patient care over sharing care with patients. Patient-centeredness is an important determinant of health practices, founded on the precepts that guide the principle of comprehensiveness of care. It is thus closely related to patient outcomes, such as greater satisfaction and treatment compliance, highlighting the social and historical determinants that involve the subject. This directly implies health promotion, interprofessional and collaborative practice, professional-patient relationships, the dialogue that permeates health systems, and overall health quality and safety.

Study limitations

Given the cross-sectional design, it might be more appropriate to analyze the centrality of care through longitudinal designs conducted together with patients, in order to correlate more accurately both health professionals and patients within this model of care. Moreover, the use of non-probabilistic sampling does not eliminate the risk of confounding factors, regardless of sample size. The reduced sample size in the group of physicians limits the representativeness of the assessed sample.
Contributions to health

This study offers important notes on PCC in professional care, both nationally and internationally, as well as on the conceptions that guide and determine ways of understanding and developing health practice. It is important to emphasize that, although PCC is an important movement towards overcoming the historically dominant biomedical model, nursing and dentistry professionals still tend to provide care that is less patient-centered than that of medical and speech therapy professionals, indicating that the re-signification of care is still moving only timidly towards an effective paradigm shift.
In the present study, we report for the first time a complete description and comparison of self-reported patient-centeredness among physicians, nurses, dentists, and speech therapists. We also estimated the mean scores of these professionals for the predictor variables gender, area of expertise, academic level, assisted subjects, level of care, and hospitalization experience, using the validated PPOS for the caring, sharing, and total domains. For these domains, the only independent variables that demonstrated statistical significance were academic level and area of expertise, the latter being of greater importance. It is noteworthy that physicians and speech therapists had the highest PCC scores, followed by nurses and dentists. However, these findings must be viewed with caution due to the reduced sample representativeness. PCC constitutes an important movement in health practices, committed to overcoming the historically dominant biomedical paradigm. It also consolidates conceptions and practices capable of conceiving and prioritizing, beyond the disease, the social and historical determinants that involve the subject. Finally, we hope that this research can encourage future studies on PCC and health professionals.
Severe Vivax Malaria: Newly Recognised or Rediscovered?
Ric Price and colleagues, in a prospective study based in Timika, southern Papua in Indonesia, and Blaise Genton and colleagues, in a prospective study in the Wosera region of Papua New Guinea, report similar rates and outcomes of severe malaria due to P. vivax or P. falciparum. The two studies had different settings, and the cultural and ethnic characteristics of the patient populations were also different. Price and colleagues collected data from all patients attending the outpatient and inpatient departments of the only hospital in the region, using systematic data forms and computerised hospital records. In contrast, Genton and colleagues investigated patients presenting at two rural health facilities.

Linked Research Articles

This Research in Translation discusses the following new studies published in PLoS Medicine:

Genton B, D'Acremont V, Rare L, Baea K, Reeder JC, et al. (2008) Plasmodium vivax and mixed infections are associated with severe malaria in children: A prospective cohort study from Papua New Guinea. PLoS Med 5(6): e127. doi: 10.1371/journal.pmed.0050127. In a study carried out in Papua New Guinea, Blaise Genton and colleagues show that Plasmodium vivax is associated with severe malaria.

Tjitra E, Anstey NM, Sugiarto P, Warikar N, Kenangalem E, et al. (2008) Multidrug-resistant Plasmodium vivax associated with severe and fatal malaria: A prospective study in Papua, Indonesia. PLoS Med 5(6): e128. doi: 10.1371/journal.pmed.0050128. Ric Price and colleagues present data from southern Papua, Indonesia, suggesting that malaria resulting from infection with Plasmodium vivax is associated with substantial morbidity and mortality.

In both settings, clinical and severe disease was most common in young children, with P. vivax cases peaking at an earlier age than those of P. falciparum. In Timika, in children under five years old, around 30% of cases of either P. falciparum or P.
vivax were classed as severe, and around 80% of such cases were accompanied by severe anaemia (haemoglobin less than 5 g/dl). The remainder presented with respiratory distress, impaired consciousness (i.e., cerebral malaria), or overlapping syndromes. In the Wosera, in children under five years attending the health centre, around 9% of vivax and 12% of falciparum infections were classified as severe malaria. However, respiratory distress was defined less stringently in the Wosera (more than 40 breaths/minute from two to 60 months) than in Timika (more than 50 breaths/minute in this age group). When cough and diarrhoea (which are poorly specific for malaria) were used as exclusions in the Wosera study, severe malaria rates in children under five years fell to 7% and 4% of P. falciparum and P. vivax infections, respectively. In the Wosera, anaemia occurred in about 20% and 40% of severe cases of vivax and falciparum, respectively, while respiratory distress occurred in 60% and 40% of severe cases of vivax and falciparum, respectively. Neurologic symptoms were present in 25% of severe cases of either species. The cases of cerebral malaria due to P. vivax reported in both studies, occurring in all age groups, are intriguing, because this malaria complication has rarely been reported previously in association with P. vivax infection. In P. falciparum , cerebral malaria is primarily attributed to sequestration of infected erythrocytes in cerebral vessels. As P. vivax does not sequester, coma must arise by other means—perhaps of systemic metabolic origin—in vivax malaria. This aetiology may also underlie some cerebral malaria presentations in children. Both studies, inevitably, have limitations. First, comorbidities, including concomitant bacterial or viral infections, which could have decreased the malaria-attributable fraction of disease , were not actively investigated. 
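The sensitivity of severe-malaria rates to the case definition discussed above can be illustrated with a toy classifier built from the quoted thresholds: severe anaemia at haemoglobin below 5 g/dl, and respiratory distress at more than 50 breaths/minute (Timika) versus more than 40 (Wosera) in young children. The patient records below are invented for illustration; real classification involves further criteria.

```python
def is_severe(hb_g_dl, resp_rate, impaired_consciousness, rr_cutoff):
    """Simplified severity check: severe anaemia, respiratory distress,
    or impaired consciousness (cerebral malaria)."""
    return hb_g_dl < 5 or resp_rate > rr_cutoff or impaired_consciousness

# Hypothetical young-child records: (haemoglobin g/dl, breaths/min, impaired consciousness)
patients = [
    (4.2, 38, False),  # severe anaemia under either definition
    (7.0, 45, False),  # respiratory distress only under the laxer cutoff
    (9.0, 36, True),   # impaired consciousness under either definition
    (8.5, 30, False),  # uncomplicated under either definition
]

timika = sum(is_severe(hb, rr, cns, 50) for hb, rr, cns in patients)
wosera = sum(is_severe(hb, rr, cns, 40) for hb, rr, cns in patients)
print(timika, wosera)  # prints 2 3
```

Even with identical patients, the laxer respiratory-rate cutoff classifies more cases as severe, which is why the text re-examines the Wosera rates after excluding poorly specific symptoms such as cough and diarrhoea.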
Second, microscopy was used for parasite detection and speciation, which routinely leads to marked underestimation of mixed infections in particular. Some "severe vivax" cases may actually have been mixed infections.

Five Key Papers in the Field

Grimberg et al., 2007: The investigators show that antibodies directed against "region two" of PvDBP inhibit merozoite invasion, suggesting that vaccines targeted towards this region could reduce blood stage infection.

Barcus et al., 2007: Along with the two new papers in this issue of PLoS Medicine, Barcus and colleagues' study contributes to our understanding of the importance of P. vivax as a cause of severe clinical disease.

Panichakul et al., 2007: This paper represents a significant advance in the difficult process of developing robust in vitro culture systems for P. vivax.

Ratcliff et al., 2007: Both treatments cured falciparum and vivax malaria with high efficacy, but dihydroartemisinin–piperaquine was much better than artemether–lumefantrine at preventing reinfections with P. falciparum and relapses of P. vivax during a six-week follow-up.

Singh et al., 2006: This paper shows how the Duffy binding protein interacts with its ligand, which is also relevant to understanding the structure and interactions of other important adhesive proteins of malaria parasites.

Despite these limitations, a striking feature of the two studies is the overall comparable incidence of severe disease in P. vivax and P. falciparum infections in each setting. There were differences in the prevalence of the components of severe disease in the two locations and a notable disparity in the overall rates of severe disease. Thus, in Timika, anaemia defined severe malaria more often than in Wosera, where anaemia was especially infrequent in severe vivax. This difference may be attributable to the fact that in Timika, both P. vivax and P.
falciparum were commonly resistant to chloroquine, the first-line malaria treatment (with sulfadoxine–pyrimethamine in the case of P. falciparum ), for much of the study period. Because persistent blood infections with malaria are likely to increase anaemia, drug treatment failure could have contributed to the high anaemia rates in Timika. In Wosera, where chloroquine resistance is common in P. falciparum but rare in P. vivax , severe anaemia defined 40% of severe falciparum cases, but only 20% of severe vivax cases. Thus the Wosera data are also consistent with parasite drug resistance as a factor in anaemia. If these conjectures are correct, the recent adoption of dihydroartemisinin–piperaquine treatment in Papua may lead to a significant decrease in severe malarial anaemia. Other factors, such as host genetics, could also have influenced the differences in severe disease manifestation between study sites. It has been shown, for example, that children in Papua New Guinea with the red cell genetic polymorphism “South East Asian ovalocytosis”, a trait common in this population, are almost completely protected from cerebral malaria . In Timika, indigenous Papuans were much more likely to be severely anaemic than were immigrants. Might host genetics have contributed to the high anaemia rates reported in the Timika study? There is a view that malarial disease severity in New Guinea may be lower overall than in Africa. For example, case fatality rates for P. falciparum among hospitalised children in New Guinea ranged from 1.6% to 3.5% versus 2% to 9% in recent African studies . This difference raises the question of whether P. vivax , which is common in New Guinea but rare in Africa, may actually ameliorate the severity of P. falciparum where the two species are co-endemic. Maitland and colleagues have suggested that the earlier age peak for P. vivax disease may protect against the later-acquired P. falciparum through species-transcending immunity . 
Studies have shown that symptoms of P. vivax or P. falciparum infection are, indeed, significantly reduced by recent previous P. vivax infection (but not P. falciparum infection) . In contrast, the two new studies in PLoS Medicine report that mixed species infections had worse outcomes than infections of either species alone . If previous vivax infection does protect against severe P. falciparum infection, further investigations are needed so that policies for P. vivax control can balance reduction in severe vivax with possible loss of vivax-associated protection against severe P. falciparum infection. The findings of the two new studies should be considered from a wider biological and historical perspective, which shows that malarial disease severity is highly context dependent. For example, from the 16th through the 19th centuries in England, mortality rates associated with “marsh agues”—probably due largely to vivax malaria—gradually fell from high to negligible levels. This fall occurred either because of attenuation of P. vivax infection through adaptation of parasite and/or host, or through environmental change, such as reduced exposure to other diseases or improved nutrition . Indeed, host under-nutrition and co-infections, including HIV, helminth, and acute bacterial or viral infections, can all influence the outcomes in malaria . Access to antimalarial drugs and presence of parasite drug resistance are clearly also relevant to the clinical outcome. P. vivax infection is characterised by relapse in which dormant liver stages awaken, initiating new bouts of blood infection. Repeated malarial blood infections, from whatever source, have debilitating consequences including cachexia, spontaneous abortions, male infertility, developmental arrest, and impaired mental function . All of these debilitating clinical features have been present in populations where P. vivax is known, or presumed, to have been the main prevalent malaria. 
However, these manifestations are also context dependent—they are influenced by circumstances such as the level of endemicity, existence of co-infections, access to treatment, and presence of parasite resistance. It is interesting that the present reports of severe vivax malaria come from New Guinea, where drug-resistant P. vivax was first reported. This drug resistance itself could be a driver of severity together with the particularly high regional malaria transmission force. Were severe vivax to be found more widely, it would have significant implications for control of the infection, especially as P. vivax invariably increases relative to P. falciparum under effective transmission reduction . With calls for increased efforts to control malaria internationally, it will be important to ensure that P. vivax receives appropriate attention. We still lack reliable estimates of its global burden, and are only now starting to appreciate certain aspects of disease presentation of P. vivax malarial infection. The burden and severity of vivax in different settings requires further study. With the availability of the P. vivax genome (GenBank accession no. AAKM00000000), it will be possible to compare isolates associated with severe or uncomplicated disease. However, connecting genetic differences with pathogenic potential may not be straightforward . Better disease control (see ) might include preventive strategies (vaccines, insecticide-treated bed nets, and intermittent preventive treatment), more effective curative treatment (finding the most suitable alternative to chloroquine), and better relapse prevention (long-acting drugs such as dihydroartemisinin–piperaquine decrease early relapse compared to other artemisinin-based combination therapies ). Elimination of dormant parasites in the liver (hypnozoites) is especially difficult because available drugs can cause severe haemolysis in individuals with glucose-6-phosphate dehydrogenase (G6PD) deficiency. 
A simple point-of-care assay for G6PD deficiency, or new, safer drugs that target liver-stage parasites, would be invaluable tools for eradication of hypnozoites.

Box 1. Crucial Tools for P. vivax Control
- Effective first-line treatment. Artemisinin-based combination therapies are generally effective, but longer-acting combinations may additionally prevent early relapses.
- Highly sensitive rapid diagnostic tests. Present tests are inadequately sensitive at low parasitaemia, which is common in P. vivax.
- Preventive therapy. The role of intermittent preventive treatment in pregnancy or childhood against vivax is unknown. Small studies suggest that insecticide-treated bed nets decrease disease.
- Safe drugs for elimination of liver stages. G6PD deficiency, selected for by malaria, can result in fatal haemolysis from current agents.
- Vaccination. New vaccines are entering human trials. If efficacious, their deployment will require major planning efforts.

Vaccines against P. vivax are in development. One target is the P. vivax Duffy binding protein (PvDBP), which mediates merozoite invasion of erythrocytes; human vaccine studies using this target are likely to begin this year. Another target is the P. vivax gamete surface protein Pvs25, a candidate for a transmission-blocking vaccine. Indeed, this vaccine candidate is already in Phase I human trials. Should either candidate progress to large-scale trials, it will be critical to examine impact on both P. vivax infection (including, potentially, increased chronicity of infection and anaemia following PvDBP vaccination) and P. falciparum infection. While New Guinea is an ideal site in which to evaluate such vaccines, evaluation must also be done in low-transmission areas where most P. vivax infections are symptomatic, and where diminished symptomatology could dangerously postpone treatment. The two reports by Price et al. and Genton et al. provide information about disease burden critical to improved decision making for the public health management of P. vivax malaria.
Innovating Pediatric Emergency Care and Learning Through Interprofessional Briefing and Workplace-Based Assessment
4b8f314a-d9db-4888-8993-ca65a9f771e2
7709919
Pediatrics[mh]
Design

We used a constructivist thematic analysis (TA) approach. Thematic analysis is a pragmatic approach to qualitative analysis involving the search for themes across a data set. Although it draws on some of the techniques of grounded theory, TA remains theoretically flexible and can be adapted to suit the specific affordances of a particular study.

Setting

This study was conducted at the Department of Pediatric Emergency Medicine, University Children's Hospital, Inselspital, Bern, Switzerland. To prepare our residents for the clinical encounter, we first created a briefing checklist (Table ). It included A (airways), B (breathing), C (cardiopulmonary), D (disability), and E (environment), a standard procedure for handling critically ill patients in pediatric EDs. Once the supervisor received notification of a critically ill child, he and the resident in charge went through the checklist before the patient's arrival. Time permitting, the resident was expected to first describe their planned approach and then receive, as necessary, further direction and feedback from the supervisor. The presence of nurses was suggested but not mandated. Second, to foster direct observation during the encounter, the resident was observed by the supervisor and the nurse assigned to the patient. After the encounter, residents (self-assessment), nurses, and supervisors completed the previously validated iWBA form of Christen et al (2015) (Fig. ). The study period, April to June 2016, was chosen to overlap with a single 3-month resident rotation. Before the study started, supervisors, nurses, and residents were informed about the study and trained in workshops on using the briefing and iWBA checklists. In the same workshop, they were trained in giving constructive feedback; this training was developed based on previously described best practices. The workshop lasted half an hour and was conducted by I.S.
In addition, handouts with the information were distributed to all participants (see additional information in the supplementary appendix). During the study period, monthly reminders were sent out to all participating nurses and physicians.

Researchers

Because researchers play an active role in data collection and analysis in qualitative research, it is important to provide information about them. The study group comprised 4 researchers, 3 of them physicians. Two are pediatricians (I.S. and S.H.) who are highly engaged in medical education; I.S. is a consultant in pediatric emergency medicine and implemented this project at the pediatric ED. M.G. is a general internist and PhD education-research scientist with expertise in practice-based research. One author (A.B.) has a background in psychology and supported this study during a research internship at the Institute for Medical Education. The senior author (S.H.) also has experience in qualitative research and focus group moderation.

Subjects

Pediatric emergency medicine residents were at different training levels. Four pediatric surgery residents, 7 pediatric medicine residents, and 3 internal medicine residents were on a 3-month rotation at the ED (1 resident stayed 6 months). One pediatric emergency medicine fellow stayed 1 year at the ED. All supervisors (n = 7) and nurses (n = 32) participated in the study.

Data Collection

At the end of the 3-month study period, 4 focus group interviews were held: 2 with residents (4 and 5 participants), 1 with supervisors (5 participants), and 1 with nurses (4 participants). All participants agreed to participate in, and to the videotaping of, the focus group interviews. The focus groups were conducted separately by profession and hierarchy to foster open discussion. The sessions took place on different days, each moderated by one of the authors (S.H.), an experienced moderator of focus groups.
Consistency across group interviews was established by a questioning route. Eliciting different opinions and views, as well as enabling in-depth discussion, was of great importance; therefore, participants were asked to write down their thoughts on a topic before the group discussion started. Open-ended questions were used to initiate discussion, and the moderator asked for clarification when necessary. The moderator also encouraged all participants to contribute. Important issues elaborated in one group that were not addressed in a subsequent group were brought up for discussion in the next group. One assistant moderator (I.S.) was responsible for the video and audio recording; both moderators took comprehensive notes.

Data Analysis

The recordings were transcribed verbatim by 2 authors (A.B. and I.S.). In accordance with guidelines for TA, 3 authors (I.S., A.B., S.H.) first read all the transcripts while identifying and highlighting preliminary themes. Next, they established themes in an iterative process in which coded themes were discussed by the research team and the discussions, in turn, informed the coding process. The process continued until consensus was reached. The authors paid special attention not only to the frequency but also to the extensiveness of an expressed opinion. The focus was on representing the range of views as accurately as possible. All participants received no financial incentive and attended the focus groups voluntarily. They all signed an informed consent form allowing videotaping and audio recording. They were assured that all data would be handled confidentially and that they could not be identified from the presented material. Given the nature of the study, in this country, it was considered ethics exempt.

All residents, supervisors, and nurses who participated in the focus group interviews had experience with both iB and iWBA, although they conducted iB more often than iWBA.

Results of Focus Groups

As depicted in Figure , our results suggest that when iB was used, residents, supervisors, and nurses all felt that it had positive impacts on learning, teamwork, and patient care. Moreover, participants suggested that using iB was important in enhancing the learning impact of the iWBA.
Over time, with increasing uptake across ED teams, participants also described how cultural changes began to take place, which further strengthened the uptake of iB in particular. Although iB and iWBA overall seemed to be highly accepted and judged as feasible, there were challenges in integrating them into current practice. In the following sections, both facilitators and challenges will be described. The overall findings are presented in relation to 4 key areas: impact on learning, impact on teamwork and patient care, feasibility, and acceptance. Representative quotes from the interviews are labeled with a letter indicating the type of participant (resident [R], supervisor [S], nurse [N]), a number indicating the focus group session, and a second number indicating the person within that group.

Impact on Learning

Our analysis revealed 4 overlapping learning themes. The first theme related to the identification of knowledge gaps and the creation of teaching moments. This seemed to be true for residents at all levels of training, who described its impact on identifying areas of uncertainty and better preparing them for stressful situations. According to residents, the learning effect was greatest when they went through the briefing checklist first and afterward discussed it with the supervisor. “The checklist helps identifying certain knowledge gaps, and that supervisors are aware of a certain need for teaching considering the resident.” (R1.4) As a result, the supervisor was able to estimate, in advance, how well the resident knew the topic and how much he had to involve himself during the encounter. If time allowed, the residents could also be given the opportunity to preread around the case. “And in this case I said, look it up in the pediatric emergency handbook and we can discuss it together.” (S3.4)
For nurses, briefing could also be instructive: “The checklist not only helps identifying knowledge gaps of residents, but also where I may be uncertain.” (N4.1) Similarly, self-assessment was felt to be a key characteristic of the iWBA because it supported the development of resident judgment: both what residents recognize about themselves and what they fail to notice. “I find both aspects important. One aspect is to recognize where the resident sees a problem he wishes to discuss, the other aspect is to discuss important points which occurred especially if they concern patient safety.” (S3.5) Nurses also commented that timing was crucial: if the resident had not been given sufficient time for self-reflection before the iWBA, it was less meaningful. “For me it was a bit difficult because the resident did not self-assess herself beforehand. We had a lot of stress that day … and it may was not the optimal time to do the WBA.” (N4.4)

The second theme related to the benefits of interprofessional involvement and more holistic assessments. Residents perceived iWBA as educationally helpful because nurses were also present during patient care and had a different perspective on the resident's performance: “And I think in any case it would be helpful, if someone from each stakeholder group could be present. Someone may focus more on the medical part, or someone more on interaction among each other or empathy. Therefore, I believe that it makes more sense to do the WBA interprofessionally.” (R2.2) Residents especially appreciated how nurses could provide more insight into how effective the residents were in their interactions with patients and their families. As the feedback ensuing from this type of iWBA was more concrete, residents also felt that it led to more actionable steps in the future.
Supervisors found iWBA meaningful because feedback from nurses went directly to residents and not via supervisors: “Especially when there are new residents, often nurses come to us and say, could you please give this feedback to the resident. iWBA is therefore meaningful, because the resident receives the feedback from the nurse in the concrete situation.” (S3.4)

The third learning theme related to the synergistic effect of iB and iWBA. All stakeholder groups regarded the combination of iB and iWBA as ideal, especially for handling complex patients. In these situations, recognizing and discussing knowledge gaps before and after the clinical encounter was of even greater importance than in less complex or less ill patients. “That you can really relate the feedback on something concrete. And this is more feasible when you have discussed something beforehand…therefore I find it very helpful.” (S3.2) “I like to be structured and I think iB and afterwards iWBA is really meaningful. In this way one is prepared and you can expect a feedback afterwards. Therefore, I think that we need both.” (N4.4) Although it was not always feasible for both to occur, participants agreed that the ideal was using them in combination: “If you discuss the case beforehand, the resident should also receive WBA from the supervisor afterwards.” (R2.5)

The fourth theme related to cultural change, more specifically the growing valuing of the iB. Over the course of the innovation, nurses, supervisors, and residents all described how they began to change their practices to increasingly incorporate the iB. “Briefing was very helpful for me, because I then exactly knew what requires special attention and what I should do next.” (R1.1) “And I think in this regard we all agree that we really implement the briefing checklist in the daily routine and we use it for each patient where it makes sense to use it.” (S3.2) “Even if you know a lot and are quite experienced, it is helpful to systematically use the checklist….
And to figure out, where my uncertainties are.” (N4.1) Participants also described missing the iB when it was not used. Supervisors and nurses described how using the iB seemed to enhance resident competence and affected the plans they ultimately developed for their patients. Participants also commented that they felt it enhanced collaboration, as the nurses could now better anticipate what the physicians would be doing.

Impact on Teamwork and Patient Care

We identified 3 themes related to impact on teamwork and patient care. The first theme related to better preparation of residents. According to residents, the calmness and feeling of security created by iB had a positive impact on patient care: “And I was quite grateful for the iB in terms of patient care, that even in stressful situations and also as an experienced resident you are grateful to have a certain guideline, which gives a bit of a structure.” (R1.4) Residents also commented on how they felt prepared for eventualities that might occur, leaving them less worried about being taken by surprise. Having discussed the procedure with the supervisor before meeting the family further enabled the residents to discuss it with families. Supervisors and nurses confirmed that iB left residents better prepared and thus also more competent in front of patients. In addition, nurses remarked that residents seemed more relaxed during patient care. “Certainly it is beneficial for the patient. Because the resident is structured and well prepared, therefore the patient may be better looked after.” (S3.5) “The resident was more relaxed handling the patient because she then exactly knew what she had to look for.” (N4.2)

The second theme related to clear role allocation and a common plan. All 3 stakeholder groups appreciated the clear role allocation supported by iB, especially regarding complex patients.
Each person knew what should be done when and who was responsible for what: “And I find it very positive if everyone in the room knows what is going on. And this is through iB possible.” (R2.4) “One of the most important points in iB is the clear role allocation. Therefore I think that all involved persons should participate during iB.” (N4.4) “Together with all three stakeholder groups you really have a better plan as a team what will be done first and what will be done afterwards.” (S3.5) The residents perceived that their own increased feeling of security also had a positive impact on teamwork. Through the clear, standardized process supported by the iB, all team members knew the pretreatment plan; the situation in the team was therefore calmer and mutual trust was enabled. Interprofessional briefing was also helpful for supervisors, as it allowed them to better estimate beforehand how much involvement on their part would be required. “With briefing it is helpful for us to see, how much do I have to involve myself regarding patient care, how sure is the resident.” (S3.5) When iB took place with the supervisor only, participants felt that more work was needed to close the loop with the nursing staff.

The third theme related to building relationships and long-term team cohesion. Participants felt that, in the long term, both iB and iWBA helped build team cohesion. They brought the whole team together to discuss difficult situations, promoted open rather than covert exchange of information, and allowed the team to make progress while preventing prejudices that would not be beneficial for the whole team: “It also prevents certain prejudices or opinions which occur in a team, which may be hindering for the teamwork.
Hence you are able to say, the situation was such and therefore this and this happened.” (R1.4) According to the supervisors, “ideally, iWBA occurs together with nurses, in this case you can also look at the team performance.” (S3.1) Nurses seemed to agree with this sentiment, adding that it might even be beneficial to have debriefings with mutual feedback: “But we always speak of the residents, but I think it could also be a feedback for us and maybe also for the supervisors.” (N4.2)

Feasibility

Overall, participants felt that both iB and iWBA were in general feasible; however, iB was easier to integrate into the workflow. Whereas iB could save time by making the whole team more efficient during patient care, with iWBA it was sometimes an issue to find time amid competing subsequent activities. For it to work, supervisors stressed the importance of giving feedback as soon as possible after the emergency situation. This, however, was not always possible for all team members in the context of the busy ED. Although participants felt that the ideal iWBA took place in a quiet private room, this too was not always possible. Two other challenges to iWBA related to the supervisors' and nurses' ability to offer meaningful feedback. When patients were severely ill and the supervisors needed to take over care, they felt that they had less insight to offer meaningful feedback to the residents. “The problem was that I was not always present for the full time with the patient, because as you said we have discussed it beforehand and the patient then was not as ill. Or the patient was very ill and I had to get involved actively. And then iWBA was difficult.” (S3.4) Similarly, nurses found it difficult to focus on collecting their impressions for iWBA during stressful situations, as they were so focused on performing their own roles.
By contrast, in less ill patients, supervisors sometimes did not see the necessity to stay during the complete emergency situation after iB was conducted and as a result also had less insight into resident performance. Acceptance Overall acceptance of both the iB and iWBA was high across all participants. Residents particularly appreciated how it balanced their roles as clinicians and learners: “An advantage for me was that I found for the first time the focus was on learning that I personally learn from the case. And not that I am just there to work off the cases.” (R1.1) In addition, their supervisors seemed to agree and support this as well: “I believe that feedback is extremely important, and when it is structured like this, this is helpful. Therefore we should think of it and do it.” (S3.4) Because it also enhanced team work, patient care, and safety, it also had the support of supervisors and nurses who felt that they needed to continue to be used following the study period: “And I thought that we have been having everything quite optimized so far. But I must say that these assessments are great. You can work with these instruments and I think we will only profit from it in the future.” (N4.4) As depicted in Figure , our results suggest that when iB was used, residents, supervisors, and nurses all felt that it had positive impacts on learning, teamwork, and patient care. Moreover, participants suggested that using iB was important in enhancing the learning impact of the iWBA. Over time, with increasing uptake across ED teams, participants also described how cultural changes began to take place, which further enhanced especially the way iB was taken up. Although overall iB and iWBA seemed to be highly accepted and judged as feasible, there were challenges faced when trying to integrate iB and iWBA into current practice. In the following sections, both facilitators and challenges will be described. 
The overall findings will be presented in relation to the following 4 key areas: impact on learning, impact on teamwork and patient care, feasibility, and acceptance. Representative quotes from the interviews are provided with letters indicating which type of participant (resident [R], supervisors [S], nurse [N]), numbers indicating which focus group session they participated in and a second number indicating which person from that group. Our analysis revealed 4 overlapping learning themes. The first theme related to the identification of knowledge gaps and the creation of teaching moments. This seemed to be true for residents at all levels of training who described its impact on identifying areas of uncertainty and better preparing them for stressful situations. According to residents the learning effect was greatest, when they went through the briefing checklist first and afterward discussed it with the supervisor. “The checklist helps identifying certain knowledge gaps, and that supervisors are aware of a certain need for teaching considering the resident.” (R1.4) As a result, the supervisor was able to estimate, in advance, how well the resident knew the topic and how much he had to involve himself during the encounter. If time allowed, the residents could also be given the opportunity to preread around the case. “And in this case I said, look it up in the pediatric emergency handbook and we can discuss it together.” (S3.4). For nurses, briefing could also be instructive: “The checklist not only helps identifying knowledge gaps of residents, but also where I may be uncertain.” (N4.1) . Similarly, self-assessment was felt to be a key characteristic of the iWBA because it supported the development of resident judgment; both what they recognize about themselves and also what they fail to notice: “I find both aspects important. 
One aspect is to recognize where the resident sees a problem he wishes to discuss, the other aspect is to discuss important points which occurred especially if they concern patient safety.” (S3.5) Nurses also commented that timing was crucial. If the resident had not been given sufficient time for self-reflection before the iWBA, it was less meaningful. “For me it was a bit difficult because the resident did not self-assess herself beforehand. We had a lot of stress that day … and it may was not the optimal time to do the WBA.” (N4.4) The second theme related to the benefits of interprofessional involvement and more holistic assessments. Residents perceived iWBA as educationally helpful, because nurses were also present during patient care and had a different perspective on resident's performance: “And I think in any case it would be helpful, if someone from each stakeholder group could be present. Someone may focus more on the medical part, or someone more on interaction among each other or empathy. Therefore, I believe that it makes more sense to do the WBA interprofessionally.” (R2.2) Residents especially appreciated how nurses could provide more insight into how effective the residents were in their interactions with the patients and their families. As the feedback ensuing from this type of iWBA was more concrete, residents also felt that it led to more actionable steps in the future. Supervisors found iWBA meaningful, because feedback from nurses went directly to residents and not via supervisors: “Especially when there are new residents, often nurses come to us and say, could you please give this feedback to the resident. iWBA is therefore meaningful, because the resident receives the feedback from the nurse in the concrete situation.” (S3.4) The third learning theme related to the synergistic effect of iB and iWBA. All stakeholder groups regarded the combination of iB and iWBA as ideal especially for handling complex patients. 
In these situations, recognizing and discussing knowledge gaps before and after the clinical encounter was even of greater importance than in less complex/ill patients. “That you can really relate the feedback on something concrete. And this is more feasible when you have discussed something beforehand…therefore I find it very helpful.” (S3.2) “I like to be structured and I think iB and afterwards iWBA is really meaningful. In this way one is prepared and you can expect a feedback afterwards. Therefore, I think that we need both.” (N4.4) Although it was not always feasible for both to occur, participants agreed that the ideal was using them in combination: “If you discuss the case beforehand, the resident should also receive WBA from the supervisor afterwards.” (R2.5) The fourth theme related to cultural change. More specifically, the valuing of the iB. Over the course of the innovation, nurses, supervisors, and residents all described how they began to change their practices to increasingly incorporate the iB. “Briefing was very helpful for me, because I then exactly knew what requires special attention and what I should do next.” (R1.1) “And I think in this regard we all agree that we really implement the briefing checklist in the daily routine and we use it for each patient where it makes sense to use it.” (S3.2) “Even if you know a lot and are quite experienced, it is helpful to systematically use the checklist…. And to figure out, where my uncertainties are.” (N4.1) Participants also described missing the iB when it was not used. For supervisors and nurses, they described how using the iB seemed to also enhance resident competence and impacted on the plans they ultimately developed for their patients. Participants also commented on how they felt it enhanced collaboration as the nurses could now better anticipate what the physicians would be doing. We identified 3 themes related to impact on teamwork and patient care. 
The first theme related to better preparation of residents. According to residents, the calmness and feeling of security created by the iB had a positive impact on patient care: “And I was quite grateful for the iB in terms of patient care, that even in stressful situations and also as an experienced resident you are grateful to have a certain guideline, which gives a bit of a structure.” (R1.4) Residents also commented on feeling prepared for eventualities that might occur, which left them less worried about being caught by surprise. Having discussed the procedure with the supervisor before meeting with the family further enabled the residents to discuss these eventualities with families. Supervisors and nurses confirmed that the iB left residents better prepared and thus also more competent in their interactions with patients. In addition, nurses remarked that residents seemed more relaxed during patient care. “Certainly it is beneficial for the patient. Because the resident is structured and well prepared, therefore the patient may be better looked after.” (S3.5) “The resident was more relaxed handling the patient because she then exactly knew what she had to look for.” (N4.2) The second theme related to clear role allocation and a common plan. All 3 stakeholder groups appreciated the clear role allocation supported by iB, especially regarding complex patients. Each person knew what should be done when and who was responsible for what: “And I find it very positive if everyone in the room knows what is going on. And this is through iB possible.” (R2.4) “One of the most important points in iB is the clear role allocation. Therefore I think that all involved persons should participate during iB.” (N4.4) “Together with all three stakeholder groups you really have a better plan as a team what will be done first and what will be done afterwards.” (S3.5) The residents perceived that their own increased feeling of security also had a positive impact on teamwork.
Through the clear, standardized process supported by the iB, all team members knew the pretreatment plan; the situation in the team was therefore calmer, enabling mutual trust. Interprofessional briefing was also helpful for the supervisors, as it allowed them to better estimate beforehand how much involvement on their part would be required. “With briefing it is helpful for us to see, how much do I have to involve myself regarding patient care, how sure is the resident.” (S3.5) When iB took place with the supervisor only, participants felt that more work was needed to close the loop with nursing staff. The third theme related to building relationships and long-term team cohesion. Participants felt that in the long term, both iB and iWBA were helpful in building team cohesion. They brought the whole team together to discuss difficult situations, promoted the open rather than covert exchange of information, and allowed the team to progress while preventing prejudices that would not benefit the whole team: “It also prevents certain prejudices or opinions which occur in a team, which may be hindering for the teamwork. Hence you are able to say, the situation was such and therefore this and this happened.” (R1.4) According to the supervisors, “ideally, iWBA occurs together with nurses, in this case you can also look at the team performance.” (S3.1). Nurses seemed to agree with this sentiment, adding that it might even be beneficial to have debriefings with mutual feedback: “But we always speak of the residents, but I think it could also be a feedback for us and maybe also for the supervisors.” (N4.2) Overall, participants felt that both iB and iWBA were generally feasible; however, iB was easier to integrate into the workflow. Whereas iB could save time by making the whole team more efficient during patient care, with iWBA it was sometimes difficult to find time amid competing activities.
For it to work, supervisors stressed the importance of giving feedback as soon as possible after the emergency situation. This, however, was not always possible for all team members in the context of the busy ED. Although participants felt that the ideal iWBA took place in a quiet private room, this too was not always possible. Two other challenges to iWBA related to the supervisors' and nurses' ability to offer meaningful feedback. When patients were severely ill and the supervisors needed to take over care, they felt they were less able to offer meaningful feedback to the residents. “The problem was that I was not always present for the full time with the patient, because as you said we have discussed it beforehand and the patient then was not as ill. Or the patient was very ill and I had to get involved actively. And then iWBA was difficult.” (S3.4) Similarly, nurses also found it difficult to focus on collecting their impressions for iWBA during stressful situations, as they were focused on performing their own roles. By contrast, with less ill patients, supervisors sometimes did not see the necessity of staying for the complete emergency situation after iB was conducted and, as a result, also had less insight into resident performance. Overall acceptance of both the iB and iWBA was high across all participants. Residents particularly appreciated how it balanced their roles as clinicians and learners: “An advantage for me was that I found for the first time the focus was on learning that I personally learn from the case. And not that I am just there to work off the cases.” (R1.1) In addition, their supervisors seemed to agree and support this as well: “I believe that feedback is extremely important, and when it is structured like this, this is helpful.
Therefore we should think of it and do it.” (S3.4) Because the innovations also enhanced teamwork, patient care, and safety, they had the support of supervisors and nurses, who felt that both should continue to be used following the study period: “And I thought that we have been having everything quite optimized so far. But I must say that these assessments are great. You can work with these instruments and I think we will only profit from it in the future.” (N4.4) We set out to study the impact, feasibility, and acceptance of a combination of 2 innovations (iB and iWBA) to address 4 dominant challenges faced by pediatric EDs: (1) balancing care and learning in the ED care of critically ill children; (2) lack of feedback based on direct observation time; (3) inadequate involvement of nurses in resident learning and feedback; and (4) the post hoc nature of feedback. Our results suggest that when iB and iWBA were effectively combined in practice, all 4 problems were addressed. Balancing care and learning can be extremely challenging in the pediatric ED context. In addition, during times when patients present with high acuity, patient care always trumps learning. This may therefore explain why our innovation, especially the iB, seemed to be well accepted by residents, supervisors, and nurses alike. According to the participants, the iB improved care and safety and saved time by enhancing teamwork. It could also serve to reduce stress for all members of the team by enabling them to establish clear role allocation, a common plan, and preparation of medication. Because residents were better prepared for the situation, it also seemed to increase their involvement in these cases. Although briefing in simulation and surgical contexts shares many features from a teamwork-enhancement perspective, this is one of the first studies exploring its impact on enhancing resident involvement.
It is also noteworthy that over time, participants described using the iB for noncritically ill patients, suggesting the beginnings of a cultural change in the way work is carried out in the ED and supporting our claims around the importance of the iB in enhancing clinical work and learning. In its original formulation, Miflin et al (1997) suggested briefing as a strategy for enhancing medical student involvement in clinical work. Emergency departments are also challenging because of the diversity of residents' backgrounds and levels of experience. In our study, not only inexperienced but also experienced residents and nurses described benefiting from iB. What seemed to be at the heart of its success was the focus on residents' self-identification of areas of uncertainty and needs, which could be addressed before a clinical situation. Somewhat surprisingly, nurses also flagged personal benefits to being involved in the iB, as they also identified areas of uncertainty and learned from these. Unlike the iB, the iWBA required teams to spend additional time. As a result, it was not as frequently used. However, participants (especially residents) believed that it offered additional long-term learning benefits. Moreover, when iB was performed before an iWBA, the iWBA was felt to be an even better learning experience. In part, this was due to how much easier the iWBA was to deliver when a pre-encounter briefing had taken place and the iB checklist had been reviewed. Other features attributed to successful iWBAs included focusing on only a few feedback points and the presence of the nurses for both the iB and iWBA, allowing feedback topics drawn from a broader set of perspectives. Although nurses increasingly play an important role in the education of physicians, only 1 study, to our knowledge, conducted iWBA. In our study, nurses were supportive of both and further suggested the inclusion of mutual feedback to improve education, clinical care, and teamwork.
Interprofessional briefing was described as feasible and effective in a busy ED. However, consistently conducting both iB and iWBA was described as organizationally challenging. In particular, both time and available space were limiting factors for the iWBA. In addition, if the patient was severely ill and required the supervisor to take a very active role, their attention could be split in such a way as to limit their awareness of what the resident was doing, thus limiting their later feedback. However, as shown by Jarris et al (2011), resident-initiated feedback can lead to improved compliance with feedback giving and satisfaction with its receipt. This study has several strengths and limitations. In terms of strengths, we would cite our data collection and analysis methods, which involved enhancing the trustworthiness and credibility of the data through both participant and investigator triangulation. We would also cite the innovation itself, which was theoretically grounded but novel in its application; to our knowledge, our innovation represents a unique application of briefing (iB) and formative assessment (iWBA). Our major limitations also relate to our methods and the innovation itself. Our study was done in a relatively small program, which limited the number of potential participants and the type of data we could collect. Program size may also have contributed to our success; in a larger program, it may be more difficult to engender the initial buy-in and investment of time from the multiple stakeholders necessary for ensuring that everyone knows how to participate in the iB and iWBA. Future work is also needed in relation to the uptake and impact of the innovation. Ideally, this research should capture more longitudinal data and data sources beyond participant perceptions.
This study explored residents', supervisors', and nurses' perceptions of the impact, feasibility, and acceptance of an innovation involving a combination of iB and iWBA in the pediatric ED setting. When iB was used correctly, all felt that it had a positive impact not only on learning but also on teamwork and patient care. Interprofessional briefing was felt to be more feasible alone than in combination with iWBA. However, when iB was performed before an iWBA, the iWBA was felt to be a better learning experience. Over time, with increasing uptake across ED teams, the introduction of both shaped a cultural change, which served to further enhance their enactment. Future research should be longitudinal, capture other sources of data related to impact, and, if possible, be multi-institutional.
Eyes and movement differences in unconscious state during microscopic procedures
Microsurgery is increasingly being adopted across various surgical fields. However, the acquisition and transfer of microsurgical skills largely depend on experience, and surgeons' opportunities to acquire these skills are limited compared with those of athletes. Appropriate education and training are essential for acquiring skills in microsurgery, as with other surgical procedures. Microsurgical techniques, which are widely used to reattach amputated limbs and various skin flaps, involve vascular anastomoses and nerve sutures in the extremities; these are often less than 1 mm in diameter and require the use of an operating microscope. In general, learning surgical techniques involves apprenticeship. However, in cases of finger amputation and severe limb trauma, where microsurgery is essential, the target tissue can be soft and deformed, making it difficult to quantify. Additionally, the timing for these surgeries is often urgent, precluding the use of the various simulation technologies and intraoperative navigation that are becoming more common in other surgical training contexts. Recently, various simulation and augmented reality (AR) technologies have been enhancing the efficiency of learning common surgical procedures. In Europe, microscopes that incorporate these technologies are available, although they are not yet widespread. In daily activities such as walking or reaching, people respond unconsciously to slight environmental changes or uncertainties. A seminal experiment highlighting this unconscious adaptation involves the treadmill walking test in decerebrated cats. These animals were shown to maintain locomotion on a treadmill and adjust their gait patterns according to the treadmill's speed, illustrating an implicit adaptive response to environmental dynamics. Furthermore, the anticipatory nature of motion control plays a pivotal role in unconscious adaptation.
Experimental and theoretical evidence suggests that human movements are fine-tuned to align with environmental conditions prior to the conscious recognition of such changes. This principle of unconscious movement adaptation has been applied to the control of arm prostheses, enhancing their effective use in real-world settings. Surgical experience involves the unconscious processing of surgical procedures, and surgeons must be acutely aware of their fingertips, the tips of their instruments, and even the surgical site itself. “Responsiveness” to unquantifiable uncertainties, which involves unconscious processing, is essential. This research was initiated based on the premise that it might be possible to visualize this phenomenon. The gaze analysis technique used in this study is non-invasive. It has started to be applied across various fields because it can reflect subtle unconscious changes in brain activity through the analysis of eye movement patterns, their distribution, and pupil diameter changes. There are reports suggesting that gaze analysis is beneficial for understanding decision-making processes in the manufacturing and distribution sectors. In the medical field, numerous studies have utilized this method in laparoscopic surgery, indicating its reliability as a source of quantitative and objective data. This method may also enhance surgical training to improve performance. There is a common saying that “the eyes are as expressive as the mouth,” suggesting that by observing eye movements, one can understand latent cognitive behaviors not expressed in words. Eye movements are indicative of brain function, and it has been reported that saccade movements can help detect diseases. As for electromyography, the use of surface electromyograms allows non-invasive and continuous data collection. The experimental results demonstrated a positive correlation between manipulation performance and maneuvering experience.
Additionally, this method helps to depict unconscious cognitive behavioral models. Groups E and N required average suture times of 15 min and 17 min, respectively. Four participants in group N failed to complete six stitches within 20 min. Furthermore, the mean times for a single-stitch suture were 2.5 min for group E and 4.1 min for group N; this difference was not statistically significant (P > 0.05). A heat map of the distribution of eye gazing is presented in Fig. . The suture work area of group E on the 3D display can be seen as a concentrated red area in the gaze heatmap (see the right column in Fig. ). The gaze of group N (left column, Fig. ) was broadly distributed, possibly because the participants spent time looking at their hands. A comparison of this heatmap with color-weighted averages revealed that the gaze area of group N was predominantly larger than that of group E (p < 0.01). Graphs of the pupil diameter are shown in Fig. . The pupil diameter transition for each phase in groups E and N showed a line that was reproducible over several phases. The pupil diameter increased during the advancing and tying phases of the needle, especially in group E. We recorded the suture movements individually on video and followed the trajectory of the gaze. First, we found that the gaze of group E remained concentrated around the suture area and close to the knot throughout the series of operations, without showing significant movement (Fig. ). We then examined the movement of the forceps in the microscopic video. Group E did not waste much time (see lower panel of Fig. ). Furthermore, the left hand barely moved, while the right hand moved to tie the thread. The tying thread of group N was located farther away; hence, group N showed a common reaching movement to pick up the tying thread. The trajectory of the right hand was not stable, and the left hand pulling the thread also moved significantly.
We marked red dots as the centers of gravity of the gaze, and we also marked each hand movement for one suture. Furthermore, we calculated the distance between each point and the center of gravity, and we determined the variance. The standard deviations of group E were smaller than those of group N, especially for gaze. The SSI was close to one in the expert group, and the same muscle synergy was used to generate the movement across the six sutures. On the other hand, beginners had an SSI close to zero and generated similar movements using completely different muscle synergies each time. The relationship between eye movement and proficiency has been reported in various fields. The time required for training in safety quality checks on corporate manufacturing lines and construction sites can be significantly reduced by providing feedback on eye movement results. In the field of medicine, compared to novices, skilled operators perform tasks during head and neck 3D endoscopic surgery within a narrower range of starting times and places, possibly because the conversion of interpretation from 2D to 3D is empirically unnecessary for skilled operators. Several studies on gaze analysis in the field of laparoscopy have also been reported, with some reporting that evaluations using gaze analysis are more sensitive to expertise. Variations in pupil diameter and workload have also been reported. Our experiment showed that both groups E and N had increased pupil diameters in the needle and ligature tasks, indicating the involvement of stress in these tasks. However, stress exists even when the subjects concentrate on the task. The smaller fluctuations in the pupil diameter of group N compared with group E during the task suggest that group N constantly felt stress throughout the test and could not concentrate when necessary, whereas group E had a moderate level of stress and could concentrate sufficiently.
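The dispersion measure described above (each gaze or hand point's distance from the center of gravity, summarized by its variance or standard deviation) can be sketched in a few lines of plain Python. This is an illustrative reconstruction under stated assumptions, not the authors' analysis code, and the sample coordinates are hypothetical:

```python
import math

def dispersion(points):
    """Center of gravity of 2D points, plus the (population) standard
    deviation of each point's distance from that center."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    mean = sum(dists) / n
    var = sum((d - mean) ** 2 for d in dists) / n
    return (cx, cy), math.sqrt(var)

# Hypothetical gaze samples: a tight "expert-like" cluster vs. a spread one
expert = [(100, 100), (101, 99), (99, 101), (100, 102)]
novice = [(100, 100), (150, 60), (60, 160), (130, 190)]
(_, sd_expert), (_, sd_novice) = dispersion(expert), dispersion(novice)
assert sd_expert < sd_novice  # a smaller SD indicates a more concentrated gaze
```

A smaller standard deviation of these distances corresponds to the tighter clustering of gaze and forceps positions reported for group E.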
Several studies have examined the relationship between hand motions and skills in laparoscopy and arthroscopy. Yamamoto et al. used a 3D position tracker to acquire the arthroscopic motion of skilled and unskilled elbow arthroscopists and found that the amount of rotation in the y-axis direction was predominantly greater in skilled operators than in unskilled operators. In addition, Ignacio et al. showed that the time, distance, and depth of movement of the manipulating instruments are valid measures for assessing proficiency in laparoscopic surgery. Furthermore, Grober et al. studied hand motion distance using hand-mounted sensors during microsurgical techniques; they reported that this distance measure was objective and sensitive. However, the hand barely moves in microsurgery; hence, performance often depends on the movement of the fingertips. In this study, we manually extracted and analyzed the coordinates from recorded gaze videos owing to the lack of sensors or image recognition software capable of capturing the minute movements of the fingertips and the tips of the forceps. The predominantly larger gaze distribution area of group N suggests a more focused approach in group E. Changes in pupillary diameter showed the presence of a workload during needle movement and ligation. The work stress of group E increased gradually, whereas group N remained stressed throughout the task. The forceps and eye movements during ligature demonstrated that group E showed fewer hand and eye movements, centered on the knot where the gaze was focused. We believe that group E's performance included experience-based anticipatory movements. The long movement distance of the forceps in group N may be a point to consider during training, where trainees can be instructed to be mindful of making a compact movement in the next action of picking up the thread. In the surface EMG analysis, the SSI results showed that experts, with an SSI close to one, generate similar movements using the same muscle synergy every time.
As experts are already aware of the needle movement during the suture procedure “advance,” they can reduce the dimensionality of the differences in situation and environment and can respond to them “unconsciously.” Novices use completely different muscle synergies to generate similar movements each time; hence, it is thought that beginners may not yet have learned how to move needles and that they respond to differences in situations and environments through “conscious” thinking.

Participants

The procedures of this study were carried out in accordance with approved guidelines. This study was approved by the Clinical Research Board (CRB) of the Nagoya University Hospital (2022-0078). Written informed consent was obtained from all subjects, and all experiments were conducted in accordance with the Declaration of Helsinki. Nine hand surgeons and six orthopedic surgery residents comprised the expert group E and the novice group N, respectively. The characteristics of each group are presented in Table . Six senior residents lacked experience in microsurgery; however, one had assisted in microsurgical procedures.

Equipment and procedures

The entire process was performed in a simulated operating room set up in the laboratory with appropriate voltage conditions, lighting, and other operating environment requirements, in accordance with the equipment standards and manufacturer's recommendations. We employed an external video microscope (MM51/YOH, Mitaka Kohki Co., Ltd.) for heads-up surgery along with a Tobii Glass 2 eye tracker (Tobii Technology, Inc.) to record eye gaze and pupil diameter. Postures were captured from three directions using three video cameras (HC-WX1M/WZX1M, Panasonic).
Surface electromyogram measurements were taken using FreeEGM1000 (BTS Bioengineering), with electrodes attached to the following muscles: Hand: first dorsal interosseous; Arms: flexor carpi, extensor carpi, biceps, triceps, anterior deltoid, posterior deltoid; Trunk: upper trapezius, pectoralis major, latissimus dorsi. A total of 20 channels of electrodes were attached bilaterally. A 1.5-mm diameter silicone artificial blood vessel (Astec Corporation, GF15U) was utilized for the suturing techniques. Participants donned a wearable eye-tracking device (Tobii Pro Glass 2) equipped with polarized lenses for the 3D display. Under the video microscope, participants were instructed to suture the severed end of the silicone artificial blood vessel with six stitches using a 10-0 nylon thread (ETHICON Corporation, ETHILON 10-0 circular needle, 3 mm, 3/8 c) within 20 min. The average time required to suture one stitch was calculated by dividing the total suture time by the number of sutures completed.

Analysis

We utilized Tobii Labo analysis software in conjunction with the Tobii Pro Glass 2 to analyze eye gaze distribution, record changes in pupil diameter, and monitor hand movements beyond the line of sight, comparing groups E and N. The suturing process was segmented into five distinct steps: pick-up, advance, pull, tie, and cut. For each step, the pupil diameter, eye movement, and forceps tip coordinates were recorded and subsequently analyzed. In the surface EMG measurements, muscle synergy was calculated at each “advance” stage, and the muscle synergy stability index (SSI) was employed to evaluate the proficiency of experts and novices. The SSI is an index that assesses whether movements are produced using stable muscle synergies. It ranges from 0 to 1, where values closer to 1 indicate consistent use of the same muscle synergies to generate movements, while values closer to 0 denote less consistency.
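A rough sketch of the SSI idea follows. Since the exact formula is not reproduced in the text, this assumes the index is the mean pairwise cosine similarity between the synergy vectors extracted from each of the six trials; the vectors themselves would come from non-negative matrix factorization of the segmented EMG, but here they are supplied directly and the example values are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def ssi(synergies):
    """Mean pairwise cosine similarity between per-trial synergy vectors.
    For non-negative activations this lies in [0, 1]: values near 1 mean
    the same synergy recurs across trials; values near 0 mean it does not.
    (Assumed formulation; the study's exact computation may differ.)"""
    n = len(synergies)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cosine(synergies[i], synergies[j]) for i, j in pairs) / len(pairs)

stable = [[1.0, 0.2, 0.0]] * 6                    # expert-like: same synergy each trial
unstable = [[1, 0, 0], [0, 1, 0], [0, 0, 1],
            [1, 0, 0], [0, 1, 0], [0, 0, 1]]      # novice-like: changing synergies
assert ssi(stable) > ssi(unstable)
```

Under this formulation, repeating an identical synergy across all six “advance” trials yields an SSI near 1, whereas switching between unrelated synergies drives it toward 0.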
This metric was determined by segmenting muscle activity during the six “advance” tasks, analyzing it with non-negative matrix factorization, assessing the synergy correlation, and computing the SSI. The results of this study showed differences in gaze distribution, pupil diameter, movement of the forceps, and body control during suturing techniques under a microscope between experienced and novice users. These differences were due to unconscious motions. Based on the results of the experiments conducted, we are currently examining which scores improve with repeated practice. In the future, the results of the study must be validated in terms of education and the transfer of skills.
The setting of practice times for the most stressful tasks and the items that can be improved by these practices should also be investigated based on the trends observed in the present study.
Use of Augmented Reality for Training Assistance in Laparoscopic Surgery: Scoping Literature Review
Background

Training in laparoscopic surgery is a long and demanding process that requires extensive theoretical knowledge, along with technical and nontechnical skills. Learning models in surgical training have rapidly evolved from traditional approaches based on the educational philosophy of “see one, do one, teach one” to more sophisticated surgical simulators that aim to increase the number of training sessions, thus dramatically enhancing the skills of medical professionals and the safety of patients. Among the formative strategies, there are some based on animal models and cadavers. However, due to the economic and ethical issues involved in some of these solutions, surgical training has rapidly shifted toward the use of simulation-based systems, mainly for the early formative stages. The emergence of immersive digital technologies such as virtual reality, augmented reality (AR), and mixed reality (MR) has led to a paradigm shift in the field of surgical training. Virtual reality allows users to be immersed in a fully digital environment replacing the physical world, whereas AR superimposes virtual elements onto the real world, enhancing or augmenting the user's environment. The latest evolution of these immersive technologies is MR, which merges virtual and real objects, enabling realistic interactions and coexistence. The use of these technologies is becoming an important part of the training process in laparoscopic surgery, enhancing the training experience and content without putting the patient at risk. This technology can generate customized 3D models of each patient on which to train. Similarly, during the training process, it allows for the inclusion of enriched information, such as holographic images or 3D objects, to guide the user during the training process and facilitate a more precise alignment between virtual information and physical objects in a simulator, an experimental model, or a patient.
The use of AR technologies is constantly evolving and being integrated into the field of minimally invasive surgery. However, the current level of development of this technology as a tool to assist in the training process in laparoscopic surgery, as well as the available solutions (both commercial and prototypes) and the training functionalities they offer, is not precisely known. Objectives This scoping review intended to analyze the current AR and MR solutions used to assist in laparoscopic surgery training. Similarly, we reviewed the types of surgical simulators that make use of this technology, the training assistance information offered, and the main training tasks and procedures in which they have been used. This gave us a more detailed view of the current state of AR in the field of laparoscopic surgery training. Training in laparoscopic surgery is a long and demanding process that requires extensive theoretical knowledge, along with technical and nontechnical skills. Learning models in surgical training have rapidly evolved from traditional approaches based on the educational philosophy of “ see one, do one, teach one ” to more sophisticated surgical simulators that aim to increase the number of training sessions, thus dramatically enhancing the skills of medical professionals and the safety of patients . Among the formative strategies, there are some based on animal models and cadavers . However, due to the economic and ethical issues involved in some of these solutions, surgical training has rapidly shifted toward the use of simulation-based systems, mainly for the early formative stages . The emergence of immersive digital technologies such as virtual reality, augmented reality (AR), and mixed reality (MR) has led to a paradigm shift in the field of surgical training. 
Search Strategy

A structured bibliographical search was conducted in the Scopus, IEEE Xplore, PubMed, and ACM databases. The same search query, adapted to each database’s query syntax, was used.
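As an illustrative sketch only, the single conceptual query can be composed programmatically for each database syntax. The concept groups below mirror the search terms described in this review; the field tag (e.g., a Scopus-style TITLE-ABS-KEY restriction) is an assumption for illustration, not necessarily the exact export string used by the authors:

```python
# Sketch: compose the review's conceptual search query for different
# database syntaxes. The field tag is an illustrative assumption.

CONCEPT_GROUPS = [
    ['"laparoscopic"'],
    ['"augmented reality"', '"mixed reality"', '"extended reality"'],
    ['"training"', '"practice"'],
]

def build_query(field_tag: str = "") -> str:
    """Join OR-groups of synonyms with AND, optionally wrapped in a field tag."""
    groups = ["(" + " OR ".join(g) + ")" for g in CONCEPT_GROUPS]
    query = " AND ".join(groups)
    return f"{field_tag}({query})" if field_tag else query

print(build_query())                  # generic boolean form
print(build_query("TITLE-ABS-KEY"))  # Scopus-style field restriction
```

Adapting one canonical query per database, rather than rewriting it by hand each time, keeps the search reproducible across Scopus, IEEE Xplore, PubMed, and ACM.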
We used a set of keywords related to the topic of this scoping review to identify relevant studies published up to January 1, 2024. In general terms, the initial search encompassed all articles that included the following terms in the title or abstract or as keywords: “laparoscopic”; AND “augmented reality” OR “mixed reality” OR “extended reality”; AND “training” OR “practice.”

Eligibility Criteria

A series of inclusion and exclusion criteria were applied to select the articles that best fit our objectives. In general, we discarded articles on studies that did not make use of AR technologies or did not provide information on the inclusion of such technologies in laparoscopic surgery training, articles that did not focus on laparoscopy or laparoscopic skill training, articles written in a language other than English, reviews, and articles that were not accessible. In addition, regarding quality, studies that did not provide relevant information on the topics studied were excluded.

Selection of Articles

Results were screened by 2 independent reviewers. In case of discrepancies between reviewers regarding the inclusion or noninclusion of an article, a third reviewer was consulted. The PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines were followed to carry out this scoping review, including the completion of the corresponding checklist.

Research Questions

The aim of this work was to analyze the studies published in the scientific literature on the use of AR in laparoscopic surgery training and to analyze the aspects relevant to the development of new investigations in this field. Therefore, this scoping review was designed to answer the following research questions (RQs):

RQ 1: What types of devices and feedback are used for AR-based laparoscopic surgery training?
RQ 2: What type of sensorization is used for AR-based laparoscopic surgery training?
RQ 3: What type of simulator and setup is used for AR-based laparoscopic surgery training?
RQ 4: What type of evaluation is used to assess skill acquisition in AR-based laparoscopic surgery training?
RQ 5: What types of surgical tasks or procedures are used in AR-based laparoscopic surgery training?

To answer these questions, we classified every article along different dimensions, each related to one of the RQs.

Data Collection and Processing

For each selected study, information on the following aspects was recorded: (1) year of publication, (2) summary of the complete study, (3) modality of teaching (tele-mentoring or conventional), (4) type of device used for AR, (5) type of information provided to the learner, (6) type of sensorization, (7) type of AR simulator (commercial or prototype, and which one if commercial), (8) type of evaluation (objective or subjective), (9) training tasks or procedures performed, and (10) specialty of laparoscopic surgery. As with the selection of articles, the data collected for each article were screened by 2 independent reviewers; in case of discrepancies, a third reviewer was consulted.
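As a minimal sketch only (the field names are ours, not the authors’ codebook), the ten recorded aspects map naturally onto a structured per-study record:

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative record for the ten aspects collected per study.
# Field names and category labels are assumptions for this sketch.

@dataclass
class StudyRecord:
    year: int                      # (1) year of publication
    summary: str                   # (2) summary of the complete study
    teaching_modality: str         # (3) "tele-mentoring" or "conventional"
    ar_device: str                 # (4) e.g., "OST" or "ARV"
    learner_information: str       # (5) feedback given to the trainee
    sensorization: str             # (6) e.g., "optic", "sensor based", "force"
    simulator: str                 # (7) "commercial" or "prototype"
    simulator_name: Optional[str]  #     commercial product name, if any
    evaluation: str                # (8) "objective", "subjective", or "both"
    tasks: List[str]               # (9) training tasks or procedures
    specialty: str                 # (10) laparoscopic surgery specialty

record = StudyRecord(
    year=2020, summary="AR-guided suturing study",
    teaching_modality="conventional", ar_device="ARV",
    learner_information="execution-related", sensorization="optic",
    simulator="prototype", simulator_name=None, evaluation="objective",
    tasks=["suturing"], specialty="general surgery",
)
```

A uniform record like this makes it straightforward for two reviewers to compare their extractions field by field when resolving discrepancies.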
Overview

A total of 246 records were obtained (n=28, 11.4% in IEEE Xplore; n=121, 49.2% in Scopus; n=16, 6.5% in ACM; and n=81, 32.9% in PubMed). After removing duplicate records, 69.9% (172/246) remained. Once the exclusion criteria were applied, 44.2% (76/172) of the articles were left. Of these 76 articles, 25 (33%) that did not meet the quality criteria were eliminated, so that 51 (67%) articles were finally included in this review; the flow diagram illustrates the complete workflow. The papers included were published between 2002 and 2023; most of them (37/51, 73%) were scientific journal articles, and the rest (14/51, 27%) were papers presented at scientific conferences. The evolution over time of the papers included in this scoping review was heterogeneous, although it seems to be related to the launch of new AR devices on the market: the appearance of the first AR applications in games and smartphones in 2009, the launch of the Google Glass device in 2015, and the launch of the second version of Microsoft’s HoloLens in 2019 stand out. In the following sections, we analyze each of the RQs defined to provide a comprehensive analysis of the current application of AR technology in laparoscopic surgery training.
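The selection-funnel percentages reported in the overview can be verified with simple arithmetic (the stage names below are ours):

```python
# Verify the screening-funnel percentages reported in the overview.
records = 246        # initial records across the four databases
deduplicated = 172   # after removing duplicate records
screened_in = 76     # after applying the exclusion criteria
included = 51        # after removing 25 studies failing quality criteria

def pct(part: int, whole: int) -> float:
    """Percentage of `part` in `whole`, rounded to one decimal."""
    return round(100 * part / whole, 1)

print(pct(deduplicated, records))      # 69.9
print(pct(screened_in, deduplicated))  # 44.2
print(pct(included, screened_in))      # 67.1 (reported rounded as 67%)
```

Note that each percentage uses the previous stage, not the original 246 records, as its denominator.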
RQ 1: What Types of Devices and Feedback Are Used for AR-Based Laparoscopic Surgery Training?

Our first RQ concerned the AR devices used and the information provided. This analysis allowed us to see the evolution of the technology used and how these systems provide relevant information to enhance laparoscopic surgery training. Regarding the devices, we classified them into 2 main types: optical see-through (OST) and AR video (ARV)–based devices. OST refers to devices capable of rendering information on a medium while allowing the real world to be seen through it. Devices such as HoloLens fall into this category as they allow information to be rendered on the screen while superimposing this information on the user’s visual perception of the real world. ARV refers to the use of conventional video devices such as monitors, tablets, or other types of screens on which reality information (such as a real-time video) is displayed and overlaid with expanded information, creating an AR system. OST systems usually include voice commands, eye tracking, and interaction with real-world objects, whereas ARV systems reduce interaction and rely mainly on the display of additional information on the screen. Some articles (2/51, 4%) did not specify the device used and so were classified as unspecified devices. The feedback provided by the system is closely related to the devices being used as it conditions the user-system interaction.
We divided the articles by the information given to the user, differentiating among (1) execution-related information, which refers to any type of information related to the actions being performed by the trainee, such as the force exerted or the time expended; (2) mentor guidance, which refers to systems in which the AR technology enhances the way the mentor provides guidance to the trainee, for instance, using virtual pointers or annotations; (3) educational information, which could be general information about the training activity being performed, such as images of the organs and structures involved in the surgical activity; and (4) patient-oriented information, which corresponds to personalized information about the patient to be operated on. In this last case, the information was divided into medical imaging (such as preoperative imaging), 3D segmentations created from preoperative imaging, or other personalized information. Other information comprised information that did not fit into any other classification, and unspecified information comprised articles that did not indicate the type of information provided to the user of the AR device.

Most of the studies analyzed (43/51, 84%) used ARV devices, and only 12% (6/51) used OST devices. A total of 4% (2/51) of the articles did not specify which kind of device was used. Regarding the ARV studies, although the largest group (11/43, 26%) did not specify what information they provided to the user, among those that did, execution-related information was highly common (9/43, 21%), such as in the study by Zahiri et al, which showed the time remaining for the trainee to complete the current task in the form of either a numerical timer or a progress bar. Regarding mentor guidance (6/43, 14%), Andersen et al developed a tool for tele-mentoring, allowing the mentor to use different-colored dots for annotation.
A common type of information was patient-oriented information (8/43, 19%). Pessaux et al conducted a 3D segmentation from preoperative images to generate a 3D model of the patient’s body, which was later superimposed over the real patient’s body during a duodenopancreatectomy. Regarding medical imaging, Koehl et al developed a system that allowed trainees to visualize and manipulate preoperative images. Arts et al provided trainees with educational videos with instructions for the ongoing task. Regarding other information, Shao et al used prerecorded ultrasound videos that were played depending on the position of the laparoscopic tools, so that their system could be used for training in ultrasound-guided laparoscopic interventions.

Of the 6 studies using OST devices, 3 (50%) used HoloLens version 1, and 3 (50%) used HoloLens version 2. In addition, the year of publication of those articles was 2020, so OST could be considered a technology of recent application in laparoscopic training. Although far fewer studies used OST devices, they mainly provided patient-oriented information (4/6, 67%). For instance, Zorzal et al gave users the possibility to access the patient’s preoperative magnetic resonance image. Others (2/6, 33%) provided educational information, such as the study by Simone et al, which used annotations on the patient’s body and a mentor’s voice instructions.

RQ 2: What Type of Sensorization Is Used for AR-Based Laparoscopic Surgery Training?

To assist in the training activity, it is essential to know in real time the location of the different elements involved in the training environment, such as the training scenario, surgical instruments, patient, and posture of the surgeon. By monitoring these elements, it is possible to adapt the training process to the trainee’s educational needs. In addition, as this task is manipulative, it is relevant to consider haptic stimuli.
Therefore, in this section, we analyze which aspects related to the use of sensors (sensorization) appear in the scientific literature. In this regard, we classified systems according to 2 aspects: the element being measured or sensorized and the technology used.

As pointed out above, different elements in the training environment can be tracked to know their location, motion, or behavior. We distinguished user tracking, that is, monitoring the surgeon and elements related to the surgeon. In this case, we organized the tracked elements into 3 categories: body, for systems that recorded the surgeon’s body motion for kinematic analysis; eye, for the analysis of the surgeon’s gaze and where they focus their attention; and instrument, for the tracking of the surgical instruments. In addition to the user, the target can be tracked (target tracking), that is, the patient or the simulated model of the patient. This type of analysis was usually conducted using computer vision techniques. Here, body refers to the patient’s body and organs, whereas instrument refers to other objects or markers that can be placed inside the patient and can be tracked or recognized, such as the needle used in suturing tasks.

Not only the object being tracked is important but also how it was tracked, that is, the technology and techniques used to achieve this tracking. For this reason, we divided the technology into 3 categories: sensor based, optic, and force. Sensor-based technology refers to systems that used a set of sensors specifically located to track and measure certain interactions in those locations; an example is the use of electromagnetic trackers for recording laparoscopic tool motion. Optic technologies refer to the identification and tracking of objects using artificial vision techniques.
Felinska et al used iSurgeon, an AR tool that allowed instructors to project their hand gestures in real time onto the laparoscopic screen, enhancing mentoring guidance during laparoscopic training sessions. Force refers to systems that used devices capable of measuring the interaction force of the laparoscopic instrument.

Most of the studies (46/51, 90%) used some type of sensorization of the training environment in which the surgical simulation was performed. In most AR systems, the user sees a real world augmented only with visual information and has no means to interact with virtual objects. To manipulate virtual objects, another kind of information is needed: haptic information. Therefore, together with the display of visual information recorded by tracking devices, the availability of augmented haptic information seems relevant in laparoscopic surgery training, where manipulative tasks are fundamental. In this sense, 8% (4/51) of the studies included some type of haptic feedback in the laparoscopic surgery training environment. For example, Ivanova et al used a virtual 3D model of an organ and a sensorized physical simulator, providing force feedback on the hardness of the tissue being touched and the force exerted.

Regarding the element being measured or sensorized (tracked), the most common approach was tracking of the surgeon (user tracking). In this case, the most frequently tracked elements were the instruments used by the surgeon (32/51, 63%). Pagador et al used electromagnetic trackers to record the instrument’s position during the training activity, whereas Lahanas et al used fiducial markers attached to the tip of the instrument for tracking instrument position and rotation. Only a few studies (4/51, 8%) made use of surgeon posture tracking. For example, Cizmic et al tracked hand position to project the hand movements of the trainer onto the laparoscopic image to enhance communication between trainer and trainee.
The use of eye tracking was present in only 4% (2/51) of the studies analyzed. This technique was used by some authors to interact with sets of images and other elements of a user interface by means of gaze or to assess performance and quality of communication.

Target tracking was the next most used sensorization in laparoscopic surgery training solutions. In this case, body tracking (tracking objects in the training environment or patient organs) was the category that comprised most of the reviewed articles. For this purpose, most of the studies (35/51, 69%) used artificial vision techniques to identify and track body structures. Viglialoro et al used AR as a training aid in simulator-based laparoscopic cholecystectomy training, mainly during the isolation of the cystic duct and artery (Calot triangle); AR technology allowed trainees to visualize these hidden structures during the training activity. Pessaux et al proposed an AR-based assistance system to superimpose information from preoperative images onto the patient’s skin using a beamer, generating body transparency with visualization of deep structures. Finally, only a few studies (2/48, 4%) tracked artificial markers or objects (instruments) within the surgical workspace inside the patient’s body; Preetha et al tracked the surgical needle during a laparoscopic suturing task.

RQ 3: What Type of Simulator and Setup Is Used for AR-Based Laparoscopic Surgery Training?

Regarding the simulator and setup used for laparoscopic training, 2 main characteristics were considered. The first was the origin of the training system, which may be a widely used commercial simulator (eg, ProMIS) or a prototype developed specifically for the research being carried out or pending release onto the market. The second was the physical characteristics of the simulator and the models used in the training activity (training model).
A training model could be an artificial model created solely for the purpose of training or an organic model (in vivo or ex vivo) obtained from organic tissue.

The use of commercial simulators was more widespread (20/51, 39%) than the use of prototypes (14/51, 27%). Commercial simulators mostly used artificial models (10/20, 50%), such as beads for eye-hand coordination tasks. Some studies (6/20, 30%) used organic models; among these, some authors compared skill acquisition after training with human cadavers versus AR-enhanced artificial models. Strickland et al used a lamb liver into which a piece of marshmallow was introduced to simulate a tumor to be removed, and Pagador et al used a porcine stomach for suturing tasks. Some studies (4/20, 20%) did not specify the model used by the commercial simulator or could not be included in any of the defined categories.

Regarding the simulator prototypes, most (8/12, 67%) used artificial models. Lahanas et al used markers inside a box trainer to superimpose images of different interactive objects, such as rings or beads that needed to be manipulated. Regarding the use of organic models, only Baumhauer et al used an ex vivo porcine kidney to test the image overlay capability of the AR-based training system presented. The rest of the papers did not specify the model used or did not fit into any of the categories.

Artificial models were the most used training models overall (26/51, 51%), especially in simulator prototypes (10/17, 59%), followed by ex vivo training models (8/51, 16%). The latter stand out in that 62% (5/8) of the studies that used ex vivo models used commercial simulators. The studies that used in vivo models did not refer to the simulator used, as these were real operations (either on humans or animals). Some studies did not specify whether they used a commercial simulator or developed a prototype (17/51, 33%).
Of these studies, 41% (7/17) used artificial training models, such as the system presented by Doughty et al, which did not use a specific laparoscopic simulator as it was intended to be used in any type of surgery. Meanwhile, 6 of the studies that did not specify whether they used a commercial simulator or a prototype used organic models, with 4 (67%) of them being in vivo models. Simone et al used MR smart glasses to enable tele-mentoring in real surgical procedures during the COVID-19 pandemic.

A few articles (4/51, 8%) specified neither the type of simulator nor the training model used. Gupta et al discussed the need for a development framework for this type of simulator in laparoscopic surgery training and the aspects that should be considered; however, they did not make use of any training system, as theirs was a theoretical study. Another study tested the impact of AR elements on inattentional blindness during a laparoscopic operation using a prerecorded video. The study by Koehl et al focused on the generation of modular models that could be used in other applications as a 3D visual model or as a force feedback generation system. Pagador et al designed and evaluated a laparoscopic tool tracking system for further assessment purposes.

RQ 4: What Type of Evaluation Is Used to Assess Skill Acquisition in AR-Based Laparoscopic Surgery Training?

In total, 2 main ways of assessing laparoscopic skill acquisition in AR-based laparoscopic surgery training solutions were reported. One was objective evaluation, based on metrics usually calculated by the training system. The other was subjective evaluation, carried out by experts or by means of interviews or self-assessment questionnaires. Some studies used both, while others used neither. Objective evaluation was the most used method for the assessment of skill acquisition in laparoscopic surgery.
In terms of the metrics used, most of the studies (25/51, 49%) evaluated the performance of surgeons to assess whether the system enhanced laparoscopic skill acquisition. A variety of metrics were used, such as execution time, path length, or motion smoothness. Time, whether related to task completion or training duration, was a commonly used metric that treated the time spent on each task as a performance indicator. Other studies (2/31, 6%), instead of measuring the time needed to complete the task, measured the time that the laparoscopic instruments spent in a specific area, for instance, during a suturing task. Path length was another prevalent objective metric (8/31, 26%); it records the path followed by the laparoscopic instruments during the performance of the training task. A further 6% (2/31) of the studies, apart from measuring path length, also computed the economy of movement of the surgical instruments. Force metrics were also commonly used (4/31, 13%); Botden et al evaluated the performance of a suturing task by assessing the strength of the knot. The Objective Structured Assessment of Technical Skills (OSATS) and the Global Operative Assessment of Laparoscopic Skills (GOALS) formularies are 2 recognized standards for assessing laparoscopic skills that were also used in some of the studies, with the OSATS being used in 14% (5/36) of the studies and the GOALS being used only twice (2/36, 6%).

Subjective evaluation was mostly used for evaluating the usability of the AR tools (6/51, 12%). For example, Arts et al made use of a questionnaire to assess the validity of the ProMIS as an AR laparoscopic surgery simulator. The combination of both types of evaluation (objective and subjective) was used in 22% (11/51) of the studies. Cau et al combined an expert’s subjective assessment of a suturing task with metrics obtained by the system itself.
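To make the nature of such motion metrics concrete, here is a minimal, self-contained sketch of our own (not taken from any of the reviewed systems) that computes execution time, path length, and economy of movement from a sampled instrument-tip trajectory:

```python
import math

# Minimal sketch: objective motion metrics from a 3D instrument-tip
# trajectory, given as (x, y, z) samples at a fixed sampling rate.

def path_length(traj):
    """Total distance traveled by the instrument tip."""
    return sum(math.dist(a, b) for a, b in zip(traj, traj[1:]))

def execution_time(traj, hz):
    """Task duration implied by the number of samples at `hz` Hz."""
    return (len(traj) - 1) / hz

def economy_of_movement(traj, start, goal):
    """Ratio of straight-line distance to actual path (1.0 = ideal)."""
    ideal = math.dist(start, goal)
    actual = path_length(traj)
    return ideal / actual if actual else 1.0

traj = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 1, 0)]
print(path_length(traj))                             # 3.0
print(execution_time(traj, hz=30))                   # 0.1
print(economy_of_movement(traj, traj[0], traj[-1]))  # ≈ 0.745
```

The same position stream recorded by an electromagnetic tracker or fiducial markers can feed all three metrics, which is why instrument tracking dominates the sensorization reported above.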
RQ 5: What Types of Surgical Tasks or Procedures Are Used in AR-Based Laparoscopic Surgery Training?

Regarding the laparoscopic training activities (tasks or procedures), in most of the studies (26/51, 51%), trainees performed basic tasks commonly used in the early stages of laparoscopic surgery training programs. These tasks were divided into 5 main categories: navigation tasks, focused on the skill needed to explore the surgical workspace using the camera and laparoscopic instruments; object manipulation tasks, aimed at training the user to move, rotate, or manipulate objects or organs using the laparoscopic instruments; dissection tasks, focused on the ability to separate tissue and anatomical structures without causing any damage to them; cutting tasks, focused on training users to make precise cuts in organs and tissues; and suturing tasks, focused on training for suturing tissues, including knotting skills. Finally, there were studies that used full surgical procedures, such as partial nephrectomy on ex vivo models or sigmoid colectomy on cadavers.

Similarly, considering its main clinical features, such as the surgical procedure performed or the training model used, each study was assigned a surgical specialty category. For studies in which a training surgical intervention was performed, the surgical specialty was the one to which that procedure corresponded. For studies that included only simple tasks, the surgical specialty was selected based on the training model used. For cases in which the surgical skills did not focus on a particular surgical specialty, the studies were classified as “All” specialties.

A large proportion of the studies analyzed (20/51, 39%) used surgical procedures rather than simple tasks to train and assess surgical skills. General surgery was the surgical specialty of most of these studies (14/20, 70%).
In the study by Viglialoro et al, an AR solution assisted in the identification of the artery and cystic duct in a training simulator for cholecystectomy. Other studies (4/20, 20%) fell under the specialty of urology, such as the study by Teber et al, who presented a system for formative assistance during laparoscopic partial nephrectomy. Other studies (3/20, 15%) performed procedures that were not classified into any specific surgical specialty but could be used in any specialty, such as the study by Doughty et al, who developed a context-aware system capable of assisting the surgeon depending on the training task being performed.

Regarding basic laparoscopic surgery training tasks, the most common task was suturing (11/25, 44%). Botden et al, for example, used suturing to analyze the evolution of trainees’ technical skills, assessing how fast trainees performed tasks and the strength of the knotting. Object manipulation tasks were also common in the studies analyzed (9/25, 36%); Lahanas et al presented 3 different object manipulation tasks to assess trainees’ laparoscopic skills using their AR-based simulator. The use of navigation tasks was less common in the training settings (5/25, 20%). Fusaglia et al presented an overlay system to show hidden anatomical structures during the performance of laparoscopic navigation tasks; their first task was to press a set of buttons in a specific order using the laparoscopic instruments, the second was to transfer an object using the laparoscopic tools, and the last was to cut a virtual object (using AR technology). Dissection tasks were less common than the rest (3/25, 12%); one study proposed the use of 2 balloons (one inside the other) that the trainee had to separate while keeping the inner balloon inflated (ie, without damaging it). Only 16% (4/25) of the studies focused on cutting tasks.
In the study by Brinkman et al, a full week of laparoscopic surgery training was proposed, with training sessions once a day in which some laparoscopic cutting tasks were included. Finally, 12% (6/51) of the articles did not specify the task performed.
Devices such as HoloLens fall into this category as they allow information to be rendered on the screen while superimposing this information on our visual perception of the real world. ARV refers to the use of conventional video devices such as monitors, tablets, or other types of screens in which reality information is displayed (such as a real-time video) and overlaid with expanded information, creating an AR system. OST systems usually include voice commands, eye tracking, and interaction with real-world objects, whereas ARV systems reduce interaction and rely mainly on the display of additional information on the screen. Some articles (2/51, 4%) did not specify the device used, and so they were classified as unspecified devices. The feedback provided by the system is closely related to the devices being used as it conditions the user-system interaction. We divided the articles by the information given to the user, differentiating among (1) execution-related information, which refers to any type of information related to the actions being performed by the trainee, such as the force exerted or time expended; (2) mentor guidance , which refers to those systems in which the AR technology allows the mentor to enhance the way in which they provide guidance to the trainee, for instance, using virtual pointers or annotations; (3) educational information , which could be general information about the training activity being performed, such as images of the organs and structures involved in the surgical activity or other general information; and, finally, (4) patient-oriented information, which corresponds to personalized information about the patient to be operated on. In this case, information was divided into medical imaging (as preoperative imaging), 3D segmentation created from preoperative imaging, or other personalized information. Other information comprised information that did not fit into any other classification. 
Unspecified information comprised articles that did not indicate the type of information provided to the user of the AR device. Most of the studies analyzed (43/51, 84%) used ARV devices, and only 12% (6/51) used OST devices. A total of 4% (2/51) of the articles did not specify which kind of device was used. Regarding the ARV studies, although the largest single group (11/43, 26%) did not specify what information they provided to the user, among those that did, execution-related information was highly common (9/43, 21%), such as in the study by Zahiri et al, which showed the time remaining for the trainee to complete the current task in the form of either a numerical timer or a progress bar. Regarding mentor guidance (6/43, 14%), Andersen et al developed a tool for tele-mentoring, allowing the mentor to use different-colored dots for annotation. Patient-oriented information was also common (8/43, 19%). Pessaux et al conducted a 3D segmentation from preoperative images to generate a 3D model of the patient’s body, which was later superimposed over the real patient’s body during a duodenopancreatectomy. Regarding medical imaging, Koehl et al developed a system that allowed trainees to visualize and manipulate preoperative images. Arts et al provided trainees with educational videos with instructions for the ongoing task. Regarding other information, Shao et al used prerecorded ultrasound videos that were played depending on the position of the laparoscopic tools so that their system could be used for training purposes in ultrasound-guided laparoscopic interventions. Of the 6 studies using OST devices, 3 (50%) used HoloLens version 1, and 3 (50%) used HoloLens version 2. In addition, these articles were published in 2020, so OST can be considered a recently applied technology in laparoscopic training.
Although far fewer studies that used OST devices were found, they mainly provided patient-oriented information (4/6, 67%). For instance, Zorzal et al gave users the possibility to obtain access to the preoperative magnetic resonance image of the patient. Others (2/6, 33%) provided educational information, such as the study by Simone et al , which used annotations on the patient’s body and a mentor’s voice instructions. To assist in the training activity, it is essential to know in real time the location of the different elements involved in the training environment, such as the training scenario, surgical instruments, patient, and posture of the surgeon. By monitoring these elements, it is possible to adapt the training process to the trainee’s educational needs. In addition, as this task is manipulative, it is relevant to consider the haptic stimuli. Therefore, in this section, we analyze which aspects related to the use of sensors (sensorization) were used in scientific literature. In this regard, we classified systems according to 2 aspects: the element being measured or sensorized and the technology used . As we have pointed out, there are different elements in the training environment that can be tracked to know their location, motion, or behavior. We distinguished user tracking (ie, monitoring the surgeon and elements related to the surgeon). In this case, we organized the tracked elements into 3 categories: body , which were systems that recorded the surgeon’s body motion for kinematic analysis; eye , to refer to the analysis of the surgeon’s gaze and where they focus their attention; and instrument , which referred to the tracking of the surgical instruments. In addition to the user, the target can be tracked ( target tracking ), that is, the patient or the simulated model of the patient. This type of analysis was usually conducted using computer vision techniques. 
In this case, when we talk about body, we refer to the patient’s body and organs, whereas instrument refers to other objects or markers that can be placed inside the patient and can be tracked or recognized, such as the needle used in suturing tasks. Not only is the object being tracked important but also how it is tracked, that is, the technology and techniques used to achieve this tracking. For this reason, we divided the technology into 3 categories: sensor based, optic, and force. Sensor-based technology refers to those systems that used a set of sensors specifically located to track and measure certain interactions in those locations. An example could be the use of electromagnetic trackers for recording laparoscopic tool motion. Optic technologies refer to the identification and tracking of objects using artificial vision techniques. Felinska et al used iSurgeon, an AR tool that allowed instructors to project their hand gestures in real time onto the laparoscopic screen, enhancing mentoring guidance during laparoscopic training sessions. Force refers to those systems that used devices capable of measuring the interaction force of the laparoscopic instrument. Most of the studies (46/51, 90%) used some type of sensorization of the training environment in which the surgical simulation was performed. In most AR systems, the user saw a real world augmented only with visual information and had no means to interact with virtual objects. If we want to manipulate virtual objects, we need another kind of information—haptic. Therefore, together with the display of visual information recorded by tracking devices, the availability of augmented haptic information seems relevant in laparoscopic surgery training, where manipulative tasks are fundamental. In this sense, 8% (4/51) of the studies included some type of haptic feedback in the laparoscopic surgery training environment.
For example, Ivanova et al used a virtual 3D model of an organ and a sensorized physical simulator, providing force feedback on the hardness of the tissue being touched and the force exerted. Regarding the element being measured or sensorized (tracked), the most common was tracking of the surgeon (user tracking). In this case, the most relevant elements tracked were the instruments used by the surgeon (32/51, 63%). Pagador et al used electromagnetic trackers to record the instrument’s position during the training activity, whereas Lahanas et al used fiducial markers attached to the tip of the instrument for tracking instrument position and rotation. There were only a few studies (4/51, 8%) that made use of surgeon posture tracking. For example, Cizmic et al tracked hand position to project the hand movements of the trainer onto the laparoscopic image to enhance communication between trainer and trainee. The use of eye tracking was present in only 4% (2/51) of the studies analyzed. This technique was used by some authors to interact with sets of images and other elements of a user interface by means of gaze or to assess performance and quality of communication . Target tracking was the next most used sensorization in laparoscopic surgery training solutions. In this case, body tracking (tracking objects in the training environment or patient organs) was the category that comprised most of the reviewed articles. For this purpose, most of the studies (35/51, 69%) used artificial vision techniques to identify and track body structures. Viglialoro et al used AR as a training aid in simulator-based laparoscopic cholecystectomy training, mainly during the isolation of the cystic duct and artery (Calot triangle). AR technology allowed trainees to visualize these hidden structures during the training activity . 
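Systems like the one described above ultimately reduce to compositing a rendered virtual structure into the endoscopic frame. As a minimal illustration of that compositing step only — not any reviewed study's implementation — the following numpy sketch alpha-blends a boolean structure mask into an RGB frame; in a real system, the mask would come from registering a 3D model to the camera view, and `blend_overlay` and its parameters are hypothetical names:

```python
import numpy as np

def blend_overlay(frame, overlay_mask, color=(0, 255, 0), alpha=0.4):
    """Alpha-blend a rendered structure (given as a boolean mask) into an
    RGB laparoscopic frame, leaving all other pixels untouched.

    frame: HxWx3 uint8 array; overlay_mask: HxW boolean array marking the
    pixels covered by the virtual structure (eg, a hidden duct model).
    """
    out = frame.astype(float)                 # work in float to avoid clipping
    tint = np.array(color, dtype=float)
    out[overlay_mask] = (1 - alpha) * out[overlay_mask] + alpha * tint
    return out.astype(np.uint8)
```

The blend weight `alpha` controls how strongly the virtual structure dominates the tissue beneath it; keeping it below 1 preserves the real-image detail that trainees still need to see.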
Pessaux et al proposed an AR-based assistance system to superimpose information from preoperative images onto the patient’s skin using a projector, generating body transparency with visualization of deep structures. Finally, only a few studies (2/48, 4%) tracked information about artificial markers or objects (instruments) within the surgical workspace inside the patient’s body. Preetha et al tracked the surgical needle during a laparoscopic suturing task. Regarding the simulator and setup used for laparoscopic training, 2 main characteristics were considered. The first was the origin of the training system, which may be a widely used commercial simulator (eg, ProMIS) or a prototype developed specifically for the research being carried out or pending release to the market. The second was the physical characteristics of the simulator and the models used in the training activity (training model). It could be an artificial model created solely for the purpose of training or an organic model (in vivo or ex vivo) obtained from organic tissue. The use of commercial simulators was more widespread (20/51, 39%) than the use of prototypes (14/51, 27%). Commercial simulators mostly used artificial models (10/20, 50%), such as beads for the eye-hand coordination tasks. There were also some studies (6/20, 30%) that used organic models. In this case, some authors compared skill acquisition after training with human cadavers versus using AR-enhanced artificial models. Strickland et al used a lamb liver to which a piece of marshmallow was introduced to simulate a tumor to be removed, and Pagador et al used a porcine stomach for suturing tasks. There were some studies (4/20, 20%) that did not specify the model used by the commercial simulator or that could not be included in any of the defined categories. Regarding the simulator prototypes, most (8/12, 67%) used artificial models.
Lahanas et al used markers inside a box trainer to superimpose images of different interactive objects, such as rings or beads that needed to be manipulated. Regarding the use of organic models, only Baumhauer et al used an ex vivo porcine kidney to test the image overlay capability of the AR-based training system presented. The rest of the papers did not specify the model used or did not fit into any of the categories. Artificial models were the most used training models (26/51, 51%), especially in simulator prototypes (10/17, 59%), followed by ex vivo training models (8/51, 16%). The latter stand out as 62% (5/8) of the studies that used ex vivo models used commercial simulators. The studies that used in vivo models did not refer to the simulator used as these were real operations (either on humans or animals). There were studies that did not specify whether they used a commercial simulator or developed a prototype (17/51, 33%). Of these studies, 41% (7/17) used artificial training models, such as the system presented by Doughty et al , which did not use a specific laparoscopic simulator as it was intended to be used in any type of surgery. Meanwhile, 6 of the studies that did not specify whether they used a commercial simulator or a prototype used organic models, with 4 (67%) of them being in vivo models. Simone et al used MR smart glasses to enable tele-mentoring in real surgical procedures during the COVID-19 pandemic. There were a few articles (4/51, 8%) that did not specify either the type of simulator or training model used. Gupta et al mentioned the need for a development framework for this type of simulators in laparoscopic surgery training and which aspects should be considered. However, they did not make use of any training system as it was a theoretical study . Another study tested the impact of AR elements on inattentional blindness during a laparoscopic operation using a prerecorded video . 
The study by Koehl et al focused on the generation of modular models that could be used for other applications as a 3D visual model or as a force feedback generation system. Pagador et al designed and evaluated a laparoscopic tool tracking system for further assessment purposes. In total, 2 main ways of addressing the assessment of laparoscopic skill acquisition in AR-based laparoscopic surgery training solutions were reported. One way of evaluation was through objective evaluation metrics usually calculated by the training system. Another way was subjective evaluation carried out by experts or by means of interviews or self-assessment questionnaires . Some studies used both ways , while others used none . Objective evaluation was the most used method for the assessment of skill acquisition in laparoscopic surgery. In terms of the metric used, most of the studies (25/51, 49%) evaluated the performance of surgeons to assess whether the system enhanced laparoscopic skill acquisition. A variety of metrics were used, such as execution time, path length, or motion smoothness . Time, whether related to task completion or training duration, was a commonly used metric that assessed the time spent on each task as a performance indicator. There were other studies (2/31, 6%) that, instead of measuring the time needed to complete the task, measured the time that the laparoscopic instruments spent in a specific area, for instance, during a suturing task . Path length was another prevalent objective metric (8/31, 26%). This metric records the path followed by the laparoscopic instruments during the performance of the training task. There were another 6% (2/31) of the studies that, apart from measuring path length, also computed the economy of movements of the surgical instruments . Force metrics were also commonly used (4/31, 13%). Botden et al evaluated the performance of a suturing task by assessing the strength of the knot. 
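Several of the objective metrics named above — path length, economy of movement, and force — have simple geometric definitions, although the exact formulas vary across the reviewed studies. The sketch below summarizes one training trial under common definitions; all names and the exact formulas are illustrative, not drawn from any specific system:

```python
import math

def performance_metrics(tip_path, ideal_length, grip_forces, force_limit):
    """Toy summary of common objective laparoscopic training metrics.

    tip_path: sampled 3D positions (x, y, z) of the instrument tip
    ideal_length: shortest possible tip path for the task, same units
    grip_forces: force samples (N) from a sensorized grasper
    force_limit: threshold above which tissue handling is deemed unsafe
    """
    # Path length: sum of Euclidean distances between consecutive samples.
    path_length = sum(math.dist(a, b) for a, b in zip(tip_path, tip_path[1:]))
    # Economy of movement: how close the actual path is to the ideal one.
    economy = ideal_length / path_length if path_length else 1.0
    return {
        "path_length": path_length,
        "economy_of_movement": economy,  # 1.0 = perfectly direct
        "peak_force": max(grip_forces),
        "unsafe_force_ratio": sum(f > force_limit for f in grip_forces)
        / len(grip_forces),
    }
```

A real assessment pipeline would add time-based and smoothness metrics and normalize them across trials; this sketch only shows how little computation the core definitions require once tracking data are available.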
The Objective Structured Assessment of Technical Skills (OSATS) and the Global Operative Assessment of Laparoscopic Skills (GOALS) rating instruments are 2 recognized standards for assessing laparoscopic skills that were also used in some of the studies, with the OSATS being used in 14% (5/36) of the studies and the GOALS being used only twice (2/36, 6%). Subjective evaluation was mostly used for evaluating the usability of the AR tools (6/51, 12%). For example, Arts et al made use of a questionnaire to assess the validity of the ProMIS as an AR laparoscopic surgery simulator. The combination of both types of evaluation (objective and subjective) was used in 22% (11/51) of the studies. Cau et al combined an expert’s subjective assessment of a suturing task with metrics obtained by the system itself. Regarding the laparoscopic training activities (tasks or procedures), in most of the studies (26/51, 51%), trainees performed basic tasks commonly used in the early stages of laparoscopic surgery training programs. These tasks were divided into 5 main categories: navigation tasks focused on the skill needed to explore the surgical workspace using the camera and laparoscopic instruments; object manipulation tasks aimed to train the user to move, rotate, or manipulate objects or organs using the laparoscopic instruments; dissection tasks focused on the ability to separate tissue and anatomical structures without causing any damage to them; cutting tasks focused on training users to make precise cuts in organs and tissues; and suturing tasks focused on training for suturing tissues, including knotting skills. Finally, there were studies that used surgical procedures such as partial nephrectomy on ex vivo models or sigmoid colectomy on cadavers. Similarly, each study was assigned a surgical specialty category based on its main clinical features, such as the surgical procedure performed or the training model used.
For those studies in which a training surgical intervention was performed, the surgical specialty was the one to which that procedure corresponded. There were also some studies that included simple tasks. In this case, the surgical specialty was selected based on the training model used. For those cases in which surgical skills did not focus on a particular surgical specialty, the studies were classified as “All” specialties. A large proportion of the studies analyzed (20/51, 39%) used surgical procedures rather than simple tasks to train and assess surgical skills. General surgery was the surgical specialty of most of the studies (14/20, 70%). Viglialoro et al, for instance, presented an AR solution that assisted in the identification of the artery and cystic duct in a training simulator for cholecystectomy. Other studies (4/20, 20%) were included under the specialty of urology, such as the study by Teber et al, who presented a system for formative assistance during laparoscopic partial nephrectomy. Other studies (3/20, 15%) performed procedures that were not classified into any specific surgical specialty but could be used in any specialty, such as the study by Doughty et al, who developed a context-aware system capable of assisting the surgeon depending on the training task being performed. Regarding basic laparoscopic surgery training tasks, the most common task used was suturing (11/25, 44%). Botden et al, for example, used suturing to analyze the evolution of the trainees’ technical skills, assessing how fast trainees performed tasks and the strength of the knotting. Object manipulation tasks were also common in the studies analyzed (9/25, 36%). Lahanas et al presented 3 different object manipulation tasks to assess trainees’ laparoscopic skills using their AR-based simulator. The use of navigation tasks was less common in the training settings (5/25, 20%).
Fusaglia et al presented an overlay system to show hidden anatomical structures during the performance of laparoscopic navigation tasks. The first task was to press a set of buttons in a specific order using the laparoscopic instruments, the second task was to transfer an object using the laparoscopic tools, and the last task was to cut a virtual object (using AR technology). Another type of task was dissection, although these tasks were less common than the rest (3/25, 12%). One study proposed the use of 2 balloons (one inside the other) that the trainee had to separate while keeping the inner balloon inflated (ie, without being damaged). Only 16% (4/25) of the studies focused on cutting tasks. In the study by Brinkman et al, a full week of laparoscopic surgery training was proposed, with training sessions once a day that included some laparoscopic cutting tasks. Finally, there were 12% (6/51) of the articles that did not specify the task performed.

Principal Findings

Overview

AR is an emerging technology that is being applied to various fields, including health care. In this sense, applications have been developed mainly oriented toward surgical planning, medical and surgical training, and surgical assistance. In this study, a review of scientific literature oriented toward solutions for assistance in laparoscopic surgery training was carried out. We analyzed aspects that we considered critical to innovate and advance in this field, such as the devices used to provide AR to the user, the training environments, the types of training tasks or procedures, and the evaluation metrics used to analyze the evolution of students during the training activities and provide them with educational feedback. For this purpose, we analyzed all the RQs raised in this work with the aim of providing some observations that may help researchers guide their new proposals and innovative solutions to achieve an improvement in the field of laparoscopic surgery training.
The evolution of the number of publications included in this scoping review was heterogeneous and may be associated with the evolution of AR technology and the introduction of new applications and devices. Although there were some studies in the first years of this century associated with the appearance of the first AR applications in games and smartphones, it was in 2009 that a relevant increase in interest in the use of AR was observed, coinciding with the success of applications such as ARToolKit, an open-source library for the creation of AR applications based on the identification of visual markers. This relevant interest continued until 2015, which could be related to the launch of Google Glass. This device was the first proposal that took AR out of the screen and into the real world using a wearable device. However, it was not until the launch of the second version of Microsoft’s HoloLens (2019) that the potential of this type of technological proposal, increasingly distanced from screen-bound solutions, became apparent. This fact may be related to the recent increase in interest in the use of this type of AR-linked technologies in laparoscopic surgery training. In the following sections, we will analyze the results to identify relevant insights that address each of the RQs.

Devices and Feedback (RQ 1)

While video devices such as monitors, laptops, smartphones, and tablets have traditionally been used in the early stages of AR technologies to enhance laparoscopic surgery training, the use of OST devices is a relatively new development due to the recent emergence of devices that facilitate these functions (the first device appeared in 2020). Therefore, only 12% (6/51) of studies were found that explored the application of OST technology in this context. OST devices offer greater versatility than traditional video devices (monitors, smartphones, and tablets) for laparoscopic surgery training.
As several studies (6/51, 12%) demonstrated, OST devices provide novel interaction methods and functionalities such as eye or gaze tracking, voice commands, and gestural interactions to manipulate virtual objects or add annotations. For instance, the head gaze was used to point at the laparoscopic virtual screen to enhance communication between trainer and trainee . Felinska et al used eye-tracking technology to create a heat map and analyze differences between the visual focus areas of trainees and instructors. In another study using OST technology, the Vuforia library was used to recognize artificial visual markers in the training environment to insert virtual elements to enhance the experience of laparoscopic surgery training . However, this solution was limited to simulated environments, making it difficult to use in real clinical settings as it relied on 5 cameras for marker recognition. Sánchez-Margallo et al and Simone et al focused on the creation of 3D models from real preoperative images to implement AR-based training applications. In addition, this functionality was extended by allowing tutors to add annotations and information to the models . One of the advantages of OST devices is that they can be used in any training environment used in laparoscopic surgery. The trainee always visualizes the real training environment, and it is the augmented information (holograms) that is superimposed onto the real image through the glasses. For instance, in the study by Sánchez-Margallo et al , it was not necessary to connect the OST device to the laparoscopic simulator as it did not require the laparoscopic video feed. However, Zorzal et al accessed the video source from the laparoscopic tower and explored the ergonomic improvements that OST devices could provide by displaying the endoscopic video in any place of the operating room and even following the surgeon’s head. 
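As a rough illustration of how paired eye-tracking data of the kind Felinska et al collected might be summarized, the sketch below computes a simple gaze-convergence score between trainee and trainer fixations on the laparoscopic display. This is a hypothetical metric for illustration only, not one reported in the reviewed studies:

```python
import math

def gaze_convergence(trainee_gaze, trainer_gaze, radius_px):
    """Fraction of time-aligned gaze samples in which the trainee's and
    trainer's fixation points fall within radius_px of each other.

    trainee_gaze, trainer_gaze: equal-length lists of (x, y) screen
    coordinates sampled at the same timestamps.
    """
    pairs = list(zip(trainee_gaze, trainer_gaze))
    hits = sum(math.dist(a, b) <= radius_px for a, b in pairs)
    return hits / len(pairs)
```

A score near 1.0 would suggest that the trainee's visual attention closely follows the instructor's, which is the kind of objective feedback on attention alignment these systems aim to provide.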
Regarding ARV devices, the analyzed studies highlighted their potential to enhance the experience during laparoscopic surgery training. In the study by Preetha et al , a convolutional neural network was used to predict depth from laparoscopic imaging and recognize the manipulation of laparoscopic instruments for use in surgical skill assessment. Another study also used depth prediction to generate and display optimal trajectories for surgical instruments . Solutions for depth perception support were also presented to improve the training process of novice trainees in laparoscopic surgery . The presence of assistive information in the surgeon’s field of view is not a major disadvantage as it has been shown that the use of information-enhanced holograms does not have a negative impact on the surgical working field . In another case, ARV technology was used to generate 3D models from preoperative imaging and superimpose them onto laparoscopic images as a support during the performance of the surgical task or formative procedure . Another important aspect to consider during laparoscopic surgery training is to analyze force feedback during the performance of different laparoscopic tasks and procedures. This factor was explored in an ARV training environment, showing that feedback of force exerted improves the acquisition of tissue-handling skills . Other studies (6/51, 12%) investigated the overlay of assistive content in the performance of training tasks and procedures by means of ARV applications and concluded that it helps the trainees reduce path length and deviation along the depth axis and improve their orientation skills . All in all, we can see that both OST and ARV devices show significant potential as training aids in laparoscopic surgery. Although OST devices are a more recent and less explored development, they offer enhanced versatility compared to traditional video devices used in early AR technologies. 
The reviewed studies highlight the various applications of OST and ARV, including novel interaction methods, access to preoperative imaging, ergonomic improvements, depth analysis, and instrument motion analysis. This suggests that both OST and ARV have the potential to enhance laparoscopic surgery training, with OST devices offering greater versatility and opportunities for innovation in this field.

Sensorization (RQ 2)

The studies analyzed in this review mainly presented 2 types of sensing technologies: haptics and tracking. The use of haptic technology in laparoscopic surgery training has been a topic of considerable interest, as evidenced by the number of studies found on this topic (9/51, 18% of the studies in total). These studies explored the potential benefits of haptic feedback in enhancing the acquisition of laparoscopic skills and improving surgical performance. The use of haptic feedback in laparoscopic surgery simulation was examined in several studies (4/51, 8%), demonstrating its efficacy in improving surgeons’ dexterity. The incorporation of haptic feedback in AR-based laparoscopic surgery training applications has been shown to lead to improved performance among novice surgeons and improve surgeons’ skills in laparoscopic suturing. The efficacy of haptic feedback in laparoscopic skill acquisition was also studied by other authors (4/51, 8%), who showed that it significantly enhanced surgeons’ precision and instrument manipulation skills. These studies highlighted the potential of haptic feedback as a valuable adjunct to laparoscopic surgery training. It seems evident that haptic technology promises further advances in laparoscopic surgery training and proficiency. Regarding tracking technologies, there were 2 main approaches: a sensor-based approach and an optical technology–based approach. The sensor-based approach uses sensors placed at specific locations to track the object in the training environment.
Meanwhile, the optical technology–based approach uses image analysis to perform tracking of the objects of interest. The studies included in this review used artificial vision techniques and optical sensorization, mainly by using the laparoscopic camera feed. For instance, Loukas et al used 2 different algorithms to estimate the tooltip position, one of them using an adaptive model of a color strip attached to the tool and the other one tracking directly the shaft of the tool. Optical technology plays a crucial role in accurate tracking in clinical settings. Zahiri et al used this technology to evaluate the performance of an object transfer task in basic training environments by tracking colored rings. Another study focused on tracking anatomical structures such as the Calot triangle and user interaction during the training procedure . Vision-matching techniques enhance trainees’ visualization of hidden anatomical structures by superimposing 3D models onto the laparoscopic camera source, improving trainees’ understanding of spatial relationships . Vision-matching techniques were also used for pairing of real and augmented vision, some of them automatically and others semiautomatically. Thus, in the study by Hughes-Hallet et al , a semiautomated real-time overlay of preoperative models onto the anatomical field structures was performed, improving accuracy and efficiency in the AR-guided training process, whereas Pessaux et al and Condino et al used manual assistance to align the virtual objects with the patient’s anatomy. Overall, the integration of artificial vision techniques and optical technology plays a crucial role in the identification and tracking of the anatomical structures of the patient. These solutions improve the understanding of the steps to be carried out during training activities, as well as the identification of manipulated objects by trainees in laparoscopic surgery training. 
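The color-strip approach used by Loukas et al can be illustrated, in greatly simplified form, by thresholding the camera frame and taking the centroid of the in-range pixels. Real systems use adaptive color models and also track the instrument shaft; the function name and the threshold values here are illustrative only:

```python
import numpy as np

def track_marker_centroid(frame, lower, upper):
    """Locate a colored marker strip on an instrument shaft by thresholding
    an RGB frame and returning the centroid of the in-range pixels.

    Returns (x, y) in image coordinates, or None if the marker is occluded.
    """
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    # A pixel belongs to the marker if all 3 channels fall inside the range.
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

Running this per frame yields a 2D tip trajectory from which the motion metrics discussed under evaluation (path length, economy of movement) could then be derived.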
The eye-tracking technology present in many of the OST devices has shown great potential in various fields, including human-computer interaction and cognitive sciences. However, in the context of laparoscopic surgery training, the use of this technology remains relatively unexplored. In this scoping review, only 4% (2/51) of the studies investigated the incorporation of eye-tracking technology for assistance in laparoscopic surgery training. In one of the studies, gaze tracking (using head tracking) was performed to focus the laparoscopic camera on the area where the surgeon’s gaze was centered . This study made it possible to optimize the way of visualizing the information on the surgical working environment and the surgeon’s ergonomics. Another study aimed to use eye-tracking technology to detect convergence between the trainee’s and the trainer’s gaze, as well as between the trainee and the target area . By monitoring eye movements and fixation points, the system could provide objective feedback on the trainee’s visual attention and alignment with the desired gaze targets. This approach has the potential to enhance trainee-trainer communication and improve the trainee’s spatial awareness within the surgical field. Considering these aspects, there is a clear need for further investigation on the use of this technology to improve the training process in laparoscopic surgery. Eye tracking offers unique advantages, such as providing the trainee’s visual focus, gaze patterns, and cognitive workload in real time. The integration of eye-tracking technology into laparoscopic simulators and training platforms can enhance the visual perception, spatial orientation, and overall surgical performance of the trainee. In addition, tracking of laparoscopic instruments and deformations in bodies or organs was also performed using artificial visual markers. In some cases, these were colored markers that were placed around the distal area of the instrument shaft . 
Reflective markers were also used, although in this case, it was necessary to use infrared light and camera filters for identification . The use of these techniques helps improve the identification and tracking of the laparoscopic instruments, which is essential to evaluate the surgical performance of trainees and assist them during the training activities. In addition, various studies (3/33, 9%) explored the calculation and modeling of deformations in bodies or organs to be used in advanced AR applications for laparoscopic surgery training. In one study, visual markers were inserted directly on the liver so that they could be identified by the laparoscopic image and allow for the localization and superimposition of preoperative 3D models of the liver and other relevant information on the endoscopic image . Another study used retroreflective markers on the laparoscopic instruments and within the formative surgical field to facilitate the superimposition on the laparoscopic image of a preoperative 3D model . Another type of technology used for tracking surgical instruments and organs involved in the formative activity was the use of electromagnetic sensors. Viglialoro et al incorporated these sensors into the laparoscope and bile ducts or arterial tree of an artificial training model for real-time monitoring during the performance of simulator tasks. Electromagnetic sensors inserted into nitinol tubes representing the ducts and arterial trees helped infer possible deformations of the sensed tubular structures. Consequently, the virtual scene was updated in real time, augmenting the laparoscopic images with information on the real-time position of these anatomical structures. By using the aforementioned techniques for the calculation of organ deformations and the inclusion of 3D models, these studies significantly contributed to the integration of virtual models into the laparoscopic surgery environments, enhancing surgical training scenarios in real time. 
Training Simulator and Setup (RQ 3) Regarding the type of simulator used for AR-based laparoscopic surgery training, most of them were box trainers to which an AR assistance system was added. Of note is the ProMIS simulator (Haptica, Inc), which was one of the first commercial simulators for laparoscopic surgery training with AR functionalities. This simulator included basic training tasks such as dissection and suturing but also more complex procedures such as liver resection and sigmoid colectomy. This is the AR-based simulator with the largest number of scientific publications. Apart from this simulator, we highlight other commercial options to which AR-based laparoscopic surgery training assistance systems were added, such as the Fundamentals of Laparoscopic Surgery simulator (Limbs & Things Ltd) , the Szabo Pelvic Trainer simulator (ID Trust Medical) together with the iSurgeon system , the eoSim simulator (eoSurgical) , the SIMULAP-IC05 simulator (Jesús Usón Minimally Invasive Surgery Centre) , or the Body Torso laparoscopic trainer simulator (Pharmabotics Ltd) . Regarding the training model, the use of artificial models was most common because they are generally more accessible than organic models, although with less realism. However, more and more realistic artificial models have been achieved by simulating the anatomy and behavior of real tissues, as in the case of the study by Viglialoro et al , in which they presented a 3D replica of the patient’s own liver and gallbladder. Other options presented were the enhancement of the physical model using AR imaging . Evaluation (RQ 4) In the different studies analyzed, various types of metrics were used to evaluate the quality of surgical performance and the surgical skills of trainees. These metrics, according to their nature, were divided into objective and subjective metrics. Obtaining objective metrics is one of the main advantages of using AR techniques in laparoscopic surgery training. 
Some of the training simulators, as is the case of ProMIS, allow for recording the actions executed by the trainee to obtain metrics such as path length or motion smoothness of the laparoscopic instruments, which can help evaluate the trainee’s performance and learning curve without the need for constant supervision by an expert evaluator. Although there are some metrics that were recurrently used in the analyzed studies, so far there is no standardized set of objective metrics used to evaluate performance or technical skills in laparoscopic surgery. The most common metrics used were execution time, path length, economy of movement, and motion smoothness of laparoscopic instruments . Other metrics were also used for specific tasks, such as knotting strength, time spent in the correct suturing and knotting area , or force exerted by laparoscopic instruments on the target organ or tissue . Other types of surgical performance quality assessments based on questionnaires administered by external evaluators were also used, such as the OSATS, a highly standardized surgical assessment tool . Horeman et al and Leblanc et al also used similar questionnaires based on peer reviews to determine surgeons’ experience in laparoscopic surgery. Subjective evaluation was commonly used for assessing the validity of the proposed solution. None of the studies used a standard questionnaire for assessing system validity, but many similarities were found in terms of the aspects of the systems they evaluated. For example, usability, realism, and didactic value were included in several studies (7/18, 39%) . Questionnaires with Likert scales were the most commonly used format in the studies. Regarding assessment, the size and type of the samples included in the studies were also important. There was a wide variation in sample size, with samples ranging from 10 individuals to 270 participants . 
Some studies (2/25, 8%) included >100 participants , but most of them (23/25, 92%) presented sample sizes of <100 participants or <30 individuals . Finally, the included studies encompassed participants with different levels of laparoscopic surgery experience: novices, intermediates, and experts. Novices, referring to individuals with limited or no previous laparoscopic surgery experience, were frequently included in the studies. Intermediate-level participants, characterized by a moderate level of experience, were present in a few studies (4/25, 16%) . This may be because their inclusion makes it difficult to differentiate results between study groups as there is sometimes no clear difference between the intermediate group and the novice or expert groups. Experts, representing individuals with advanced laparoscopic skills, were also included in several studies (8/25, 32%). This stratification of participants allows for a comprehensive validation of the system’s effectiveness across different proficiency levels and helps detect the different needs that these groups may have. Tasks and Procedures (RQ 5) Regarding the tasks and procedures used as training activities in the studies analyzed, the most widespread ones were those focused on eye-hand coordination tasks, such as peg transfer, instrument movement, and instrument navigation tasks . The next most used type of task was the simulator suturing task . Although less frequent, some studies (17/51, 33%) also presented more complex models for procedures such as cholecystectomy . In this case, models of the liver and biliary anatomy were used to facilitate the training of novice surgeons in gallbladder extraction using laparoscopic techniques. 
These types of procedures, although still basic, are highly appropriate for learning basic tasks such as the localization of critical anatomical structures (cystic duct and cystic artery) and the performance of dissection and cutting tasks for dissection and cutting of the cystic duct, cystic artery, and gallbladder removal. In a closer-to-reality environment, AR-based assistive tools were used for procedures such as cholecystectomy in ex vivo (porcine) experimental models and sigmoid colectomy in human cadavers . Considering that the main objective of AR-based training assistance applications in laparoscopic surgery is to provide the students with support tools during their training activities, the type of task or procedure to be chosen has a great influence on the usefulness of these tools. It seems evident that, in considerably basic tasks, the training assistance systems would not provide significant value to the student, mainly due to the low level of complexity of the activity. However, it is in the more challenging tasks and procedures where training assistance systems may provide remarkable training value. For instance, a task that can present a high level of difficulty for novice laparoscopic surgeons is the performance of suturing. Support tools can enhance reality for the student with visual information to assist in the performance of the suture, such as the proper grip of the needle, the passage of the needle through the tissue, or the process of double and single knotting. In more complex procedures such as laparoscopic cholecystectomy or lymphadenectomy, these training assistance systems could support the students in locating complex or hidden anatomical structures, as well as remotely transmitting instructions from tutors who are reviewing the students’ training process, among other possibilities. 
Limitations

Although the PRISMA-ScR guidelines were followed in conducting this scoping review to reduce possible biases, some limitations of our work must be highlighted. Although most relevant articles found in other databases such as Springer or Google Scholar also appeared in the databases consulted, it is possible that some papers were missed because those databases were not searched directly. In addition, the specific search strategies and keywords used in the various databases may have led to the exclusion of articles that used different terminology for the same topics. Although the number of articles affected was small, restricting the selection to English-language publications and excluding articles that we could not access may have led to the omission of studies that could have provided relevant information. Regarding the devices used, OST devices are relatively new, so there is still much to explore in terms of what they can contribute to the field of laparoscopic surgery training, which we will address in future studies. It should also be noted that there may be interesting technical proposals for the support of laparoscopic surgery training that, because they have not yet been applied to such training, were not included in the studies analyzed in our scoping review. Furthermore, the lack of standardized metrics for evaluating the different support systems did not allow for a comprehensive comparison between systems or a comparative analysis of their results.

Conclusions

This scoping review sheds light on the dynamic landscape of AR technologies within laparoscopic surgery training. Although OST devices have only recently emerged and their advantages over traditional AR methods are still being explored, the results obtained are promising and open up new opportunities for the use of AR in this type of training activity.
In turn, haptic feedback emerges as a valuable asset for the acquisition of laparoscopic skills. Eye-tracking technology provides relevant information during learning, although its application is at an incipient stage that requires further exploration. The prevalence of commercial simulators and artificial models underscores the delicate balance between safety and realism. Regarding assessments, this review highlights the importance of extending the use of expert assessments, such as OSATS or GOALS, and underlines the need for standardized objective assessment. These assessments are based on the study of participants' behavior both in simple tasks, such as navigation and object manipulation, and in tasks with a higher level of difficulty, such as suturing. In any case, it seems clear that AR is most useful for the more complex tasks associated with advanced surgical procedures. These findings call for future research to explore the full potential of AR in enhancing laparoscopic skill acquisition.

Overview

AR is an emerging technology that is being applied to various fields, including health care. In this sense, applications have been developed mainly oriented toward surgical planning, medical and surgical training, and surgical assistance. In this study, a review of the scientific literature on solutions for assistance in laparoscopic surgery training was carried out. We analyzed aspects that we consider critical to innovate and advance in this field, such as the devices used to provide AR to the user, the training environments, the types of training tasks or procedures, and the evaluation metrics used to analyze the evolution of students during the training activities and provide them with educational feedback.
For this purpose, we analyzed all the RQs raised in this work with the aim of providing observations that may help researchers guide new proposals and innovative solutions toward improving laparoscopic surgery training. The evolution of the number of publications included in this scoping review was heterogeneous and may be associated with the evolution of AR technology and the introduction of new applications and devices. Although there were some studies in the first years of this century, associated with the appearance of the first AR applications in games and smartphones, it was in 2009 that a notable increase in interest in the use of AR was observed, coinciding with the success of tools such as ARToolKit, an open-source library for the creation of AR applications based on the identification of visual markers. This interest continued until 2015, which could be related to the launch of Google Glass. This device was the first proposal that took AR off the screen and into the real world using a wearable device. However, it was not until the launch of the second version of Microsoft's HoloLens (2019) that the potential of technological proposals increasingly decoupled from screens began to be glimpsed. This fact may be related to the recent increase in interest in the use of this type of AR-linked technology in laparoscopic surgery training. In the following sections, we analyze the results to identify relevant insights that address each of the RQs.

Devices and Feedback (RQ 1)

While video devices such as monitors, laptops, smartphones, and tablets were traditionally used in the early stages of AR technologies to enhance laparoscopic surgery training, the use of OST devices is a relatively new development due to the recent emergence of devices that facilitate these functions (the first device appeared in 2020).
Therefore, only 12% (6/51) of the studies explored the application of OST technology in this context. OST devices offer greater versatility than traditional video devices (monitors, smartphones, and tablets) for laparoscopic surgery training. As several studies (6/51, 12%) demonstrated, OST devices provide novel interaction methods and functionalities such as eye or gaze tracking, voice commands, and gestural interactions to manipulate virtual objects or add annotations. For instance, head gaze was used to point at the laparoscopic virtual screen to enhance communication between trainer and trainee. Felinska et al used eye-tracking technology to create a heat map and analyze differences between the visual focus areas of trainees and instructors. In another study using OST technology, the Vuforia library was used to recognize artificial visual markers in the training environment and insert virtual elements to enhance the laparoscopic surgery training experience. However, this solution was limited to simulated environments, making it difficult to use in real clinical settings, as it relied on 5 cameras for marker recognition. Sánchez-Margallo et al and Simone et al focused on the creation of 3D models from real preoperative images to implement AR-based training applications. In addition, this functionality was extended by allowing tutors to add annotations and information to the models. One of the advantages of OST devices is that they can be used in any laparoscopic surgery training environment. The trainee always visualizes the real training environment, and it is the augmented information (holograms) that is superimposed onto the real scene through the glasses. For instance, in the study by Sánchez-Margallo et al, it was not necessary to connect the OST device to the laparoscopic simulator, as it did not require the laparoscopic video feed.
However, Zorzal et al accessed the video source from the laparoscopic tower and explored the ergonomic improvements that OST devices could provide by displaying the endoscopic video anywhere in the operating room, even following the surgeon's head. Regarding ARV devices, the analyzed studies highlighted their potential to enhance the experience during laparoscopic surgery training. In the study by Preetha et al, a convolutional neural network was used to predict depth from laparoscopic imaging and recognize the manipulation of laparoscopic instruments for use in surgical skill assessment. Another study also used depth prediction to generate and display optimal trajectories for surgical instruments. Solutions for depth perception support were also presented to improve the training process of novice trainees in laparoscopic surgery. The presence of assistive information in the surgeon's field of view is not a major disadvantage, as it has been shown that the use of information-enhanced holograms does not have a negative impact on the surgical working field. In another case, ARV technology was used to generate 3D models from preoperative imaging and superimpose them onto laparoscopic images as support during the performance of the surgical task or formative procedure. Another important aspect to consider during laparoscopic surgery training is the analysis of force feedback during the performance of different laparoscopic tasks and procedures. This factor was explored in an ARV training environment, showing that feedback on the force exerted improves the acquisition of tissue-handling skills. Other studies (6/51, 12%) investigated the overlay of assistive content during training tasks and procedures by means of ARV applications and concluded that it helps trainees reduce path length and deviation along the depth axis and improve their orientation skills.
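The force-feedback findings above rest on measuring the force an instrument applies to tissue and relating it to a safe-handling limit. The sketch below illustrates that idea only; the 2 N limit and the sensor readings are invented for illustration and are not taken from any of the reviewed studies.

```python
def force_feedback_summary(forces_n, safe_limit_n=2.0):
    """Summarize grasp-force samples (in newtons) against a safe-handling
    limit; returns (peak force, fraction of samples above the limit)."""
    peak = max(forces_n)
    over_fraction = sum(1 for f in forces_n if f > safe_limit_n) / len(forces_n)
    return peak, over_fraction

samples = [0.5, 1.2, 2.6, 3.1, 1.8, 0.9]  # simulated force-sensor readings
peak, over = force_feedback_summary(samples)
print(peak, round(over, 2))  # → 3.1 0.33
```

Reported back to the trainee after each task, such a summary gives concrete, objective feedback on tissue handling rather than a subjective impression.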
All in all, we can see that both OST and ARV devices show significant potential as training aids in laparoscopic surgery. Although OST devices are a more recent and less explored development, they offer enhanced versatility compared with the traditional video devices used in early AR technologies. The reviewed studies highlight the various applications of OST and ARV, including novel interaction methods, access to preoperative imaging, ergonomic improvements, depth analysis, and instrument motion analysis. This suggests that both OST and ARV have the potential to enhance laparoscopic surgery training, with OST devices offering greater versatility and opportunities for innovation in this field.

Sensorization (RQ 2)

The studies analyzed in this review mainly presented 2 types of sensing technologies: haptics and tracking. The use of haptic technology in laparoscopic surgery training has been a topic of considerable interest, as evidenced by the number of studies found on this topic (9/51, 18% of the studies in total). These studies explored the potential benefits of haptic feedback in enhancing the acquisition of laparoscopic skills and improving surgical performance. The use of haptic feedback in laparoscopic surgery simulation was examined in several studies (4/51, 8%), demonstrating its efficacy in improving surgeons' dexterity. The incorporation of haptic feedback in AR-based laparoscopic surgery training applications has been shown to lead to improved performance among novice surgeons and to improve surgeons' skills in laparoscopic suturing. The efficacy of haptic feedback in laparoscopic skill acquisition was also studied by other authors (4/51, 8%), who showed that it significantly enhanced surgeons' precision and instrument manipulation skills. These studies highlight the potential of haptic feedback as a valuable adjunct to laparoscopic surgery training.
It seems evident that haptic technology promises further advances in laparoscopic surgery training and proficiency. Regarding tracking technologies, there were 2 main approaches: a sensor-based approach and an optical technology–based approach. The sensor-based approach uses sensors placed at specific locations to track the object in the training environment. Meanwhile, the optical technology–based approach uses image analysis to track the objects of interest. The studies included in this review used artificial vision techniques and optical sensorization, mainly by using the laparoscopic camera feed. For instance, Loukas et al used 2 different algorithms to estimate the tooltip position, one using an adaptive model of a color strip attached to the tool and the other directly tracking the shaft of the tool. Optical technology plays a crucial role in accurate tracking in clinical settings. Zahiri et al used this technology to evaluate the performance of an object transfer task in basic training environments by tracking colored rings. Another study focused on tracking anatomical structures such as the Calot triangle and user interaction during the training procedure. Vision-matching techniques enhance trainees' visualization of hidden anatomical structures by superimposing 3D models onto the laparoscopic camera source, improving trainees' understanding of spatial relationships. Vision-matching techniques were also used for pairing real and augmented vision, in some cases automatically and in others semiautomatically. Thus, in the study by Hughes-Hallett et al, a semiautomated real-time overlay of preoperative models onto the anatomical field structures was performed, improving accuracy and efficiency in the AR-guided training process, whereas Pessaux et al and Condino et al used manual assistance to align the virtual objects with the patient's anatomy.
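The color-strip and colored-ring tracking described above boils down to segmenting marker-colored pixels in each camera frame and taking their centroid as the tool position. A minimal sketch of that idea follows; the reviewed systems work on the live laparoscopic feed and typically threshold in HSV space (e.g., with OpenCV) with temporal filtering, so the RGB ranges and synthetic frame here are illustrative assumptions only.

```python
import numpy as np

def track_color_marker(frame, lower, upper):
    """Return the (row, col) centroid of pixels whose RGB values fall
    inside the [lower, upper] range, or None if no pixel matches."""
    mask = np.all((frame >= np.asarray(lower)) & (frame <= np.asarray(upper)), axis=-1)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 100x100 RGB frame: dark background with a green "marker" patch
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:50, 60:70] = (20, 200, 30)  # marker occupying rows 40-49, cols 60-69

print(track_color_marker(frame, lower=(0, 150, 0), upper=(80, 255, 80)))  # → (44.5, 64.5)
```

Running the same segmentation on every frame of the camera feed yields a tooltip trajectory from which performance metrics can later be derived.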
Overall, the integration of artificial vision techniques and optical technology plays a crucial role in the identification and tracking of the patient's anatomical structures. These solutions improve the understanding of the steps to be carried out during training activities, as well as the identification of objects manipulated by trainees in laparoscopic surgery training. The eye-tracking technology present in many OST devices has shown great potential in various fields, including human-computer interaction and the cognitive sciences. However, in the context of laparoscopic surgery training, the use of this technology remains relatively unexplored. In this scoping review, only 4% (2/51) of the studies investigated the incorporation of eye-tracking technology for assistance in laparoscopic surgery training. In one of the studies, gaze tracking (using head tracking) was performed to focus the laparoscopic camera on the area where the surgeon's gaze was centered. This approach optimized how information about the surgical working environment is displayed and improved the surgeon's ergonomics. Another study used eye-tracking technology to detect convergence between the trainee's and the trainer's gaze, as well as between the trainee and the target area. By monitoring eye movements and fixation points, the system could provide objective feedback on the trainee's visual attention and alignment with the desired gaze targets. This approach has the potential to enhance trainee-trainer communication and improve the trainee's spatial awareness within the surgical field. Considering these aspects, there is a clear need for further investigation of the use of this technology to improve the training process in laparoscopic surgery. Eye tracking offers unique advantages, such as providing the trainee's visual focus, gaze patterns, and cognitive workload in real time.
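The gaze-convergence check described above can be reduced to comparing gaze direction vectors from the two eye trackers and flagging when they point at (nearly) the same target. The following is a simplified sketch of that comparison; the 5° threshold and the vector representation are hypothetical choices, not details taken from the reviewed study.

```python
import math

def gaze_angle_deg(g1, g2):
    """Angle in degrees between two 3D gaze direction vectors."""
    dot = sum(a * b for a, b in zip(g1, g2))
    n1 = math.sqrt(sum(a * a for a in g1))
    n2 = math.sqrt(sum(b * b for b in g2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def gazes_converge(trainee_gaze, trainer_gaze, threshold_deg=5.0):
    """True when trainee and trainer look in (nearly) the same direction."""
    return gaze_angle_deg(trainee_gaze, trainer_gaze) <= threshold_deg

print(gazes_converge((0, 0, 1), (0, 0.05, 1)))  # ~2.9° apart → True
print(gazes_converge((0, 0, 1), (1, 0, 0)))     # 90° apart → False
```

Logging the fraction of task time in which the gazes converge would give the trainer an objective measure of shared visual attention.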
The integration of eye-tracking technology into laparoscopic simulators and training platforms can enhance the visual perception, spatial orientation, and overall surgical performance of the trainee. In addition, tracking of laparoscopic instruments and of deformations in bodies or organs was also performed using artificial visual markers. In some cases, these were colored markers placed around the distal area of the instrument shaft. Reflective markers were also used, although in this case it was necessary to use infrared light and camera filters for identification. These techniques help improve the identification and tracking of the laparoscopic instruments, which is essential to evaluate the surgical performance of trainees and assist them during the training activities. In addition, various studies (3/33, 9%) explored the calculation and modeling of deformations in bodies or organs for use in advanced AR applications for laparoscopic surgery training. In one study, visual markers were inserted directly on the liver so that they could be identified in the laparoscopic image, allowing for the localization and superimposition of preoperative 3D models of the liver and other relevant information on the endoscopic image. Another study used retroreflective markers on the laparoscopic instruments and within the formative surgical field to facilitate the superimposition of a preoperative 3D model on the laparoscopic image. Another type of technology used for tracking the surgical instruments and organs involved in the formative activity was electromagnetic sensors. Viglialoro et al incorporated these sensors into the laparoscope and the bile ducts or arterial tree of an artificial training model for real-time monitoring during the performance of simulator tasks. Electromagnetic sensors inserted into nitinol tubes representing the ducts and arterial trees helped infer possible deformations of the sensed tubular structures.
Consequently, the virtual scene was updated in real time, augmenting the laparoscopic images with information on the real-time position of these anatomical structures. By using the aforementioned techniques for the calculation of organ deformations and the inclusion of 3D models, these studies contributed significantly to the integration of virtual models into laparoscopic surgery environments, enhancing surgical training scenarios in real time.

Training Simulator and Setup (RQ 3)

Regarding the type of simulator used for AR-based laparoscopic surgery training, most were box trainers to which an AR assistance system was added. Of note is the ProMIS simulator (Haptica, Inc), which was one of the first commercial simulators for laparoscopic surgery training with AR functionalities. This simulator included basic training tasks such as dissection and suturing but also more complex procedures such as liver resection and sigmoid colectomy. It is the AR-based simulator with the largest number of scientific publications. Apart from this simulator, we highlight other commercial options to which AR-based laparoscopic surgery training assistance systems were added, such as the Fundamentals of Laparoscopic Surgery simulator (Limbs & Things Ltd), the Szabo Pelvic Trainer simulator (ID Trust Medical) together with the iSurgeon system, the eoSim simulator (eoSurgical), the SIMULAP-IC05 simulator (Jesús Usón Minimally Invasive Surgery Centre), and the Body Torso laparoscopic trainer simulator (Pharmabotics Ltd). Regarding the training model, artificial models were most common because they are generally more accessible than organic models, although less realistic. However, increasingly realistic artificial models have been achieved by simulating the anatomy and behavior of real tissues, as in the study by Viglialoro et al, which presented a 3D replica of the patient's own liver and gallbladder.
Other options presented included the enhancement of the physical model using AR imaging.

Evaluation (RQ 4)

In the studies analyzed, various types of metrics were used to evaluate the quality of surgical performance and the surgical skills of trainees. According to their nature, these metrics can be divided into objective and subjective metrics. Obtaining objective metrics is one of the main advantages of using AR techniques in laparoscopic surgery training. Some training simulators, such as ProMIS, allow for recording the actions executed by the trainee to obtain metrics such as path length or motion smoothness of the laparoscopic instruments, which can help evaluate the trainee's performance and learning curve without the need for constant supervision by an expert evaluator. Although some metrics were used recurrently in the analyzed studies, so far there is no standardized set of objective metrics for evaluating performance or technical skills in laparoscopic surgery. The most common metrics were execution time, path length, economy of movement, and motion smoothness of the laparoscopic instruments. Other metrics were used for specific tasks, such as knotting strength, time spent in the correct suturing and knotting area, or force exerted by the laparoscopic instruments on the target organ or tissue. Other types of surgical performance quality assessment based on questionnaires administered by external evaluators were also used, such as the OSATS, a highly standardized surgical assessment tool. Horeman et al and Leblanc et al also used similar questionnaires based on peer review to determine surgeons' experience in laparoscopic surgery. Subjective evaluation was commonly used for assessing the validity of the proposed solution. None of the studies used a standard questionnaire for assessing system validity, but many similarities were found in the aspects of the systems they evaluated.
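The objective instrument-motion metrics discussed above (path length, economy of movement, and motion smoothness) can all be derived offline from a sampled tooltip trajectory. A minimal sketch follows, assuming positions sampled at a fixed rate and using mean squared jerk as a simple smoothness proxy; the reviewed simulators do not publish their exact formulas, so these definitions are illustrative.

```python
import numpy as np

def motion_metrics(positions, dt):
    """Compute objective metrics from an (N, 3) array of instrument-tip
    positions sampled every dt seconds."""
    p = np.asarray(positions, dtype=float)
    steps = np.diff(p, axis=0)
    path_length = float(np.linalg.norm(steps, axis=1).sum())
    straight = float(np.linalg.norm(p[-1] - p[0]))
    economy = straight / path_length if path_length > 0 else 1.0  # 1.0 = ideal
    jerk = np.diff(p, n=3, axis=0) / dt**3        # third derivative of position
    mean_sq_jerk = float((np.linalg.norm(jerk, axis=1) ** 2).mean())
    return path_length, economy, mean_sq_jerk

# A straight 10 cm move sampled at 10 Hz: economy 1.0 and (near-)zero jerk
t = np.linspace(0.0, 0.1, 11)
traj = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
length, economy, jerk = motion_metrics(traj, dt=0.1)
print(round(length, 3), round(economy, 3), round(jerk, 9))  # → 0.1 1.0 0.0
```

Lower mean squared jerk and economy values closer to 1.0 would indicate smoother, more direct instrument movements, which is the behavior such metrics are meant to reward.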
For example, usability, realism, and didactic value were included in several studies (7/18, 39%). Questionnaires with Likert scales were the most commonly used format. Regarding assessment, the size and type of the samples included in the studies were also important. There was wide variation in sample size, with samples ranging from 10 to 270 participants. Some studies (2/25, 8%) included >100 participants, but most (23/25, 92%) had sample sizes of <100 participants, in many cases <30 individuals. Finally, the included studies encompassed participants with different levels of laparoscopic surgery experience: novices, intermediates, and experts. Novices, referring to individuals with limited or no previous laparoscopic surgery experience, were frequently included in the studies. Intermediate-level participants, characterized by a moderate level of experience, were present in a few studies (4/25, 16%). This may be because their inclusion makes it difficult to differentiate results between study groups, as there is sometimes no clear difference between the intermediate group and the novice or expert groups. Experts, representing individuals with advanced laparoscopic skills, were included in several studies (8/25, 32%). This stratification of participants allows for a comprehensive validation of a system's effectiveness across different proficiency levels and helps detect the different needs that these groups may have.

Tasks and Procedures (RQ 5)

Regarding the tasks and procedures used as training activities in the studies analyzed, the most widespread were those focused on eye-hand coordination, such as peg transfer, instrument movement, and instrument navigation tasks. The next most used type of task was the simulator suturing task. Although less frequent, some studies (17/51, 33%) also presented more complex models for procedures such as cholecystectomy.
In this case, models of the liver and biliary anatomy were used to facilitate the training of novice surgeons in gallbladder extraction using laparoscopic techniques. These types of procedures, although still basic, are highly appropriate for learning fundamental tasks such as the localization of critical anatomical structures (cystic duct and cystic artery), the dissection and cutting of the cystic duct and cystic artery, and gallbladder removal. In a closer-to-reality environment, AR-based assistive tools were used for procedures such as cholecystectomy in ex vivo (porcine) experimental models and sigmoid colectomy in human cadavers. Considering that the main objective of AR-based training assistance applications in laparoscopic surgery is to provide students with support tools during their training activities, the type of task or procedure chosen has a great influence on the usefulness of these tools. It seems evident that, in considerably basic tasks, training assistance systems would not provide significant value to the student, mainly due to the low level of complexity of the activity. However, it is in the more challenging tasks and procedures that training assistance systems may provide remarkable training value. For instance, a task that can present a high level of difficulty for novice laparoscopic surgeons is suturing. Support tools can enhance reality for the student with visual information to assist in the performance of the suture, such as the proper grip of the needle, the passage of the needle through the tissue, or the process of double and single knotting.
In more complex procedures such as laparoscopic cholecystectomy or lymphadenectomy, these training assistance systems could support the students in locating complex or hidden anatomical structures, as well as remotely transmitting instructions from tutors who are reviewing the students' training process, among other possibilities.

Discussion

AR is an emerging technology that is being applied to various fields, including health care. In this sense, applications have been developed mainly oriented toward surgical planning, medical and surgical training, and surgical assistance. In this study, a review of the scientific literature oriented toward solutions for assistance in laparoscopic surgery training was carried out. We analyzed aspects that we considered critical to innovate and advance in this field, such as the devices used to provide AR to the user, the training environments, the types of training tasks or procedures, and the evaluation metrics used to analyze the evolution of students during the training activities and provide them with educational feedback. For this purpose, we analyzed all the RQs raised in this work with the aim of providing observations that may help researchers guide their new proposals and innovative solutions toward improving laparoscopic surgery training. The evolution of the number of publications included in this scoping review was heterogeneous and may be associated with the evolution of AR technology and the introduction of new applications and devices. Although there were some studies in the first years of this century associated with the appearance of the first AR applications in games and smartphones, it was in 2009 that a relevant increase in interest in the use of AR was observed, coinciding with the success of tools such as ARToolKit, an open-source library for the creation of AR applications based on the identification of visual markers.
This interest continued until 2015, which could be related to the launch of Google Glass. This device was the first proposal that took AR off the screen and into the real world using a wearable device. However, it was not until the launch of the second version of Microsoft's HoloLens (2019) that the potential of technological proposals increasingly detached from screen-based solutions began to be glimpsed. This fact may be related to the recent increase in interest in the use of this type of AR-linked technology in laparoscopic surgery training. In the following sections, we analyze the results to identify relevant insights that address each of the RQs. While video devices such as monitors, laptops, smartphones, and tablets have traditionally been used in the early stages of AR technologies to enhance laparoscopic surgery training, the use of OST devices is a relatively new development due to the recent emergence of devices that facilitate these functions (the first such device appeared in 2020). Therefore, only 12% (6/51) of studies were found that explored the application of OST technology in this context. OST devices offer greater versatility than traditional video devices (monitors, smartphones, and tablets) for laparoscopic surgery training. As several studies (6/51, 12%) demonstrated, OST devices provide novel interaction methods and functionalities such as eye or gaze tracking, voice commands, and gestural interactions to manipulate virtual objects or add annotations. For instance, head gaze was used to point at the laparoscopic virtual screen to enhance communication between trainer and trainee. Felinska et al used eye-tracking technology to create a heat map and analyze differences between the visual focus areas of trainees and instructors.
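The heat-map comparison described for eye tracking can be sketched simply: bin the gaze samples of each person into a normalized 2D histogram and measure how much the two maps overlap. The grid resolution, the histogram-intersection overlap measure, and the synthetic gaze clouds below are our own illustrative choices, not Felinska et al's method.

```python
import numpy as np

def gaze_heatmap(gaze_xy, shape=(48, 64)):
    """Bin normalized gaze points (x, y in [0, 1]) into a 2D fixation
    histogram and normalize it to sum to 1 (a discrete heat map)."""
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    h, _, _ = np.histogram2d(gaze_xy[:, 1], gaze_xy[:, 0],
                             bins=shape, range=[[0, 1], [0, 1]])
    return h / h.sum()

def heatmap_overlap(a, b):
    """Histogram intersection: 1.0 = identical focus areas, 0 = disjoint."""
    return float(np.minimum(a, b).sum())

# Synthetic gaze clouds: trainee and trainer both fixate screen centre,
# while a third cloud fixates a different corner of the screen.
rng = np.random.default_rng(0)
trainee = rng.normal([0.5, 0.5], 0.05, size=(500, 2)).clip(0, 1)
trainer = rng.normal([0.5, 0.5], 0.05, size=(500, 2)).clip(0, 1)
elsewhere = rng.normal([0.1, 0.9], 0.02, size=(500, 2)).clip(0, 1)
same = heatmap_overlap(gaze_heatmap(trainee), gaze_heatmap(trainer))
diff = heatmap_overlap(gaze_heatmap(trainee), gaze_heatmap(elsewhere))
```

A high overlap indicates that trainee and instructor attend to the same region, which is the kind of objective convergence signal such systems feed back to the trainee.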
In another study using OST technology, the Vuforia library was used to recognize artificial visual markers in the training environment to insert virtual elements to enhance the experience of laparoscopic surgery training . However, this solution was limited to simulated environments, making it difficult to use in real clinical settings as it relied on 5 cameras for marker recognition. Sánchez-Margallo et al and Simone et al focused on the creation of 3D models from real preoperative images to implement AR-based training applications. In addition, this functionality was extended by allowing tutors to add annotations and information to the models . One of the advantages of OST devices is that they can be used in any training environment used in laparoscopic surgery. The trainee always visualizes the real training environment, and it is the augmented information (holograms) that is superimposed onto the real image through the glasses. For instance, in the study by Sánchez-Margallo et al , it was not necessary to connect the OST device to the laparoscopic simulator as it did not require the laparoscopic video feed. However, Zorzal et al accessed the video source from the laparoscopic tower and explored the ergonomic improvements that OST devices could provide by displaying the endoscopic video in any place of the operating room and even following the surgeon’s head. Regarding ARV devices, the analyzed studies highlighted their potential to enhance the experience during laparoscopic surgery training. In the study by Preetha et al , a convolutional neural network was used to predict depth from laparoscopic imaging and recognize the manipulation of laparoscopic instruments for use in surgical skill assessment. Another study also used depth prediction to generate and display optimal trajectories for surgical instruments . Solutions for depth perception support were also presented to improve the training process of novice trainees in laparoscopic surgery . 
The presence of assistive information in the surgeon’s field of view is not a major disadvantage as it has been shown that the use of information-enhanced holograms does not have a negative impact on the surgical working field . In another case, ARV technology was used to generate 3D models from preoperative imaging and superimpose them onto laparoscopic images as a support during the performance of the surgical task or formative procedure . Another important aspect to consider during laparoscopic surgery training is to analyze force feedback during the performance of different laparoscopic tasks and procedures. This factor was explored in an ARV training environment, showing that feedback of force exerted improves the acquisition of tissue-handling skills . Other studies (6/51, 12%) investigated the overlay of assistive content in the performance of training tasks and procedures by means of ARV applications and concluded that it helps the trainees reduce path length and deviation along the depth axis and improve their orientation skills . All in all, we can see that both OST and ARV devices show significant potential as training aids in laparoscopic surgery. Although OST devices are a more recent and less explored development, they offer enhanced versatility compared to traditional video devices used in early AR technologies. The reviewed studies highlight the various applications of OST and ARV, including novel interaction methods, access to preoperative imaging, ergonomic improvements, depth analysis, and instrument motion analysis. This suggests that both OST and ARV have the potential to enhance laparoscopic surgery training, with OST devices offering greater versatility and opportunities for innovation in this field. The studies analyzed in this review mainly presented 2 types of sensing technologies: haptics and tracking. 
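Force-based feedback of the kind mentioned above (force exerted on tissue during tissue handling) can be reduced to a few summary numbers for the trainee. The following sketch is hypothetical: the function, the 2 N safety threshold, and the synthetic force trace are our own illustration, not taken from any reviewed system.

```python
import numpy as np

def force_feedback_summary(force_n, dt, limit_n=2.0):
    """Summarize a tool-tissue force trace (newtons): peak and mean
    force, plus the total time spent above a safety threshold."""
    force_n = np.asarray(force_n, dtype=float)
    over = force_n > limit_n                 # samples exceeding the limit
    return {"peak_n": float(force_n.max()),
            "mean_n": float(force_n.mean()),
            "time_over_limit_s": float(over.sum() * dt)}

# Synthetic trace sampled at 100 Hz: a 1 N baseline grip with a brief
# spike to 2.5 N halfway through the manoeuvre.
t = np.linspace(0.0, 1.0, 101)
trace = 1.0 + 1.5 * np.exp(-((t - 0.5) ** 2) / 0.005)
summary = force_feedback_summary(trace, dt=0.01)
```

Reporting "time over limit" rather than raw traces gives the trainee an actionable target (keep it at zero), which matches the finding that explicit force feedback improves tissue-handling skills.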
The use of haptic technology in laparoscopic surgery training has been a topic of considerable interest, as evidenced by the number of studies found on this topic (9/51, 18% of the studies in total). These studies explored the potential benefits of haptic feedback in enhancing the acquisition of laparoscopic skills and improving surgical performance. The use of haptic feedback in laparoscopic surgery simulation was examined in several studies (4/51, 8%), demonstrating its efficacy in improving surgeons' dexterity. The incorporation of haptic feedback in AR-based laparoscopic surgery training applications has been shown to lead to improved performance among novice surgeons and to improve surgeons' skills in laparoscopic suturing. The efficacy of haptic feedback in laparoscopic skill acquisition was also studied by other authors (4/51, 8%), who showed that it significantly enhanced surgeons' precision and instrument manipulation skills. These studies highlighted the potential of haptic feedback as a valuable adjunct to laparoscopic surgery training. It seems evident that haptic technology promises further advances in laparoscopic surgery training and proficiency. Regarding tracking technologies, there were 2 main approaches: a sensor-based approach and an optical technology–based approach. The sensor-based approach uses sensors placed at specific locations to track the object in the training environment. Meanwhile, the optical technology–based approach uses image analysis to track the objects of interest. The studies included in this review used artificial vision techniques and optical sensorization, mainly by using the laparoscopic camera feed. For instance, Loukas et al used 2 different algorithms to estimate the tooltip position, one using an adaptive model of a color strip attached to the tool and the other tracking the shaft of the tool directly. Optical technology plays a crucial role in accurate tracking in clinical settings.
Zahiri et al used this technology to evaluate the performance of an object transfer task in basic training environments by tracking colored rings. Another study focused on tracking anatomical structures such as the Calot triangle and user interaction during the training procedure . Vision-matching techniques enhance trainees’ visualization of hidden anatomical structures by superimposing 3D models onto the laparoscopic camera source, improving trainees’ understanding of spatial relationships . Vision-matching techniques were also used for pairing of real and augmented vision, some of them automatically and others semiautomatically. Thus, in the study by Hughes-Hallet et al , a semiautomated real-time overlay of preoperative models onto the anatomical field structures was performed, improving accuracy and efficiency in the AR-guided training process, whereas Pessaux et al and Condino et al used manual assistance to align the virtual objects with the patient’s anatomy. Overall, the integration of artificial vision techniques and optical technology plays a crucial role in the identification and tracking of the anatomical structures of the patient. These solutions improve the understanding of the steps to be carried out during training activities, as well as the identification of manipulated objects by trainees in laparoscopic surgery training. The eye-tracking technology present in many of the OST devices has shown great potential in various fields, including human-computer interaction and cognitive sciences. However, in the context of laparoscopic surgery training, the use of this technology remains relatively unexplored. In this scoping review, only 4% (2/51) of the studies investigated the incorporation of eye-tracking technology for assistance in laparoscopic surgery training. In one of the studies, gaze tracking (using head tracking) was performed to focus the laparoscopic camera on the area where the surgeon’s gaze was centered . 
This study made it possible to optimize the way of visualizing the information on the surgical working environment and the surgeon’s ergonomics. Another study aimed to use eye-tracking technology to detect convergence between the trainee’s and the trainer’s gaze, as well as between the trainee and the target area . By monitoring eye movements and fixation points, the system could provide objective feedback on the trainee’s visual attention and alignment with the desired gaze targets. This approach has the potential to enhance trainee-trainer communication and improve the trainee’s spatial awareness within the surgical field. Considering these aspects, there is a clear need for further investigation on the use of this technology to improve the training process in laparoscopic surgery. Eye tracking offers unique advantages, such as providing the trainee’s visual focus, gaze patterns, and cognitive workload in real time. The integration of eye-tracking technology into laparoscopic simulators and training platforms can enhance the visual perception, spatial orientation, and overall surgical performance of the trainee. In addition, tracking of laparoscopic instruments and deformations in bodies or organs was also performed using artificial visual markers. In some cases, these were colored markers that were placed around the distal area of the instrument shaft . Reflective markers were also used, although in this case, it was necessary to use infrared light and camera filters for identification . The use of these techniques helps improve the identification and tracking of the laparoscopic instruments, which is essential to evaluate the surgical performance of trainees and assist them during the training activities. In addition, various studies (3/33, 9%) explored the calculation and modeling of deformations in bodies or organs to be used in advanced AR applications for laparoscopic surgery training. 
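The color-marker tracking described above (colored rings or strips on the instrument shaft) can be sketched, at its simplest, as per-channel thresholding followed by a centroid computation. This is a toy illustration on a synthetic frame; real systems typically threshold in HSV space, filter noise, and run per frame of the laparoscopic feed.

```python
import numpy as np

def track_color_marker(frame, lower, upper):
    """Locate a colored marker in an RGB frame by thresholding each
    channel between `lower` and `upper`, returning the centroid
    (row, col) of matching pixels, or None if nothing matches."""
    frame = np.asarray(frame)
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 100x100 frame with a green 10x10 marker; its true centroid
# lies at row 44.5, column 64.5.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:50, 60:70] = (30, 200, 40)
center = track_color_marker(frame, lower=(0, 150, 0), upper=(80, 255, 80))
```

Tracking the centroid frame by frame yields the tooltip trajectory from which path length and smoothness metrics are then computed.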
In one study, visual markers were inserted directly on the liver so that they could be identified by the laparoscopic image and allow for the localization and superimposition of preoperative 3D models of the liver and other relevant information on the endoscopic image . Another study used retroreflective markers on the laparoscopic instruments and within the formative surgical field to facilitate the superimposition on the laparoscopic image of a preoperative 3D model . Another type of technology used for tracking surgical instruments and organs involved in the formative activity was the use of electromagnetic sensors. Viglialoro et al incorporated these sensors into the laparoscope and bile ducts or arterial tree of an artificial training model for real-time monitoring during the performance of simulator tasks. Electromagnetic sensors inserted into nitinol tubes representing the ducts and arterial trees helped infer possible deformations of the sensed tubular structures. Consequently, the virtual scene was updated in real time, augmenting the laparoscopic images with information on the real-time position of these anatomical structures. By using the aforementioned techniques for the calculation of organ deformations and the inclusion of 3D models, these studies significantly contributed to the integration of virtual models into the laparoscopic surgery environments, enhancing surgical training scenarios in real time. Regarding the type of simulator used for AR-based laparoscopic surgery training, most of them were box trainers to which an AR assistance system was added. Of note is the ProMIS simulator (Haptica, Inc), which was one of the first commercial simulators for laparoscopic surgery training with AR functionalities. This simulator included basic training tasks such as dissection and suturing but also more complex procedures such as liver resection and sigmoid colectomy. This is the AR-based simulator with the largest number of scientific publications. 
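Superimposing a preoperative model onto the endoscopic image, as in the studies above, amounts at its simplest to alpha-blending the model's projected silhouette over the video frame. The sketch below is a deliberately minimal illustration: in a real system the boolean mask would come from the tracked 3D geometry, not be hand-drawn.

```python
import numpy as np

def overlay_model(frame, model_mask, color, alpha=0.4):
    """Alpha-blend a projected model silhouette (boolean mask) onto an
    RGB video frame; pixels outside the mask are left untouched."""
    out = frame.astype(float)
    out[model_mask] = ((1.0 - alpha) * out[model_mask]
                       + alpha * np.asarray(color, dtype=float))
    return out.astype(np.uint8)

# Hypothetical example: highlight a projected duct region in red on a
# black 64x64 frame.
frame = np.zeros((64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 10:30] = True
blended = overlay_model(frame, mask, color=(255, 0, 0), alpha=0.4)
```

Keeping the overlay semi-transparent (alpha well below 1) is what allows the augmented structure to remain visible without hiding the underlying laparoscopic image.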
Apart from this simulator, we highlight other commercial options to which AR-based laparoscopic surgery training assistance systems were added, such as the Fundamentals of Laparoscopic Surgery simulator (Limbs & Things Ltd), the Szabo Pelvic Trainer simulator (ID Trust Medical) together with the iSurgeon system, the eoSim simulator (eoSurgical), the SIMULAP-IC05 simulator (Jesús Usón Minimally Invasive Surgery Centre), or the Body Torso laparoscopic trainer simulator (Pharmabotics Ltd). Regarding the training model, the use of artificial models was most common because they are generally more accessible than organic models, although with less realism. However, more and more realistic artificial models have been achieved by simulating the anatomy and behavior of real tissues, as in the case of the study by Viglialoro et al, in which they presented a 3D replica of the patient's own liver and gallbladder. Other options presented were the enhancement of the physical model using AR imaging.

Although the PRISMA-ScR guidelines were followed to carry out this scoping review to reduce possible biases, we must highlight some limitations encountered in our work. Although most relevant articles found in other databases such as Springer or Google Scholar also appeared in the databases consulted, it is possible that some papers were not included because these databases were not used directly. In addition, the specific search strategies and keywords used in the various databases may have led to the exclusion of articles that used different terminology to refer to the same topics. Although the number of articles affected was small, restricting the selection to English-language publications and excluding articles that we could not access may have led to the omission of studies that could have provided relevant information.
Regarding the devices used, OST devices are relatively new, so there is still much to explore in terms of what they can contribute to the field of laparoscopic surgery training, which we will address in future studies. It should also be noted that there may be interesting technical proposals for the support of laparoscopic surgery training that, as they have not yet been used for such training, were not included in the studies analyzed in our scoping review. Furthermore, the lack of standardized metrics for evaluating the different support systems did not allow for a comprehensive comparison between systems and a comparative analysis of their results. In conclusion, this scoping review sheds light on the dynamic landscape of AR technologies within laparoscopic surgery training. Although OST devices have recently emerged and their advantages over traditional AR methods are still being explored, it is relevant to indicate that the results obtained are promising and open up new opportunities for the use of AR in this type of training activities. In turn, haptic feedback emerges as a valuable asset for the acquisition of laparoscopic skills. Eye-tracking technology provides relevant information during learning, although its application is in an incipient phase that requires further exploration. The prevalence of commercial simulators and artificial models underscores the delicate balance between safety and realism. Regarding assessments, this review highlights the importance of extending the use of expert assessments, such as OSATS or GOALS, and underlines the need for standardized objective assessment. These assessments are based on the study of participants’ behavior in simple tasks such as navigation and object manipulation and in other tasks with a higher level of difficulty, such as suturing. In any case, it seems clear that AR would be more useful for more complex tasks related to complex surgical procedures. 
These findings invite future research into the full potential of AR for enhancing laparoscopic skill acquisition.
Inter- and intra-animal variation in the integrative properties of stellate cells in the medial entorhinal cortex
The concept of cell types provides a general organizing principle for understanding biological structures including the brain . The simplest conceptualization of a neuronal cell type, as a population of phenotypically similar neurons with features that cluster around a single set point , is extended by observations of variability in cell type features, suggesting that some neuronal cell types may be conceived as clustering along a line rather than around a point in a feature space . Correlations between the functional organization of sensory, motor and cognitive circuits and the electrophysiological properties of individual neuronal cell types suggest that this feature variability underlies key neural computations . However, within-cell type variability has typically been deduced by combining data obtained from multiple animals. By contrast, the structure of variation within individual animals or between different animals has received little attention. For example, apparent clustering of properties along lines in feature space could reflect a continuum of set points, or could result from a small number of discrete set points that are obscured by inter-animal variation . Moreover, although investigations of invertebrate nervous systems show that set points may differ between animals , it is not clear whether mammalian neurons exhibit similar phenotypic diversity . Distinguishing these possibilities requires many more electrophysiological observations for each animal than are obtained in typical studies. Stellate cells in layer 2 (SCs) of the medial entorhinal cortex (MEC) provide a striking example of correspondence between functional organization of neural circuits and variability of electrophysiological features within a single cell type. The MEC contains neurons that encode an animal’s location through grid-like firing fields . 
The spatial scale of grid fields follows a dorsoventral organization , which is mirrored by a dorsoventral organization in key electrophysiological features of SCs . Grid cells are further organized into discrete modules , with the cells within a module having a similar grid scale and orientation ; progressively more ventral modules are composed of cells with wider grid spacing . Studies that demonstrate dorsoventral organization of integrative properties of SCs have so far relied on the pooling of relatively few measurements per animal. Hence, it is unclear whether the organization of these cellular properties is modular, as one might expect if they directly set the scale of grid firing fields in individual grid cells . The possibility that set points for electrophysiological properties of SCs differ between animals has also not been considered previously. Evaluation of variability between and within animals requires statistical approaches that are not typically used in single-cell electrophysiological investigations. Given appropriate assumptions, inter-animal differences can be assessed using mixed effect models that are well established in other fields . Because tests of whether data arise from modular as opposed to continuous distributions have received less general attention, to facilitate detection of modularity using relatively few observations, we introduce a modification of the gap statistic algorithm that estimates the number of modes in a dataset while controlling for observations expected by chance (see 'Materials and methods' and – ). This algorithm performs well compared with discreteness metrics that are based on the standard deviation of binned data , which we find are prone to high false-positive rates . 
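The authors' modified gap statistic is not given in this excerpt, but the underlying idea can be illustrated with the standard gap statistic: compare the within-cluster dispersion of the data against uniform reference samples and pick the smallest number of clusters satisfying Tibshirani et al's selection rule. The sketch below uses a naive 1D k-means and is our own simplification, not the authors' algorithm.

```python
import numpy as np

def kmeans_1d(x, k, iters=25):
    """Plain Lloyd's algorithm for 1D data with quantile initialisation.
    Returns the within-cluster sum of squares W_k."""
    c = np.quantile(x, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):      # keep stale centroid if cluster empties
                c[j] = x[labels == j].mean()
    return np.sum((x - c[labels]) ** 2)

def estimate_modes(x, kmax=4, n_ref=20, seed=0):
    """Gap-statistic estimate of the number of modes in a 1D sample:
    compare log(W_k) against uniform reference samples drawn over the
    observed range, and return the smallest k with
    Gap(k) >= Gap(k+1) - s(k+1)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    log_w = np.array([np.log(kmeans_1d(x, k)) for k in range(1, kmax + 1)])
    ref = np.empty((n_ref, kmax))
    for i in range(n_ref):
        u = rng.uniform(x.min(), x.max(), x.size)
        ref[i] = [np.log(kmeans_1d(u, k)) for k in range(1, kmax + 1)]
    gap = ref.mean(axis=0) - log_w
    s = ref.std(axis=0) * np.sqrt(1 + 1 / n_ref)
    for k in range(kmax - 1):
        if gap[k] >= gap[k + 1] - s[k + 1]:
            return k + 1
    return kmax

rng = np.random.default_rng(1)
unimodal = rng.normal(0.0, 1.0, 300)
bimodal = np.concatenate([rng.normal(0.0, 0.1, 150),
                          rng.normal(10.0, 0.1, 150)])
```

On a unimodal sample this rule stops at k = 1, while two well-separated modes yield k = 2; comparing against uniform references is what controls for apparent clustering expected by chance.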
We find that recordings from approximately 30 SCs per animal should be sufficient to detect modularity using the modified gap statistic algorithm, given the experimentally observed separation between grid modules (see 'Materials and methods'). Although methods for high-quality recording from SCs in ex vivo brain slices are well established, typically fewer than five recordings per animal were made in previous studies, which is many fewer than our estimate of the minimum number of observations required to test for modularity. We set out to establish the nature of the set points that determine the integrative properties of SCs by measuring intra- and inter-animal variation in key electrophysiological features using experiments that maximize the number of SCs recorded per animal. Our results suggest that set points for individual features of a neuronal cell type are established at the level of neuronal cell populations, differ between animals and follow a continuous organization.

Sampling integrative properties from many neurons per animal

Before addressing intra- and inter-animal variability, we first describe the data set used for the analyses that follow. We established procedures to facilitate the recording of integrative properties of many SCs from a single animal (see 'Materials and methods'). With these procedures, we measured and analyzed electrophysiological features of 836 SCs (n/mouse: range 11–55; median = 35) from 27 mice (median age = 37 days, age range = 18–57 days). The mice were housed either in a standard home cage (dimensions: 0.2 × 0.37 m, N = 18 mice, n = 583 neurons) or, from postnatal day 16, in a 2.4 × 1.2 m cage, which provided a large environment that could be freely explored (N = 9, n = 253, median age = 38 days). For each neuron, we measured six sub-threshold integrative properties and six supra-threshold integrative properties.
Unless indicated otherwise, we report the analysis of datasets that combine the groups of mice housed in standard and large home cages and that span the full range of ages. Because SCs are found intermingled with pyramidal cells in layer 2 (L2PCs), and as misclassification of L2PCs as SCs would probably confound investigation of intra-SC variation, we validated our criteria for distinguishing each cell type. To establish characteristic electrophysiological properties of L2PCs, we recorded from neurons in layer 2 that were identified by Cre-dependent marker expression in a Wfs1 Cre mouse line. Expression of Cre in this line, and in a similar line, labels L2PCs that project to the CA1 region of the hippocampus, but does not label SCs. We identified two populations of neurons in layer 2 of the MEC that were labelled in Wfs1 Cre mice. The more numerous population had properties consistent with L2PCs and could be separated from the unidentified population on the basis of a lower rheobase. The unidentified population had firing properties that were typical of layer 2 interneurons. A principal component analysis (PCA) clearly separated the L2PC population from the SC population, but did not identify subpopulations of SCs. The properties of the less numerous population were also clearly distinct from those of SCs. These data demonstrate that the SC population used for our analyses is distinct from other cell types also found in layer 2 of the MEC. To further validate the large SC dataset, we assessed the location dependence of individual electrophysiological features, several of which have previously been found to depend on the dorsoventral location of the recorded neuron. We initially fit the dependence of each feature on dorsoventral position using a standard linear regression model.
We found substantial (adjusted R² > 0.1) dorsoventral gradients in input resistance, sag, membrane time constant, resonant frequency, rheobase and the current-frequency (I-F) relationship. In contrast to the situation in SCs, we did not find evidence for dorsoventral organization of these features in L2PCs. Thus, our large dataset replicates the previously observed dependence of integrative properties of SCs on their dorsoventral position, and shows that this location dependence further distinguishes SCs from L2PCs.

Inter-animal differences in the intrinsic properties of stellate cells

To what extent does variability between the integrative properties of SCs at a given dorsoventral location arise from differences between animals? Comparing specific features between individual animals suggested that their distributions could be almost completely non-overlapping, despite consistent and strong dorsoventral tuning. If this apparent inter-animal variability results from the random sampling of a distribution determined by a common underlying set point, then fitting the complete data set with a mixed model in which animal identity is included as a random effect should reconcile the apparent differences between animals. In this scenario, the conditional R² estimated from the mixed model, in other words the estimate of variance explained by animal identity and location together, should be similar to the marginal R² value, which indicates the variance explained by location only. By contrast, if differences between animals contribute to experimental variability, the mixed model should predict different fitting parameters for each animal, and the estimated conditional R² should be greater than the corresponding marginal R². Fitting the experimental measures for each feature with mixed models suggests that differences between animals contribute substantially to the variability in properties of SCs.
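The contrast between marginal and conditional R² can be illustrated with a small simulation. Everything below is synthetic (the per-animal offsets, the dorsoventral slope and the noise levels are invented numbers), and a within-animal centering estimator stands in for the full mixed-model fit used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_mice, n_cells = 10, 30
animal = np.repeat(np.arange(n_mice), n_cells)
dv = rng.uniform(0, 1, n_mice * n_cells)        # dorsoventral position (mm)
offsets = rng.normal(0, 10, n_mice)             # hypothetical per-animal set points
rheo = 200 - 100 * dv + offsets[animal] + rng.normal(0, 5, animal.size)

# Centering within animals removes the per-animal offsets, so a regression
# on the centered data recovers the common dorsoventral slope.
dv_mean = np.array([dv[animal == m].mean() for m in range(n_mice)])
y_mean = np.array([rheo[animal == m].mean() for m in range(n_mice)])
dv_c, y_c = dv - dv_mean[animal], rheo - y_mean[animal]
slope = (dv_c @ y_c) / (dv_c @ dv_c)

var_fixed = np.var(slope * dv)                  # variance explained by location
var_resid = np.var(y_c - slope * dv_c)          # within-animal residual variance
intercepts = np.array([(rheo - slope * dv)[animal == m].mean()
                       for m in range(n_mice)])
var_animal = np.var(intercepts)                 # between-animal variance

total = var_fixed + var_animal + var_resid
r2_marginal = var_fixed / total                    # location only
r2_conditional = (var_fixed + var_animal) / total  # location + animal identity
```

Because the simulated animals genuinely differ, `r2_conditional` exceeds `r2_marginal`; if the offsets were all zero, the two values would nearly coincide, which is the signature the paper uses to distinguish the two scenarios.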
In contrast to simulated data in which inter-animal differences are absent, differences in fits between animals remained after fitting with the mixed model. This corresponds with expectations from fits to simulated data containing inter-animal variability. To visualize inter-animal variability for all measured features, we plot for each animal the intercept of the model fit (I), the predicted value at a location 1 mm ventral from the intercept (I+S), and the slope (lines). Strikingly, even for features such as rheobase and input resistance (IR) that are highly tuned to a neuron's dorsoventral position, the extent of variability between animals is similar to the extent to which the property changes between dorsal and mid-levels of the MEC. If set points that determine integrative properties of SCs do indeed differ between animals, then mixed models should provide a better account of the data than linear models that are generated by pooling data across all animals. Consistent with this, we found that mixed models for all electrophysiological features gave a substantially better fit to the data than linear models that considered all neurons as independent (adjusted p<2×10⁻¹⁷ for all models, χ² test). Furthermore, even for properties with substantial (R² > 0.1) dorsoventral tuning, the conditional R² value for the mixed effect model was substantially larger than the marginal R² value. Together, these analyses demonstrate inter-animal variability in key electrophysiological features of SCs, suggesting that the set points that establish the underlying integrative properties differ between animals.

Experience-dependence of intrinsic properties of stellate cells

Because neuronal integrative properties may be modified by changes in neural activity, we asked whether experience influences the measured electrophysiological features of SCs.
We reasoned that modifying the space through which animals can navigate may drive experience-dependent plasticity in the MEC. As standard mouse housing has dimensions less than the distance between the firing fields of more ventrally located grid cells, in a standard home cage only a relatively small fraction of ventral grid cells is likely to be activated, whereas larger housing should lead to the activation of a greater proportion of ventral grid cells. We therefore tested whether the electrophysiological features of SCs differ between mice housed in larger environments (28,800 cm²) and those with standard home cages (740 cm²). We compared the mixed models described above to models in which housing was also included as a fixed effect. To minimize the effects of age on SCs, we focused these and subsequent analyses on mice between P33 and P44 (N = 25, n = 779). We found that larger housing was associated with a smaller sag coefficient, indicating an increased sag response, a lower resonant frequency and a larger spike half-width (adjusted p<0.05). These differences resulted primarily from changes in the magnitude rather than the location dependence of each feature. Other electrophysiological features appeared to be unaffected by housing. To determine whether inter-animal differences remain after accounting for housing, we compared mixed models that include dorsoventral location and housing as fixed effects with equivalent linear regression models in which individual animals were not accounted for. Mixed models incorporating animal identity continued to provide a better account of the data, both for features that were dependent on housing (adjusted p<2.8×10⁻²¹) and for features that were not (adjusted p<1.4×10⁻⁷). Together, these data suggest that specific electrophysiological features of SCs may be modified by experience of large environments.
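The comparison of nested models, with and without housing as a fixed effect, can be sketched with a likelihood-ratio χ² test. The simulation below uses ordinary least squares on invented effect sizes for clarity; the paper's actual comparison was between mixed models, so this illustrates the test logic only.

```python
import numpy as np
from scipy.stats import chi2

def gaussian_loglik(resid):
    # Maximized Gaussian log-likelihood implied by OLS residuals.
    n = resid.size
    return -0.5 * n * (np.log(2 * np.pi * np.mean(resid ** 2)) + 1)

def ols_resid(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(0)
n = 200
dv = rng.uniform(0, 1, n)                       # dorsoventral position (mm)
housing = rng.integers(0, 2, n)                 # 0 = standard cage, 1 = large cage
y = 10 - 4 * dv - 1.0 * housing + rng.normal(0, 0.5, n)  # e.g. resonant frequency

X0 = np.column_stack([np.ones(n), dv])           # restricted: location only
X1 = np.column_stack([np.ones(n), dv, housing])  # full: location + housing
stat = 2 * (gaussian_loglik(ols_resid(X1, y)) - gaussian_loglik(ols_resid(X0, y)))
p_value = chi2.sf(stat, df=1)                    # one extra fixed-effect parameter
```

Here the simulated housing effect is large, so `p_value` falls far below 0.05; removing the housing term from the simulated data would leave `stat` near zero.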
After accounting for housing, significant inter-animal variation remains, suggesting that additional mechanisms acting at the level of animals rather than individual neurons also determine differences between SCs.

Inter-animal differences remain after accounting for additional experimental parameters

To address the possibility that other experimental or biological variables could contribute to inter-animal differences, we evaluated the effects of home cage size, brain hemisphere, mediolateral position, the identity of the experimenter and time since slice preparation. Several of the variables influenced some measured electrophysiological features; for example, properties primarily related to the action potential waveform depended on the mediolateral position of the recorded neuron. Nevertheless, significant inter-animal differences remained after accounting for each variable. We carried out further analyses using models that included housing, mediolateral position, experimenter identity and the direction in which sequential recordings were obtained as fixed effects, and using models fit to minimal datasets in which housing, mediolateral position and the recording direction were identical. These analyses again found evidence for significant inter-animal differences. Inter-animal differences could arise if the health of the recorded neurons differed between brain slices. To minimize this possibility, we standardized our procedures for tissue preparation (see 'Materials and methods') such that slices were of consistently high quality, as assessed by low numbers of unhealthy cells and by visualization of soma and dendrites of neurons in the slice. Several further observations are consistent with comparable slice quality between experiments.
First, if the condition of the slices had differed substantially between animals, then in better-quality slices it should be easier to record from more neurons, in which case features that depend on tissue quality would correlate with the number of recorded neurons. However, the majority (10/12) of the electrophysiological features were not significantly (p>0.2) associated with the number of recorded neurons. Second, analyses of inter-animal differences that focus only on data from animals for which >35 recordings were made, which should only be feasible with uniformly high-quality brain slices, are consistent with conclusions from analysis of the larger dataset. Third, the conditional R² values of electrophysiological features of L2PCs are much lower than those for SCs recorded under the same experimental conditions, suggesting that inter-animal variation may be specific to SCs and cannot be explained by slice conditions. Together, these analyses indicate that differences between animals remain after accounting for experimental and technical factors that might contribute to variation in the measured features of SCs.

The distribution of intrinsic properties is consistent with a continuous rather than a modular organization

The dorsoventral organization of SC integrative properties is well established, but whether it reflects within-animal variation around a small number of discrete set points, and hence a modular organization, is unclear. To evaluate modularity, we used datasets with n ≥ 34 SCs (N = 15 mice, median age = 37 days, age range = 18–43 days). We focus initially on rheobase, which is the property with the strongest correlation to dorsoventral location, and resonant frequency, which is related to the oscillatory dynamics underlying dorsoventral tuning in some models of grid firing.
For n ≥ 34 SCs, we expect that if properties are modular, this would be detected by the modified gap statistic in at least 50% of animals. By contrast, we find that for datasets from the majority of animals, the modified gap statistic identifies only a single mode in the distribution of rheobase values (N = 13/15) and of resonant frequencies (N = 14/15), indicating that these properties have a continuous rather than a modular distribution. Consistent with this, smoothed distributions did not show clearly separated peaks for either property. The mean and 95% confidence interval for the probability of evaluating a dataset as clustered (p_detect) were 0.133 and 0.02–0.4 for rheobase, and 0.067 and 0.002–0.32 for resonant frequency. These values of p_detect were not significantly different from the proportions expected given the false-positive rate of 0.1 in the complete absence of clustering (p=0.28 and 0.66, binomial test). Thus, the rheobase and resonant frequency of SCs, although depending strongly on a neuron's dorsoventral position, do not have a detectable modular organization. When we investigated the other measured integrative properties, we also failed to find evidence for modularity. For any given property, at most 3 out of 15 mice were evaluated as having a clustered organization using the modified gap statistic. This does not differ significantly from the proportion expected by chance when no modularity is present (p>0.05, binomial test). Consistent with this, the average proportion of datasets evaluated as modular across all measured features was 0.072 ± 0.02 (± SEM), which is similar to the expected false-positive rate. By contrast, the properties of grid firing fields previously recorded with tetrodes in behaving animals were detected as having a modular organization using the modified gap statistic.
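The chance-level comparison in this paragraph amounts to a one-line binomial test. The counts below are taken from the rheobase result (2 of 15 animals flagged as modular against a false-positive rate of 0.1); the exact test convention used in the paper may differ in detail, so the p-value here need not match the reported values.

```python
from scipy.stats import binomtest

# 13/15 animals were unimodal for rheobase, so 2/15 were flagged as modular.
# Does 2/15 exceed what a false-positive rate of 0.1 alone would produce?
res = binomtest(k=2, n=15, p=0.1, alternative="greater")
# res.pvalue ≈ 0.45, well above 0.05: the modular calls are consistent
# with false positives alone.
```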
For seven grid-cell datasets with n ≥ 20, the mean for p_detect is 0.86, with a 95% confidence interval of 0.42 to 0.996. We note here that discontinuity algorithms that were previously used to assess the modularity of grid field properties did indicate significant modularity in the majority of the intrinsic properties measured in our dataset (N = 13/15 and N = 12/15, respectively), but this was attributable to false positives resulting from the relatively even sampling of recording locations. Therefore, we conclude that it is unlikely that any of the intrinsic integrative properties of SCs that we examined have an organization within individual animals resembling the modular organization of grid cells in behaving animals.

Multiple sources of variance contribute to diversity in stellate cell intrinsic properties

Finally, because many of the measured electrophysiological features of SCs emerge from shared ionic mechanisms, we asked whether dorsoventral tuning reflects a single core mechanism and whether inter-animal differences are specific to this mechanism or manifest more generally. Estimation of conditional independence for measurements at the level of individual neurons or individual animals was consistent with the expectation that particular classes of membrane ion channels influence multiple electrophysiologically measured features. The first five dimensions of a principal components analysis (PCA) of all measured electrophysiological features accounted for almost 80% of the variance. Examination of the rotations used to generate the principal components suggested relationships between individual features that are consistent with our evaluation of the conditional independence structure of the measured features.
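The dimensionality-reduction step can be sketched as follows. The data here are synthetic, two hypothetical latent mechanisms driving 12 correlated features, echoing the idea that shared ionic mechanisms shape multiple measurements; only the workflow (z-scoring each feature, then PCA via singular value decomposition) mirrors the analysis, while the factor structure and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_features, n_factors = 836, 12, 2
latent = rng.normal(size=(n_cells, n_factors))       # hypothetical shared mechanisms
loadings = rng.normal(size=(n_factors, n_features))  # how mechanisms load on features
X = latent @ loadings + 0.3 * rng.normal(size=(n_cells, n_features))

Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score each feature before PCA
_, s, _ = np.linalg.svd(Xz, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)         # variance ratio per principal component
```

With two latent factors, the first two components dominate `explained`; in the real dataset, the first five components accounted for almost 80% of the variance, consistent with a small number of shared underlying mechanisms.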
When we fit the principal components using mixed models with location as a fixed effect and animal identity as a random effect, we found that the first two components depended significantly on dorsoventral location (marginal R² = 0.50 and 0.09; adjusted p=1.09×10⁻¹⁵ and 1.05×10⁻⁴, respectively). Thus, the dependence of multiple electrophysiological features on dorsoventral position may be reducible to two core mechanisms that together account for much of the variability between SCs in their intrinsic electrophysiology. Is inter-animal variation present in the PCA dimensions that account for dorsoventral variation? The intercept, but not the slope, of the dependence of the first two principal components on dorsoventral position depended on housing (adjusted p=0.039 and 0.027). After accounting for housing, the first two principal components were still better fit by models that include animal identity as a random effect (adjusted p=3.3×10⁻⁹ and 4.1×10⁻⁸⁶), indicating remaining inter-animal differences in these components. Nine of the next ten higher-order principal components did not depend on housing (adjusted p>0.1), while eight differed significantly between animals (adjusted p<0.05). Together, these analyses indicate that the dorsoventral organization of multiple electrophysiological features of SCs is captured by two principal components, suggesting two main sources of variance, both of which are dependent on experience. Significant inter-animal variation in the major sources of variance remains after accounting for experience and experimental parameters.

Phenotypic variation is found across many areas of biology, but has received little attention in investigations of mammalian nervous systems. We find unexpected inter-animal variability in SC properties, suggesting that the integrative properties of neurons are determined by set points that differ between animals and are controlled at a circuit level. Continuous, location-dependent organization of set points for SC integrative properties provides new constraints on models for grid cell firing.
More generally, the existence of inter-animal differences in set points has implications for experimental design and raises new questions about how the integrative properties of neurons are specified.

A conceptual framework for within cell type variability

Theoretical models suggest how different cell types can be generated by varying target concentrations of intracellular Ca 2+ or rates of ion channel expression . The within cell type variability predicted by these models arises from different initial conditions and may explain the variability in our data between neurons from the same animal at the same dorsoventral location . By contrast, the dependence of integrative properties on position and their variation between animals implies additional mechanisms that operate at the circuit level . In principle, this variation could be accounted for by inter-animal differences in dorsoventrally tuned or spatially uniform factors that influence ion channel expression or target points for intracellular Ca 2+ . The mechanisms for within cell type variability that are suggested by our results may differ from inter-animal variation described in invertebrate nervous systems. In invertebrates, inter-animal variability is between properties of individual identified neurons , whereas in mammalian nervous systems, neurons are not individually identifiable and the variation that we describe here is at the level of cell populations. From a developmental perspective in which cell identity is considered as a trough in a state-landscape through which each cell moves , variation in the population of neurons of the same type could be conceived as cell autonomous deviations from set points corresponding to the trough . Our finding that variability among neurons of the same type manifests between as well as within animals could be explained by differences between animals in the position of the trough or set point in the developmental landscape .
Our comparison of neurons from animals in standard and large cages provides evidence for the idea that within cell-type excitable properties are modified by experience . For example, granule cells in the dentate gyrus that receive input from SCs increase their excitability when animals are housed in enriched environments . Our experiments differ in that we increased the size of the environment with the goal of activating more ventral grid cells, whereas previous enrichment experiments have focused on increasing the environmental complexity and availability of objects for exploration. Further investigation will be required to dissociate the influence of each factor on excitability.

Implications of continuous dorsoventral organization of stellate cell integrative properties for grid cell firing

Dorsoventral gradients in the electrophysiological features of SCs have stimulated cellular models for the organization of grid firing . Increases in spatial scale following deletion of HCN1 channels , which in part determine the dorsoventral organization of SC integrative properties , support a relationship between the electrophysiological properties of SCs and grid cell spatial scales. Our data argue against models that explain this relationship through single cell computations , as in this case, the modularity of integrative properties of SCs is required to generate modularity of grid firing. A continuous dorsoventral organization of the electrophysiological properties of SCs could support the modular grid firing generated by self-organizing maps or by synaptic learning mechanisms . It is less clear how a continuous gradient would affect the organization of grid firing predicted by continuous attractor network models, which can instead account for modularity by limiting synaptic interactions between modules . Modularity of grid cell firing could also arise through the anatomical clustering of calbindin-positive L2PCs .
Because many SCs do not appear to generate grid codes and as the most abundant functional cell type in the MEC appears to be non-grid spatial neurons , the continuous dorsoventral organization of SC integrative properties may also impact grid firing indirectly through modulation of these codes. Our results add to previous comparisons of medially and laterally located SCs . The similar dorsoventral organization of subthreshold integrative properties of SCs from medial and lateral parts of the MEC appears consistent with the organization of grid cell modules recorded in behaving animals . How mediolateral differences in firing properties might contribute to spatial computations within the MEC is unclear. The continuous dorsoventral variation of the electrophysiological features of SCs suggested by our analysis is consistent with continuous dorsoventral gradients in gene expression along layer 2 of the MEC . For example, labelling of the mRNA and protein for the HCN1 ion channel suggests a continuous dorsoventral gradient in its expression . It is also consistent with single-cell RNA sequencing analysis of other brain areas, which indicates that although the expression profiles for some cell types cluster around a point in feature space, others lie along a continuum . It will be interesting in future to determine whether gene expression continua establish corresponding continua of electrophysiological features .

Functional consequences of within cell type inter-animal variability

What are the functional roles of inter-animal variability? In the crab stomatogastric ganglion, inter-animal variation correlates with circuit performance . Accordingly, variation in intrinsic properties of SCs might correlate with differences in grid firing or behaviors that rely on SCs . It is interesting in this respect that there appear to be inter-animal differences in the spatial scale of grid modules (Figure 5 of ).
Modification of grid field scaling following deletion of HCN1 channels is also consistent with this possibility . Alternatively, inter-animal differences may reflect multiple ways to achieve a common higher-order phenotype. According to this view, coding of spatial location by SCs would not differ between animals despite lower level variation in their intrinsic electrophysiological features. This is related to the idea of degeneracy at the level of single-cell electrophysiological properties , except that here the electrophysiological features differ between animals whereas the higher-order circuit computations may nevertheless be similar. In conclusion, our results identify substantial within cell type variation in neuronal integrative properties that manifests between as well as within animals. This has implications for experimental design and model building as the distribution of replicates from the same animal will differ from those obtained from different animals . An important future goal will be to distinguish causes of inter-animal variation. Many behaviors are characterized by substantial inter-animal variation (e.g. ), which could result from variation in neuronal integrative properties, or could drive this variation. For example, it is possible that external factors such as social interactions may affect brain circuitry , although these effects appear to be focused on frontal cortical structures rather than circuits for spatial computations . Alternatively, stochastic mechanisms operating at the population level may drive the emergence of inter-animal differences during the development of SCs . Addressing these questions may turn out to be critical to understanding the relationship between cellular biophysics and circuit-level computations in cognitive circuits .
Mouse strains

All experimental procedures were performed under a United Kingdom Home Office license and with approval of the University of Edinburgh’s animal welfare committee. Recordings of many SCs per animal used C57Bl/6J mice (Charles River). Recordings targeting calbindin cells used a Wfs1 Cre line ( Wfs1 -Tg3-CreERT2) obtained from Jackson Labs (Strain name: B6;C3-Tg( Wfs1 -cre/ERT2)3Aibs/J; stock number:009103) crossed to RCE:loxP (R26R CAG-boosted EGFP) reporter mice (described in ).
To promote expression of Cre in the mice, tamoxifen (Sigma, 20 mg/ml in corn oil) was administered on three consecutive days by intraperitoneal injections, approximately 1 week before experiments. Mice were group housed in a 12 hr light/dark cycle with unrestricted access to food and water (light on 07.30–19.30 hr). Mice were usually housed in standard 0.2 × 0.37 m cages that contained a cardboard roll for enrichment. A subset of the mice was instead housed from pre-weaning ages in a larger 2.4 × 1.2 m cage that was enriched with up to 15 bright plastic objects and eight cardboard rolls . Thus, the large cages had more items but at a slightly lower density (large cages — up to 1 item per 0.125 m 2 ; standard cages — 1 item per 0.074 m 2 ). All experiments in the standard cage used male mice. For experiments in the large cage, two mice were female, six mice were male and one was not identified. Because we did not find significant effects of sex on individual electrophysiological properties, all mice were included in the analyses reported in the text. When only male mice were included, the effects of housing on the first principal component remained significant, whereas the effects of housing on individual electrophysiological properties no longer reached statistical significance after correcting for multiple comparisons. Additional analyses that consider only male mice are provided in the code associated with the manuscript.

Slice preparation

Methods for preparation of parasagittal brain slices containing medial entorhinal cortex were based on procedures described previously . Briefly, mice were sacrificed by cervical dislocation and their brains carefully removed and placed in cold (2–4°C) modified ACSF, with composition (in mM): NaCl 86, NaH 2 PO 4 1.2, KCl 2.5, NaHCO 3 25, glucose 25, sucrose 75, CaCl 2 0.5, and MgCl 2 7. Brains were then hemisected and sectioned, also in modified ACSF at 4–8°C, using a vibratome (Leica VT1200S).
To minimize variation in the slicing angle, the hemi-section was cut along the midline of the brain and the cut surface of the brain was glued to the cutting block. After cutting, brains were placed at 36°C for 30 min in standard ACSF, with composition (in mM): NaCl 124, NaH 2 PO 4 1.2, KCl 2.5, NaHCO 3 25, glucose 20, CaCl 2 2, and MgCl 2 1. They were then allowed to cool passively to room temperature. All slices were prepared by the same experimenter (HP), who followed the same procedure on each day.

Recording methods

Whole-cell patch-clamp recordings were made between 1 and 10 hr after slice preparation using procedures described previously . Recordings were made from slices perfused in standard ACSF maintained at 34–36°C. In these conditions, we observe spontaneous fast inhibitory and excitatory postsynaptic potentials, but do not find evidence for tonic GABAergic or glutamatergic currents. Patch electrodes were filled with the following intracellular solution (in mM): K gluconate 130, KCl 10, HEPES 10, MgCl 2 2, EGTA 0.1, Na 2 ATP 2, Na 2 GTP 0.3 and NaPhosphocreatine 10. The open tip resistance was 4–5 MΩ, all seal resistances were >2 GΩ and series resistances were <30 MΩ. Recordings were made in current clamp mode using Multiclamp 700B amplifiers (Molecular Devices, Sunnyvale, CA, USA) connected to PCs via Instrutech ITC-18 interfaces (HEKA Elektronik, Lambrecht, Germany) and using Axograph X acquisition software ( http://axographx.com/ ). Pipette capacitance and series resistances were compensated using the capacitance neutralization and bridge-balance amplifier controls. An experimentally measured liquid junction potential of 12.9 mV was not corrected for. Stellate cells were identified by their large sag response and the characteristic waveform of their action potential after hyperpolarization (see ; ; ; ). To maximize the number of cells recorded per animal, we adopted two strategies.
First, to minimize the time required to obtain data from each recorded cell, we measured electrophysiological features using a series of three short protocols following initiation of stable whole-cell recordings. We used responses to sub-threshold current steps to estimate passive membrane properties , a frequency modulated sinusoidal current waveform (ZAP waveform) to estimate impedance amplitude profiles , and a linear current ramp to estimate the action potential threshold and firing properties . From analysis of data obtained with these protocols, we obtained 12 quantitative measures that describe the sub- and supra-threshold integrative properties of each recorded cell . Second, for the majority of mice, two experimenters made recordings in parallel from neurons in two sagittal brain sections from the same hemisphere. The median dorsoventral extent of the recorded SCs was 1644 µm (range 0–2464 µm). Each experimenter aimed to sample recording locations evenly across the dorsoventral extent of the MEC, and for most animals, each experimenter recorded sequentially from opposite extremes of the dorsoventral axis. Each experimenter varied the starting location for recording between animals. For several features, the direction of recording affected their measured dependence on dorsoventral location, but the overall dependence of these features on dorsoventral location was robust to this effect .

Measurement of electrophysiological features and neuronal location

Electrophysiological recordings were analyzed in Matlab (Mathworks) using a custom-written semi-automated pipeline.
We defined each feature as follows (see also ; ): resting membrane potential was the mean of the membrane potential during the 1 s prior to injection of the current steps used to assess subthreshold properties; input resistance was the steady-state voltage response to the negative current steps divided by their amplitude; membrane time constant was the time constant of an exponential function fit to the initial phase of membrane potential responses to the negative current steps; the sag coefficient was the steady-state divided by the peak membrane potential response to the negative current steps; resonance frequency was the frequency at which the peak membrane potential impedance was found to occur; resonance magnitude was the ratio between the peak impedance and the impedance at a frequency of 1 Hz; action potential threshold was calculated from responses to positive current ramps as the membrane potential at which the first derivative of the membrane potential crossed 20 mV ms −1 averaged across the first five spikes, or fewer if fewer spikes were triggered; rheobase was the ramp current at the point when the threshold was crossed on the first spike; spike maximum was the mean peak amplitude of the action potentials triggered by the positive current ramp; spike width was the duration of the action potentials measured at the voltage corresponding to the midpoint between the spike threshold and spike maximum; the AHP minimum was the negative peak membrane potential of the after hyperpolarization following the first action potential when a second action potential also occurred; and the F-I slope was the linear slope of the relationship between the spike rate and the injected ramp current over a 500 ms window. The location of each recorded neuron was estimated as described previously . Following each recording, a low magnification image was taken of the slice with the patch-clamp electrode at the recording location.
The image was calibrated and then the distance measured from the dorsal border of the MEC along the border of layers 1 and 2 to the location of the recorded cell.

Analysis of location-dependence, experience-dependence and inter-animal differences

Analyses of location-dependence and inter-animal differences used R 3.4.3 (R Core Team, Vienna, Austria) and RStudio 1.1.383 (RStudio Inc, Boston, MA). To fit linear mixed effect models, we used the lme4 package . Features of interest were included as fixed effects and animal identity was included as a random effect. All reported analyses are for models with the minimal a priori random effect structure, in other words the random effect was specified with uncorrelated slope and intercept. We also evaluated models in which only the intercept, or correlated intercept and slope were specified as the random effect. Model assessment was performed using Akaike Information Criterion (AIC) scores. In general, models with either random slope and intercept, or correlated random slope and intercept, had lower AIC scores than random intercept only models, indicating a better fit to the data. We used the former set of models for all analyses of all properties for consistency and because a maximal effect structure may be preferable on theoretical grounds . We evaluated assumptions including linearity, normality, homoscedasticity and influential data points. For some features, we found modest deviations from these assumptions that could be remedied by handling non-linearity in the data using a copula transformation. Model fits were similar following transformation of the data. However, we focus here on analyses of the untransformed data because the physical interpretation of the resulting values for data points is clearer. To evaluate the location-dependence of a given feature, p-values were calculated using a χ 2 test comparing models to null models with no location information but identical random effect specification.
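The model comparison just described can be sketched outside R. The paper fit models with lme4; below is a Python approximation using statsmodels on simulated data. The per-animal effects, slopes, and noise levels are invented for illustration, and statsmodels' `re_formula` fits correlated random intercepts and slopes rather than the uncorrelated structure used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data: one feature measured in 25 cells per animal, with a
# dorsoventral (dv) gradient plus per-animal random intercepts and slopes.
rows = []
for animal in range(8):
    b0 = -65 + rng.normal(0, 2)   # per-animal intercept
    b1 = 4 + rng.normal(0, 0.5)   # per-animal dorsoventral slope
    for _ in range(25):
        dv = rng.uniform(0, 2)    # position along the dorsoventral axis (mm)
        rows.append({"animal": animal, "dv": dv,
                     "feature": b0 + b1 * dv + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Full model: dv as a fixed effect, random intercept and slope per animal.
# Fit by ML (reml=False) so the likelihood-ratio test of fixed effects is valid.
full = smf.mixedlm("feature ~ dv", df, groups=df["animal"],
                   re_formula="~dv").fit(reml=False)
# Null model: identical random-effect structure but no location fixed effect.
null = smf.mixedlm("feature ~ 1", df, groups=df["animal"],
                   re_formula="~dv").fit(reml=False)

lr = 2 * (full.llf - null.llf)
p_location = stats.chi2.sf(lr, df=1)   # chi-squared test with 1 df
```

ML rather than REML fitting is used here because REML likelihoods are not comparable across models with different fixed effects.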
To calculate marginal and conditional R 2 of mixed effect models, we used the MuMIn package . To evaluate additional fixed effects, we used Type II Wald χ 2 tests provided by the car package . To compare mixed effect with equivalent linear models, we used a χ 2 test to compare the calculated deviance for each model. For clarity, we have reported key statistics in the main text and provide full test statistics in the Supplemental Tables. In addition, the code from which the analyses can be fully reproduced is available at https://github.com/MattNolanLab/Inter_Intra_Variation ( ; copy archived at https://github.com/elifesciences-publications/Inter_Intra_Variation ). To evaluate partial correlations between features, we used the function cor2pcor from the R package corpcor . Principal components analysis used core R functions.

Implementation of tests for modularity

To establish statistical tests to distinguish ‘modular’ from ‘continuous’ distributions given relatively few observations, we classified datasets as continuous or modular by modifying the gap statistic algorithm . The gap statistic estimates the number of clusters (k est ) that best account for the data in any given dataset . However, this estimate may be prone to false positives, particularly where the numbers of observations are low. We therefore introduced a thresholding mechanism for tuning the sensitivity of the algorithm so that the false-positive rate, which is the rate of misclassifying datasets drawn from continuous (uniform) distributions as ‘modular’, is low, constant across different numbers of cluster modes and insensitive to dataset size . With this approach, we are able to estimate whether a dataset is best described as lacking modularity (k est = 1), or having a given number of modes (k est > 1). Below, we describe tests carried out to validate the approach.
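The standard gap statistic that the modified algorithm builds on can be sketched as follows. This is a minimal 1D implementation of the original Tibshirani et al. procedure with the usual one-standard-error selection rule; the thresholding modification described above is not reproduced here, and the deterministic quantile-based k-means initialisation is a simplification.

```python
import numpy as np

def kmeans_1d(x, k, iters=50):
    """Lloyd's algorithm on 1D data with deterministic quantile initialisation."""
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels

def log_wk(x, labels, k):
    """Log of the pooled within-cluster sum of squares."""
    w = sum(np.sum((x[labels == j] - x[labels == j].mean()) ** 2)
            for j in range(k) if np.any(labels == j))
    return np.log(w)

def gap_k_est(x, k_max=8, n_ref=100, seed=0):
    """Estimate the number of clusters with the gap statistic (1-SE rule)."""
    rng = np.random.default_rng(seed)
    lo, hi = x.min(), x.max()
    gaps, s = [], []
    for k in range(1, k_max + 1):
        obs = log_wk(x, kmeans_1d(x, k), k)
        # Reference dispersion under the uniform null, matched to the data range.
        refs = np.array([log_wk(r, kmeans_1d(r, k), k)
                         for r in rng.uniform(lo, hi, size=(n_ref, x.size))])
        gaps.append(refs.mean() - obs)
        s.append(refs.std() * np.sqrt(1 + 1 / n_ref))
    for k in range(1, k_max):
        if gaps[k - 1] >= gaps[k] - s[k]:   # smallest k with gap(k) >= gap(k+1) - s(k+1)
            return k
    return k_max

# Two modes separated by five standard deviations, as in the simulated tests.
rng = np.random.default_rng(42)
bimodal = np.concatenate([rng.normal(0, 1, 20), rng.normal(5, 1, 20)])
k_est = gap_k_est(bimodal)
```

The paper's modification replaces the one-standard-error rule with a threshold tuned to hold the false-positive rate at a fixed level.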
To illustrate the sensitivity and specificity of the modified gap statistic algorithm, we applied it to simulated datasets drawn either from a uniform distribution (k = 1, n = 40) or from a bimodal distribution with separation between the modes of five standard deviations (k = 2, n = 40, sigma = 5) . We set the thresholding mechanism so that k est for each distinct k (where k est ≥ 2) has a false-positive rate of 0.01. In line with this, testing for 2 ≤ k est ≤ 8 (the maximum k expected to occur in grid spacing in the MEC), across multiple (N = 1000) simulated datasets drawn from the uniform distribution, produced a low false-positive rate (P(k est ≥ 2) ≈ 0.07), whereas when the data were drawn from the bimodal distribution, the ability to detect modular organization (p detect ) was good (P(k est ≥ 2) ≈ 0.8) . The performance of the statistic improved with larger separation between clusters and with greater numbers of data points per dataset and is relatively insensitive to the numbers of clusters . The algorithm maintains high rates of p detect when modes have varying densities and when sigma between modes varies in a manner similar to grid spacing data .

Analysis of extracellular recording data from other laboratories

Recently described algorithms address the problem of identifying modularity when data are sampled from multiple locations and data values vary as a function of location, as is the case for the mean spacing of grid fields for cells at different dorsoventral locations recorded in behaving animals using tetrodes . They generate log-normalized discontinuity (which we refer to here as lnDS) or discreteness scores, which are the log of the ratio of discontinuity or discreteness scores for the data points of interest and for the sampling locations, with positive values interpreted as evidence for clustering .
However, in simulations of datasets generated from a uniform distribution with evenly spaced recording locations, we find that the lnDS is always greater than zero . This is because evenly spaced locations result in a discontinuity score that approaches zero, and therefore the log ratio of the discontinuity of the data to this score will be positive. Thus, for evenly spaced data, the lnDS is guaranteed to produce false-positive results. When locations are instead sampled from a uniform distribution, approximately half of the simulated datasets have a log discontinuity ratio greater than 0 , which in previous studies would be interpreted as evidence of modularity . Similar discrepancies arise for the discreteness measure . To address these issues, we introduced a log discontinuity ratio threshold, so that the discontinuity method could be matched to produce a similar false-positive rate to the adapted gap statistic algorithm used in the example above. After including this modification, we found that for a given false-positive rate, the adapted gap statistic is more sensitive at detecting modularity in the simulated datasets . To establish whether the modified gap statistic detects clustering in experimental data, we applied it to previously published grid cell data recorded with tetrodes from awake behaving animals . We find that the modified gap statistic identified clustered grid spacing for 6 of 7 animals previously identified as having grid modules and with n ≥ 20. For these animals, the number of modules was similar (but not always identical) to the number of previously identified modules . By contrast, the modified gap statistic does not identify clustering in five of six sets of recording locations, confirming that the grid clustering is likely not a result of uneven sampling of locations (we could not test the seventh as location data were not available). 
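The even-spacing failure mode described above can be reproduced with a toy score. Here 'discontinuity' is stood in for by the variance of gaps between sorted values normalised by the squared mean gap; this is an illustrative assumption, not the published score, but any neighbour-gap-based score behaves the same way: near-evenly spaced locations drive the denominator of the ratio towards zero, so the log ratio comes out positive even for uniformly distributed, non-modular data.

```python
import numpy as np

def discontinuity(x):
    """Toy neighbour-gap score: variance of sorted gaps / squared mean gap.
    (Illustrative stand-in for the published discontinuity measure.)"""
    gaps = np.diff(np.sort(x))
    return np.var(gaps) / np.mean(gaps) ** 2

rng = np.random.default_rng(0)
n = 40
# Recording locations on an almost perfectly even grid (a tiny jitter avoids
# division by exactly zero), paired with data drawn from a uniform
# distribution, i.e. with no modularity at all.
locations = np.linspace(0, 1, n) + rng.normal(0, 1e-4, n)
data = rng.uniform(0, 1, n)

lnds = np.log(discontinuity(data) / discontinuity(locations))
# lnds > 0 here despite the data being non-modular: the even sampling of
# locations, not structure in the data, produces the positive score.
```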
The thresholded discontinuity score also detects clustering in the same five of the six tested sets of grid data. From the six grid datasets detected as clustered with the modified gap statistic, we estimated the separation between clusters by fitting the data with a mixture of Gaussians, with the number of modes set by the value of k obtained with the modified gap statistic. This analysis suggested that the largest spacing between contiguous modules in each mouse is always >5.6 standard deviations (mean = 20.5 ± 5.0 standard deviations). Thus, the modified gap statistic detects modularity within the grid system and indicates that previous descriptions of grid modularity are, in general, robust to the possibility of false positives associated with the discreteness and discontinuity methods.
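The mixture-of-Gaussians separation estimate described above can be sketched with scikit-learn. The values here are synthetic grid-spacing-like data (two modules five standard deviations apart; the means and SDs are invented for illustration), and the number of components is fixed at the k that the cluster-number estimate would supply in the real analysis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic "grid spacing" values from two modules (means and SDs invented).
spacing = np.concatenate([rng.normal(40, 2, 25), rng.normal(60, 2, 25)])

# Fit a two-component Gaussian mixture (in the real analysis, the number of
# components would come from the modified gap statistic).
gmm = GaussianMixture(n_components=2, random_state=0).fit(spacing.reshape(-1, 1))
means = np.sort(gmm.means_.ravel())
sds = np.sqrt(gmm.covariances_.ravel())

# Separation between contiguous modules in units of standard deviations.
separation = np.diff(means)[0] / sds.mean()
```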
Thus, the large cages had more items but at a slightly lower density (large cages: up to 1 item per 0.125 m²; standard cages: 1 item per 0.074 m²). All experiments in the standard cage used male mice. For experiments in the large cage, two mice were female, six mice were male, and one was not identified. Because we did not find significant effects of sex on individual electrophysiological properties, all mice were included in the analyses reported in the text. When only male mice were included, the effects of housing on the first principal component remained significant, whereas the effects of housing on individual electrophysiological properties no longer reached statistical significance after correcting for multiple comparisons. Additional analyses that consider only male mice are provided in the code associated with the manuscript. Methods for preparation of parasagittal brain slices containing medial entorhinal cortex were based on procedures described previously. Briefly, mice were sacrificed by cervical dislocation and their brains carefully removed and placed in cold (2–4°C) modified ACSF, with composition (in mM): NaCl 86, NaH2PO4 1.2, KCl 2.5, NaHCO3 25, glucose 25, sucrose 75, CaCl2 0.5, and MgCl2 7. Brains were then hemisected and sectioned, also in modified ACSF at 4–8°C, using a vibratome (Leica VT1200S). To minimize variation in the slicing angle, the hemisection was cut along the midline of the brain and the cut surface of the brain was glued to the cutting block. After cutting, slices were placed at 36°C for 30 min in standard ACSF, with composition (in mM): NaCl 124, NaH2PO4 1.2, KCl 2.5, NaHCO3 25, glucose 20, CaCl2 2, and MgCl2 1. They were then allowed to cool passively to room temperature. All slices were prepared by the same experimenter (HP), who followed the same procedure on each day. Whole-cell patch-clamp recordings were made between 1 and 10 hr after slice preparation using procedures described previously.
Recordings were made from slices perfused in standard ACSF maintained at 34–36°C. In these conditions, we observe spontaneous fast inhibitory and excitatory postsynaptic potentials but do not find evidence for tonic GABAergic or glutamatergic currents. Patch electrodes were filled with the following intracellular solution (in mM): K-gluconate 130, KCl 10, HEPES 10, MgCl2 2, EGTA 0.1, Na2ATP 2, Na2GTP 0.3, and Na-phosphocreatine 10. The open tip resistance was 4–5 MΩ, all seal resistances were >2 GΩ, and series resistances were <30 MΩ. Recordings were made in current-clamp mode using Multiclamp 700B amplifiers (Molecular Devices, Sunnyvale, CA, USA) connected to PCs via Instrutech ITC-18 interfaces (HEKA Elektronik, Lambrecht, Germany) and using Axograph X acquisition software ( http://axographx.com/ ). Pipette capacitance and series resistances were compensated using the capacitance neutralization and bridge-balance amplifier controls. An experimentally measured liquid junction potential of 12.9 mV was not corrected for. Stellate cells were identified by their large sag response and the characteristic waveform of their action potential afterhyperpolarization. To maximize the number of cells recorded per animal, we adopted two strategies. First, to minimize the time required to obtain data from each recorded cell, we measured electrophysiological features using a series of three short protocols following initiation of stable whole-cell recordings. We used responses to sub-threshold current steps to estimate passive membrane properties, a frequency-modulated sinusoidal current waveform (ZAP waveform) to estimate impedance amplitude profiles, and a linear current ramp to estimate the action potential threshold and firing properties. From analysis of data obtained with these protocols, we obtained 12 quantitative measures that describe the sub- and supra-threshold integrative properties of each recorded cell.
Second, for the majority of mice, two experimenters made recordings in parallel from neurons in two sagittal brain sections from the same hemisphere. The median dorsoventral extent of the recorded SCs was 1644 µm (range 0–2464 µm). Each experimenter aimed to sample recording locations evenly across the dorsoventral extent of the MEC, and for most animals, each experimenter recorded sequentially from opposite extremes of the dorsoventral axis. Each experimenter varied the starting location for recording between animals. For several features, the direction of recording affected their measured dependence on dorsoventral location, but the overall dependence of these features on dorsoventral location was robust to this effect. Electrophysiological recordings were analyzed in Matlab (Mathworks) using a custom-written semi-automated pipeline. We defined each feature as follows: resting membrane potential was the mean of the membrane potential during the 1 s prior to injection of the current steps used to assess subthreshold properties; input resistance was the steady-state voltage response to the negative current steps divided by their amplitude; membrane time constant was the time constant of an exponential function fit to the initial phase of membrane potential responses to the negative current steps; the sag coefficient was the steady-state divided by the peak membrane potential response to the negative current steps; resonance frequency was the frequency at which the peak membrane potential impedance occurred; resonance magnitude was the ratio between the peak impedance and the impedance at a frequency of 1 Hz; action potential threshold was calculated from responses to positive current ramps as the membrane potential at which the first derivative of the membrane potential crossed 20 mV ms−1, averaged across the first five spikes, or fewer if fewer spikes were triggered; rheobase was the ramp current at the point when the threshold was
crossed on the first spike; spike maximum was the mean peak amplitude of the action potentials triggered by the positive current ramp; spike width was the duration of the action potentials measured at the voltage corresponding to the midpoint between the spike threshold and spike maximum; the AHP minimum was the negative peak membrane potential of the afterhyperpolarization following the first action potential when a second action potential also occurred; and the F-I slope was the linear slope of the relationship between the spike rate and the injected ramp current over a 500 ms window. The location of each recorded neuron was estimated as described previously. Following each recording, a low-magnification image was taken of the slice with the patch-clamp electrode at the recording location. The image was calibrated and the distance then measured from the dorsal border of the MEC along the border of layers 1 and 2 to the location of the recorded cell. Analyses of location dependence and inter-animal differences used R 3.4.3 (R Core Team, Vienna, Austria) and RStudio 1.1.383 (RStudio Inc, Boston, MA). To fit linear mixed effect models, we used the lme4 package. Features of interest were included as fixed effects and animal identity was included as a random effect. All reported analyses are for models with the minimal a priori random effect structure; in other words, the random effect was specified with uncorrelated slope and intercept. We also evaluated models in which only the intercept, or correlated intercept and slope, were specified as the random effect. Model assessment was performed using Akaike Information Criterion (AIC) scores. In general, models with either random slope and intercept, or correlated random slope and intercept, had lower AIC scores than random-intercept-only models, indicating a better fit to the data.
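The threshold criterion in the feature definitions above (the membrane potential at which dV/dt first crosses 20 mV ms−1) lends itself to simple automated extraction. A minimal sketch in Python, using a synthetic, hypothetical voltage trace rather than the study's Matlab pipeline:

```python
def spike_threshold(v, dt_ms, dvdt_criterion=20.0):
    """Return (index, voltage in mV) of the sample preceding the first point
    where dV/dt exceeds dvdt_criterion (in mV/ms), or None if never crossed."""
    for i in range(1, len(v)):
        if (v[i] - v[i - 1]) / dt_ms >= dvdt_criterion:
            return i - 1, v[i - 1]
    return None

dt = 0.02  # ms per sample (hypothetical sampling interval)
# Hypothetical trace: slow 2 mV/ms depolarizing ramp from -70 mV up to -45 mV,
# followed by a fast 200 mV/ms action-potential upstroke.
ramp = [-70.0 + 2.0 * dt * i for i in range(626)]        # ends near -45 mV
upstroke = [-45.0 + 200.0 * dt * (i + 1) for i in range(10)]
trace = ramp + upstroke

idx, v_thr = spike_threshold(trace, dt)
print(idx, round(v_thr, 1))  # detects threshold at the -45 mV inflection
```

The same forward-difference scan generalizes to averaging the criterion voltage over the first few spikes, as described in the text.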
We used the former set of models for all analyses of all properties, for consistency and because a maximal effect structure may be preferable on theoretical grounds. We evaluated assumptions including linearity, normality, homoscedasticity, and influential data points. For some features, we found modest deviations from these assumptions that could be remedied by handling non-linearity in the data using a copula transformation. Model fits were similar following transformation of the data. However, we focus here on analyses of the untransformed data because the physical interpretation of the resulting values for data points is clearer. To evaluate the location dependence of a given feature, p-values were calculated using a χ² test comparing models to null models with no location information but identical random effect specification. To calculate marginal and conditional R² of mixed effect models, we used the MuMIn package. To evaluate additional fixed effects, we used Type II Wald χ² tests provided by the car package. To compare mixed effect models with equivalent linear models, we used a χ² test to compare the calculated deviance for each model. For clarity, we have reported key statistics in the main text and provide full test statistics in the Supplemental Tables. In addition, the code from which the analyses can be fully reproduced is available at https://github.com/MattNolanLab/Inter_Intra_Variation (copy archived at https://github.com/elifesciences-publications/Inter_Intra_Variation ). To evaluate partial correlations between features, we used the function cor2pcor from the R package corpcor. Principal components analysis used core R functions. To establish statistical tests to distinguish 'modular' from 'continuous' distributions given relatively few observations, we classified datasets as continuous or modular by modifying the gap statistic algorithm.
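The deviance-based χ² comparison against a null model, and the AIC comparison used for model selection, can be illustrated in a deliberately simplified fixed-effects-only form (ordinary least squares with a Gaussian likelihood on synthetic data; lme4's random-effects machinery is not reproduced here):

```python
import math

def ols_loglik_and_aic(x, y, with_slope):
    """Fit y ~ 1 (+ x) by least squares; return (log-likelihood, AIC).

    Uses the maximum-likelihood residual variance RSS/n, so
    lnL = -n/2 * (ln(2*pi*sigma2) + 1). Parameter count includes sigma2.
    """
    n = len(y)
    ybar = sum(y) / n
    if with_slope:
        xbar = sum(x) / n
        sxx = sum((xi - xbar) ** 2 for xi in x)
        slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
        rss = sum((yi - ybar - slope * (xi - xbar)) ** 2 for xi, yi in zip(x, y))
        k = 3  # intercept, slope, sigma2
    else:
        rss = sum((yi - ybar) ** 2 for yi in y)
        k = 2  # intercept, sigma2
    sigma2 = rss / n
    loglik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    return loglik, 2 * k - 2 * loglik

def chi2_sf_df1(stat):
    """Survival function of a chi-square with 1 df: P(X >= stat)."""
    return math.erfc(math.sqrt(stat / 2))

# Synthetic 'feature vs dorsoventral position' data with a real slope.
x = [float(i) for i in range(20)]
y = [2.0 * xi + math.sin(xi) for xi in x]    # slope 2 plus bounded 'noise'

ll1, aic1 = ols_loglik_and_aic(x, y, with_slope=True)
ll0, aic0 = ols_loglik_and_aic(x, y, with_slope=False)
lrt = 2 * (ll1 - ll0)                        # likelihood-ratio (deviance) statistic
print(aic1 < aic0, chi2_sf_df1(lrt) < 1e-6)  # location model wins decisively
```

In the study itself the two models being compared differ only in the location fixed effect while sharing an identical random effect specification; the deviance comparison logic is the same.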
The gap statistic estimates the number of clusters (k_est) that best account for the data in any given dataset. However, this estimate may be prone to false positives, particularly where the numbers of observations are low. We therefore introduced a thresholding mechanism for tuning the sensitivity of the algorithm so that the false-positive rate, which is the rate of misclassifying datasets drawn from continuous (uniform) distributions as 'modular', is low, constant across different numbers of cluster modes, and insensitive to dataset size. With this approach, we are able to estimate whether a dataset is best described as lacking modularity (k_est = 1) or as having a given number of modes (k_est > 1). Below, we describe tests carried out to validate the approach. To illustrate the sensitivity and specificity of the modified gap statistic algorithm, we applied it to simulated datasets drawn either from a uniform distribution (k = 1, n = 40) or from a bimodal distribution with separation between the modes of five standard deviations (k = 2, n = 40, sigma = 5). We set the thresholding mechanism so that k_est for each distinct k (where k_est ≥ 2) has a false-positive rate of 0.01. In line with this, testing for 2 ≤ k_est ≤ 8 (the maximum k expected to occur in grid spacing in the MEC) across multiple (N = 1000) simulated datasets drawn from the uniform distribution produced a low false-positive rate (P(k_est ≥ 2) ≈ 0.07), whereas when the data were drawn from the bimodal distribution, the ability to detect modular organization (p_detect) was good (P(k_est ≥ 2) ≈ 0.8). The performance of the statistic improved with larger separation between clusters and with greater numbers of data points per dataset, and is relatively insensitive to the number of clusters. The algorithm maintains high rates of p_detect when modes have varying densities and when sigma between modes varies in a manner similar to grid spacing data.
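As a rough illustration of the thresholded gap statistic idea, the sketch below clusters one-dimensional data exactly (brute-force contiguous partitions of the sorted values), compares log within-cluster dispersion against uniform reference samples, and reports k_est > 1 only when the gap improvement over k = 1 clears a fixed threshold. The threshold value, reference count, and datasets here are illustrative choices, not the calibrated values from the study:

```python
import math
import random

def within_ss(groups):
    """Total within-cluster sum of squared deviations."""
    total = 0.0
    for g in groups:
        m = sum(g) / len(g)
        total += sum((v - m) ** 2 for v in g)
    return total

def best_partition_ss(values, k):
    """Minimal within-cluster SS over all partitions of the sorted values
    into k contiguous, non-empty groups (exact for 1-D data)."""
    vals = sorted(values)
    n = len(vals)
    best = [float("inf")]

    def recurse(start, parts_left, groups):
        if parts_left == 1:
            best[0] = min(best[0], within_ss(groups + [vals[start:]]))
            return
        for cut in range(start + 1, n - parts_left + 2):
            recurse(cut, parts_left - 1, groups + [vals[start:cut]])

    recurse(0, k, [])
    return best[0]

def k_estimate(values, k_max=3, n_ref=20, threshold=1.0, seed=0):
    """Thresholded gap statistic: return the estimated number of modes,
    reporting k > 1 only when the gap improvement over k = 1 clears the
    threshold (a stand-in for the paper's false-positive calibration)."""
    rng = random.Random(seed)
    lo, hi = min(values), max(values)
    gaps = {}
    for k in range(1, k_max + 1):
        ref_logs = [
            math.log(best_partition_ss([rng.uniform(lo, hi) for _ in values], k))
            for _ in range(n_ref)
        ]
        gaps[k] = sum(ref_logs) / n_ref - math.log(best_partition_ss(values, k))
    k_best = max(gaps, key=gaps.get)
    return k_best if gaps[k_best] - gaps[1] > threshold else 1

bimodal = [0.05 * i for i in range(20)] + [100.0 + 0.05 * i for i in range(20)]
evenly_spaced = [2.5 * i for i in range(40)]
print(k_estimate(bimodal), k_estimate(evenly_spaced))  # modular vs continuous
```

Without the threshold, the bare argmax over gap values can report k > 1 for data that are in fact continuous, which is the false-positive failure mode the modification targets.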
Recently described algorithms address the problem of identifying modularity when data are sampled from multiple locations and data values vary as a function of location, as is the case for the mean spacing of grid fields for cells at different dorsoventral locations recorded in behaving animals using tetrodes. They generate log-normalized discontinuity (which we refer to here as lnDS) or discreteness scores, which are the log of the ratio of discontinuity or discreteness scores for the data points of interest and for the sampling locations, with positive values interpreted as evidence for clustering. However, in simulations of datasets generated from a uniform distribution with evenly spaced recording locations, we find that the lnDS is always greater than zero. This is because evenly spaced locations result in a discontinuity score that approaches zero, and therefore the log ratio of the discontinuity of the data to this score will be positive. Thus, for evenly spaced data, the lnDS is guaranteed to produce false-positive results. When locations are instead sampled from a uniform distribution, approximately half of the simulated datasets have a log discontinuity ratio greater than 0, which in previous studies would be interpreted as evidence of modularity. Similar discrepancies arise for the discreteness measure. To address these issues, we introduced a log discontinuity ratio threshold, so that the discontinuity method could be matched to produce a similar false-positive rate to the adapted gap statistic algorithm used in the example above. After including this modification, we found that for a given false-positive rate, the adapted gap statistic is more sensitive at detecting modularity in the simulated datasets. To establish whether the modified gap statistic detects clustering in experimental data, we applied it to previously published grid cell data recorded with tetrodes from awake behaving animals.
We find that the modified gap statistic identified clustered grid spacing for 6 of 7 animals previously identified as having grid modules and with n ≥ 20. For these animals, the number of modules was similar (but not always identical) to the number of previously identified modules. By contrast, the modified gap statistic does not identify clustering in five of six sets of recording locations, confirming that the grid clustering is likely not a result of uneven sampling of locations (we could not test the seventh as location data were not available). The thresholded discontinuity score also detects clustering in the same five of the six tested sets of grid data. From the six grid datasets detected as clustered with the modified gap statistic, we estimated the separation between clusters by fitting the data with a mixture of Gaussians, with the number of modes set by the value of k obtained with the modified gap statistic. This analysis suggested that the largest spacing between contiguous modules in each mouse is always >5.6 standard deviations (mean = 20.5 ± 5.0 standard deviations). Thus, the modified gap statistic detects modularity within the grid system and indicates that previous descriptions of grid modularity are, in general, robust to the possibility of false positives associated with the discreteness and discontinuity methods.
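Once cluster assignments are available (from the mixture fit or any other method), the separation between contiguous modules in standard-deviation units reduces to a small calculation. A sketch with hypothetical module data (the Gaussian-mixture fit itself is not reproduced here):

```python
def module_separations(clusters):
    """Given 1-D clusters, return the gap between each pair of contiguous
    cluster means in units of their pooled standard deviation."""
    stats = []
    for c in clusters:
        m = sum(c) / len(c)
        sd = (sum((v - m) ** 2 for v in c) / len(c)) ** 0.5
        stats.append((m, sd))
    stats.sort()  # order clusters by mean
    seps = []
    for (m1, s1), (m2, s2) in zip(stats, stats[1:]):
        pooled = ((s1 ** 2 + s2 ** 2) / 2) ** 0.5
        seps.append((m2 - m1) / pooled)
    return seps

# Hypothetical grid-spacing modules near 40 cm and 60 cm with small spread.
module_a = [38.0, 39.0, 40.0, 41.0, 42.0]
module_b = [58.0, 59.0, 60.0, 61.0, 62.0]
print(module_separations([module_a, module_b]))  # about 14 SDs apart
```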
Osteoarthritis: degenerative changes or adaptive changes? Brief educational advice in the primary care consultation
1c9d1262-bf7b-42ef-93d4-5f10a5bf1e35
11720430
Patient Education as Topic[mh]
There is currently no curative treatment and no disease-modifying drugs; first-line treatment is non-pharmacological, based on education and physical exercise.
Non-pharmacological treatment
Many people with osteoarthritis are known not to receive treatment that is appropriate according to current evidence, namely non-pharmacological treatment based on education and physical exercise. The 2023 EULAR review on knee and hip osteoarthritis recommends a biopsychosocial approach and shared decision-making. Its level 1A evidence recommendations include: offering a multicomponent treatment plan based on non-pharmacological treatment; providing education and self-management advice at clinical encounters; proposing an exercise program according to the patient's availability and preferences; and providing education on the importance of maintaining an appropriate weight, addressing obesity and overweight when present. According to the NICE guideline, tailored exercise should be recommended to all people with osteoarthritis and, in some cases, should be supervised or delivered within a structured educational program. The guideline also recommends advising patients that, although joint pain may worsen at first, regular physical exercise benefits the joints and, in the long term, improves function, quality of life, and pain. For people with overweight or obesity, weight loss would be recommended, emphasizing that any loss is beneficial, however small. It also reminds us that there is insufficient evidence to recommend manual therapy, acupuncture, or dry needling, and no evidence to support offering electrotherapy or ultrasound when these techniques are not used as adjuncts to physical exercise. Walking aids, such as canes, can be considered.
Pharmacological and interventional treatment
Pharmacological treatment, according to the NICE guideline, can be considered alongside non-pharmacological treatment and to facilitate physical exercise, always at the lowest possible dose and for the shortest possible time.
• Topical NSAIDs: can be offered to people with knee osteoarthritis and considered for people with other affected joints.
• Oral NSAIDs: if the topical route is not effective, oral use can be considered, always weighing benefit against risk and the need for gastroprotection.
• Paracetamol: do not offer routinely, except for occasional or short-term use when NSAIDs are contraindicated or ineffective.
• Glucosamine: do not offer, given its lack of evidence.
• Opioids: do not routinely offer weak opioids, except for occasional or short-term use when NSAIDs are contraindicated or ineffective. Do not offer strong opioids, given their negative benefit/risk balance.
• Antidepressants: a Cochrane review on the use of antidepressants for persistent pain in people with knee and hip osteoarthritis concludes that there is high-certainty evidence that antidepressants for knee osteoarthritis do not produce a significant improvement in pain or function, with a number needed to treat for an additional beneficial outcome of 6 and a number needed to treat for an additional harmful outcome of 7. However, a small number of people would have an important improvement of 50% or more (the cause of pain that responds to this treatment may be present in only a small number of people, so patients should be selected carefully when considering its use).
Interventional techniques: according to the NICE guideline, intra-articular corticosteroid injections can be considered, to facilitate physical activity, when other pharmacological treatments have not been effective, while explaining that at best they produce short-term improvement. The guideline does not recommend offering intra-articular hyaluronic acid injections. With a high level of evidence, it recommends against arthroscopy for lavage or debridement (in degenerative knee conditions, including degenerative meniscal tears, arthroscopy has little or no clinically important effect on pain or function; it may slightly increase adverse effects, and whether it leads to a slightly higher number of knee surgeries has not been determined). Surgery: patients undergo total knee arthroplasty (TKA) at widely differing stages of osteoarthritis. Most studies do not select patients on the basis of osteoarthritis severity, and the lack of consistent inclusion criteria produces highly heterogeneous cohorts that undermine the validity of these studies. In a systematic review of patients undergoing hip or knee arthroplasty, the following presurgical factors showed a strong and consistent association with greater postsurgical pain: female sex, low socioeconomic status, higher preoperative pain, the presence of comorbidities or low back pain, worse preoperative functional status, and the presence of psychological factors (depression, anxiety, or catastrophizing). According to a Cochrane review of shoulder replacement surgery for osteoarthritis and rotator cuff tear arthropathy, there are no high-quality studies to determine whether it is more effective than other treatments, or which technique would be most effective in different situations.
A systematic review of clinical practice guideline recommendations for musculoskeletal pain care identifies 11 high-quality recommendations:
• Ensure that care is patient-centered
• Identify red flags
• Address psychosocial factors
• Use imaging tests selectively
• Perform a physical examination
• Monitor the patient's progress
• Provide information/education
• Address physical activity and exercise
• Use manual therapy only as an adjunct treatment
• Offer high-quality non-surgical care before proposing surgery
• Try to keep the patient active at work
When they appear, the symptoms of osteoarthritis are highly variable, ranging from mild and intermittent to more persistent and severe. As with other causes of chronic non-cancer pain (CNCP), our proposal is to apply a salutogenic approach, in which actions are directed toward well-being and healthy aging; health is conceived positively, with an emphasis on promoting what generates health, moving away from a pathogenic framing. We believe that approaching initial pain in this way may prevent its persistence, but for this approach to succeed, the different professionals involved must deliver consistent messages, avoiding treatments that are unnecessary or even harmful to patients. If the patient consults again, the possibility of alternative processes should be reassessed at some point. Each successive consultation for this reason is an opportunity to expand the advice given. According to the conclusions reached by Hurley et al., once pain becomes chronic it affects all domains of a person's life; patients' beliefs about chronic pain shape their attitudes and behaviors, and they show confusion about the cause of the pain and bewilderment at its variability and randomness.
Without adequate information and advice from healthcare professionals, patients do not know what they should and should not do and, as a result, avoid activity for fear of harming themselves. Osteoarthritis is a frequent reason for consultation in primary care. We propose changing the focus of patient care and reviewing both how information is given about the condition prompting the consultation and whether the evidence behind our advice and recommendations is the most up to date. When explaining initial pain, it is advisable to avoid the generalized association of that pain with the osteoarthritic changes seen on radiographs, since doing so can remove many negative connotations from the patient's thinking (the fear with which initial pain is faced is an important factor in its persistence). Simply providing care centered on the patient and their biopsychosocial context, and shifting the goal away from making the pain disappear completely and toward improving or maintaining patients' function through education and physical exercise, may itself bring about change. We would like to stress that, although the guidelines emphasize addressing weight, weight is the result of multiple biopsychosocial factors, many of which are beyond the patient's control. Moreover, we consider that a weight-centric approach has negative effects on patients' mental health. For this reason, it may be more worthwhile to address osteoarthritis through the other factors (beliefs, physical exercise, and so on) discussed in this article. In persistent pain of years' duration, the pain may not disappear completely, but an improvement in function will give patients a better quality of life.
Furthermore, adhering to current recommendations will optimize resources without causing harm or iatrogenesis through invasive procedures and treatments with side effects and/or no demonstrated scientific evidence in osteoarthritis. "Pain is inevitable, suffering is optional" is the conclusion the Buddha reached after years of meditation. This work did not involve animal experimentation, does not include human subjects, and is not a clinical trial. This research received no specific funding from public, commercial, or not-for-profit agencies. The authors declare that they have no conflicts of interest.
The Relevance of Integrating CYP2C19 Phenoconversion Effects into Clinical Pharmacogenetics
2a2f5406-3c86-4175-ae97-877a04d72276
10948286
Pharmacology[mh]
For pharmacogenetic (PGx) considerations in psychopharmacological treatment, clinical recommendations are available for patients treated with tricyclic antidepressants and selective serotonin reuptake inhibitors, which specify how to adjust dosages according to the CYP2D6 and CYP2C19 phenotypes of the patient. Currently, the genotype-inferred phenotypes are primarily considered. Concomitant drugs that inhibit cytochrome P450 (CYP) enzyme activity or induce enzyme expression can cause phenoconversion (PC) effects. PC therefore leads to a discordance between the genotype-derived phenotype and the clinically observed phenotype (functional enzyme status). In this context, for example, bupropion or fluoxetine (CYP2D6) and fluvoxamine or fluoxetine (CYP2C19) are potential perturbators of the relevant CYP enzymes. Experimental methods to measure PC in patients (for example, using the "Geneva Micrococktail") are not suitable for clinical routine; therefore, a method that does not interfere with the complex therapy of vulnerable psychiatric patients would be desirable. To address this, a calculator tool for CYP2D6 was established to integrate standardized assessments of PC into clinical practice. The activity score of CYP2D6 is multiplied by a factor corresponding to the inhibitory properties of the comedication (strong/moderate/weak). The adjusted activity score is then assigned to the adjusted phenotype. As patients are routinely treated with multiple drugs in clinical practice, PC is common among psychiatric inpatients. Considering the CYP2D6 functional enzyme status, the poor (PM) and intermediate metabolizer (IM) statuses are much more common, and the normal metabolizer (NM) status is less common, compared with the genotype-inferred phenotype. For example, a patient genotyped as a CYP2D6 NM treated with bupropion will phenoconvert to a CYP2D6 PM.
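The calculator logic described above (multiply the CYP2D6 activity score by an inhibitor-strength factor, then map the adjusted score to a phenotype) can be sketched as follows. The numeric factors (0 for strong, 0.5 for moderate, 1 for weak or no inhibition) and the score-to-phenotype cut-offs follow the commonly used CPIC-style convention and are assumptions here, not necessarily the cited tool's exact values:

```python
# Assumed inhibitor-strength factors (CPIC-style convention).
INHIBITOR_FACTOR = {"none": 1.0, "weak": 1.0, "moderate": 0.5, "strong": 0.0}

def adjusted_cyp2d6_phenotype(activity_score, inhibitor="none"):
    """Phenoconversion-adjusted CYP2D6 phenotype from an activity score.

    Cut-offs (assumed): 0 -> PM; >0 and <1.25 -> IM; 1.25-2.25 -> NM; >2.25 -> UM.
    """
    score = activity_score * INHIBITOR_FACTOR[inhibitor]
    if score == 0:
        return "PM"
    if score < 1.25:
        return "IM"
    if score <= 2.25:
        return "NM"
    return "UM"

# A genotypic NM (activity score 1.5) on bupropion, a strong CYP2D6 inhibitor,
# phenoconverts to a functional PM, as in the example given in the text.
print(adjusted_cyp2d6_phenotype(1.5, "strong"))  # PM
print(adjusted_cyp2d6_phenotype(1.5, "none"))    # NM
```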
Not considering PC in the interpretation of PGx results can lead to inappropriate drug selection or false dosing recommendations, which in turn increase the risk of adverse drug reactions or nonresponse. Consequently, the phenoconversion effects of CYP2D6 are clinically relevant; however, their integration into clinical routine is currently rare. To date, data on the relevance of PC for CYP2C19 are missing. One study described a decrease in CYP2C19 NM and an increase in IM when considering PC; however, the authors did not report statistical significance. Unlike for CYP2D6, different methods are available for CYP2C19 to correct for PC effects, taking into account the presence of an inducer or of a moderate or strong inhibitor. According to Bousman et al., in the presence of a moderate CYP2C19 inhibitor, the phenotype is converted to the next lower activity phenotype, whereas a concomitant strong inhibitor leads to conversion into a PM functional enzyme status regardless of the genotype-derived status. If an inducer is present, the phenotype is converted to the next higher activity phenotype. In contrast, according to Hahn and Roll, in the presence of a moderate or strong inhibitor, NM and IM are phenoconverted to PM, whereas rapid (RM) and ultrarapid metabolizers (UM) are both converted to IM. In the presence of a moderate or strong inducer, NM and RM are phenoconverted to UM, whereas IM is converted to NM. Thus, the latter method is stricter in the presence of a moderate CYP2C19 inhibitor. However, there is currently no consensus on any approach to adjust CYP2C19 phenotypes. Physiologically based pharmacokinetic modeling is another approach to predict phenoconversion effects. A model predicting the phenoconversion of CYP2C19 by esomeprazole is available; beyond that, however, available models mainly focus on specific drug-drug interactions.
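The two CYP2C19 correction rules can be written down directly from the descriptions above. A minimal sketch (the handling of cases not explicitly covered, such as a PM taking an inducer, is an assumption):

```python
# Sketch of the two CYP2C19 phenoconversion rules described above.
# The phenotype order is the usual activity ladder; behavior for cases
# the text does not cover (e.g. PM + inducer) is assumed to be "no change".

LADDER = ["PM", "IM", "NM", "RM", "UM"]

def bousman(phenotype, inhibitor=None, inducer=False):
    """Bousman et al.: moderate inhibitor -> next lower activity phenotype;
    strong inhibitor -> PM regardless of genotype; inducer -> next higher."""
    i = LADDER.index(phenotype)
    if inhibitor == "strong":
        return "PM"
    if inhibitor == "moderate":
        return LADDER[max(i - 1, 0)]
    if inducer:
        return LADDER[min(i + 1, len(LADDER) - 1)]
    return phenotype

def hahn_roll(phenotype, inhibitor=None, inducer=False):
    """Hahn and Roll: moderate/strong inhibitor: NM/IM -> PM, RM/UM -> IM;
    moderate/strong inducer: NM/RM -> UM, IM -> NM."""
    if inhibitor in ("moderate", "strong"):
        return {"NM": "PM", "IM": "PM", "RM": "IM", "UM": "IM"}.get(phenotype, phenotype)
    if inducer:
        return {"NM": "UM", "RM": "UM", "IM": "NM"}.get(phenotype, phenotype)
    return phenotype

# The two methods diverge for a NM taking a moderate CYP2C19 inhibitor:
print(bousman("NM", inhibitor="moderate"))    # IM
print(hahn_roll("NM", inhibitor="moderate"))  # PM (the stricter rule)
```

This makes the difference noted in the text concrete: only in the presence of a moderate inhibitor do the two methods assign different functional enzyme statuses.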
Aside from CYP2D6, CYP2C19 is an important enzyme in the metabolism of psychotropic drugs, and its phenotype affects the serum concentrations of many drugs. Serum concentrations of selective serotonin reuptake inhibitors and tricyclic antidepressants in particular are affected by CYP2C19 phenotypes; in addition, in a previous study, CYP2C19 phenotypes also affected venlafaxine serum concentrations. So far, studies reporting the pharmacokinetics of these drugs with respect to the CYP2C19 functional enzyme status in a clinical setting are missing. To address these prevailing issues and thereby improve the interpretation of PGx results on CYP2C19, the primary goal was to investigate how considering PC alters the CYP2C19 phenotype status. Different methods of including phenoconversion effects were applied to compare the effect of the correction method. Following Mostafa et al., PC should be calculated rather than measured, both to spare psychiatric patients additional testing and to obtain results applicable to routine clinical practice. Second, as an exploratory goal, this study investigates how the CYP2C19 functional enzyme status affects serum concentrations and metabolite-to-parent ratios of psychotropic drugs.

Patients

Wuerzburg Sample

In the Wuerzburg sample, 212 inpatients at the Department of Psychiatry, Psychosomatics, and Psychotherapy of the University Hospital of Wuerzburg with available genotype data as well as therapeutic drug monitoring (TDM) results were included in the analyses. Only adult patients (≥18 years of age) were included. Genotyping of CYP2D6 and CYP2C19, as well as TDM, was performed at the physician's discretion as part of the clinical routine. TDM was performed according to the guidelines of the TDM expert group of the Arbeitsgemeinschaft für Neuropsychopharmakologie und Pharmakopsychiatrie (AGNP).
Genotyping for CYP2D6 and CYP2C19 was performed according to the recommendations of the German Genetic Diagnostics Commission and the procedures of the German Genetic Diagnostics Act, with written informed consent. Genotypes and serum concentrations were determined between January 2020 and December 2021. To avoid bias in the case of multiple serum concentration determinations for one drug in the same patient, only the latest determination per analyte was included in the analyses. The retrospective analysis of clinical routine data was approved by the Wuerzburg ethics committee (20220120 02) and was performed in accordance with the principles of the Declaration of Helsinki.

Frankfurt Sample

Adult inpatients (≥18 years of age) admitted to the Department of Psychiatry, Psychosomatic Medicine and Psychotherapy of the University Hospital Frankfurt due to a depressive episode (single major depressive episode, recurrent depression) were genotyped for CYP2D6 and CYP2C19 as part of the FACT-PGx study. TDM was performed as part of the clinical routine at the physician's discretion, according to the guidelines of the TDM expert group of the AGNP. Data of 104 patients who took part in the FACT-PGx study with available TDM data were included in the analyses. Genotypes and serum concentrations were determined between July 2021 and March 2022. To avoid bias in the case of multiple serum concentration determinations for one drug in the same patient, only the latest determination per analyte was included in the analyses. The study was approved by the local ethics committee of the University of Frankfurt (2021–138) and carried out in accordance with the ethical principles of the Declaration of Helsinki, version 2013. Written informed consent was obtained from each participant.
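The "latest determination per analyte" rule used in both cohorts can be sketched as a simple deduplication step (field names and values below are illustrative, not study data):

```python
# Sketch of the deduplication rule described above: when a patient has
# multiple serum-concentration determinations for the same drug, keep
# only the latest one per (patient, analyte) pair. Illustrative records.
from datetime import date

records = [
    {"patient": "A", "analyte": "sertraline", "date": date(2021, 3, 1), "conc": 40.0},
    {"patient": "A", "analyte": "sertraline", "date": date(2021, 6, 9), "conc": 55.0},
    {"patient": "A", "analyte": "quetiapine", "date": date(2021, 4, 2), "conc": 120.0},
    {"patient": "B", "analyte": "sertraline", "date": date(2021, 5, 5), "conc": 30.0},
]

latest = {}
for rec in records:
    key = (rec["patient"], rec["analyte"])
    # keep the record with the most recent determination date per key
    if key not in latest or rec["date"] > latest[key]["date"]:
        latest[key] = rec

deduplicated = list(latest.values())
print(len(deduplicated))                      # 3
print(latest[("A", "sertraline")]["conc"])    # 55.0 (the later of the two)
```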
Genotyping and therapeutic drug monitoring

Genotyping and serum concentration determinations in both cohorts were performed at the Department of Psychiatry, Psychosomatics, and Psychotherapy of the University Hospital of Wuerzburg. Details about the methods are available in Supplement 1. Haplotypes were defined for all analyzed single nucleotide polymorphisms according to gene-specific haplotype tables from the PharmVar homepage (https://www.pharmvar.org/genes; Supplement 1). Phenotypes were determined according to the Clinical Pharmacogenetics Implementation Consortium (CPIC) specifications. Dose-corrected serum concentrations (serum concentration/dose, CD) of either the active moiety of the drug (serum concentration of parent drug + active metabolite; CD AM) or the parent drug alone, depending on the relevance for treatment response, and metabolite-to-parent ratios (MPR) were calculated. Dimensional outliers (≥3 SD from the mean) in CD and MPR were set as missing data.

Phenoconversion effects

As there is no consensus on how to correct for the phenoconversion effects of CYP2C19 without using a “drug cocktail”, two available methods were used and compared to each other. The phenoconversion effects were assessed according to Bousman et al. and to Hahn and Roll; for details, see the Introduction and Supplement 1. Following Bousman et al., concomitant drugs with the propensity to cause phenoconversion due to inhibitory or inducing effects on CYP2C19 were derived from the Flockhart table (Supplement 2). For a supplemental analysis, drugs with inhibitory and inducing effects on CYP2C19 were derived from the FDA table (Supplement 3).

Statistical analyses

Statistical analyses were conducted in R v4.0.4. Differences in the CYP2C19 functional enzyme status obtained by the different correction methods were investigated by performing McNemar tests with continuity correction.
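The dose-corrected concentration (CD), metabolite-to-parent ratio (MPR), and outlier handling described above can be sketched as follows (illustrative numbers; the study's actual computations were run in R and may differ in detail):

```python
# Sketch of the pharmacokinetic measures described above: dose-corrected
# serum concentration (CD), metabolite-to-parent ratio (MPR), and masking
# of dimensional outliers (>= 3 SD from the mean) as missing data.
from statistics import mean, stdev

def dose_corrected(parent, metabolite, dose_mg, active_moiety=True):
    """CD = serum concentration / dose; for the active moiety, the parent
    and active-metabolite concentrations are summed first."""
    conc = parent + metabolite if active_moiety else parent
    return conc / dose_mg

def mpr(parent, metabolite):
    """Metabolite-to-parent ratio."""
    return metabolite / parent

def mask_outliers(values, n_sd=3):
    """Set values >= n_sd sample standard deviations from the mean to None."""
    m, s = mean(values), stdev(values)
    return [v if abs(v - m) < n_sd * s else None for v in values]

print(dose_corrected(120.0, 80.0, 100.0))        # 2.0 (ng/mL per mg)
print(round(mpr(120.0, 80.0), 2))                # 0.67
print(mask_outliers([1.0] * 20 + [1000.0])[-1])  # None (masked as missing)
```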
We performed Benjamini-Hochberg correction, as Bonferroni correction tends to be too conservative for genomic analyses due to the linkage equilibrium between individual genotypes. A p-value <0.05 was considered significant. Differences in CD and MPR depending on the CYP2C19 functional enzyme status were investigated by performing linear regression analyses, corrected for age and sex. In the amitriptyline, venlafaxine, and risperidone samples, the CYP2D6 functional enzyme status was also included in the regression analyses, as the serum concentrations of these drugs also depend on the CYP2D6 functional enzyme status. Chi-squared tests or Fisher's exact tests were performed to investigate the association between the phenotype and the serum concentration being below, above, or within the therapeutic reference range for the respective drug. To obtain reliable statistical results, groups (below, above, or within the therapeutic reference range) with fewer than five patients were excluded from the analyses. A p-value <0.05 was considered significant.
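The paired testing and multiple-testing correction described above can be sketched as follows (counts and p-values are illustrative, not the study's data; the actual analyses were run in R):

```python
# Sketch of the testing pipeline described above: a McNemar test with
# continuity correction on paired classifications (genotype-inferred vs.
# phenoconverted status), followed by Benjamini-Hochberg adjustment.
import math

def mcnemar_cc(b, c):
    """McNemar test with continuity correction; b and c are the two
    discordant-pair counts. Returns the chi-square statistic and p-value
    (chi-square with 1 df, via the complementary error function)."""
    if b + c == 0:
        return 0.0, 1.0
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(stat / 2))  # survival function of chi2(1 df)
    return stat, p

def benjamini_hochberg(pvals):
    """Step-up BH adjustment; returns adjusted p-values in input order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for rank in range(n - 1, -1, -1):      # walk from largest p downwards
        i = order[rank]
        running_min = min(running_min, pvals[i] * n / (rank + 1))
        adjusted[i] = running_min
    return adjusted

# e.g. 12 patients reclassified NM -> non-NM and 2 the other way:
stat, p = mcnemar_cc(12, 2)
print(round(stat, 2), round(p, 4))
print([round(q, 3) for q in benjamini_hochberg([0.01, 0.04, 0.03, 0.20])])
```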
Patient Samples

The combined sample comprised 316 patients, who were 44.2±15.4 (mean±standard deviation (SD)) years old; 54.1% were female. Among these, 144 patients were nonsmokers, 99 were smokers, and for 73 patients no information on smoking status was available. Patients received between 0 and 18 additional drugs in combination (mean±SD 4.1±3.5). A more detailed demographic overview is given in . Eighteen patients were identified as CYP2C19 UM (genotype-inferred phenotype), 95 patients as RM, 129 as NM, 69 as IM, and 5 as PM. The number of serum concentration determinations is listed in . Only patients who received venlafaxine (N=117), amitriptyline (N=100), mirtazapine (N=85), sertraline (N=64), escitalopram (N=52), risperidone (N=73), or quetiapine (N=125) were included in the analyses, to limit the type II error probability. Demographic data of these patients are given in Supplement 4. To increase statistical power, all analyses were performed in the combined sample.
Phenoconversion Effect

Results on phenoconversion effects are given per TDM request, as the concomitant drugs at the time of each TDM request affect the genotype-inferred phenotype. At baseline, 40.9% of the patients were classified as CYP2C19 NM; after accounting for PC according to Bousman et al. (PC Bousman), the number significantly decreased, and 39.5% were classified as NM PC (p=0.05). According to Hahn and Roll (PC Hahn&Roll), the number of NM did not change significantly (p=0.08); however, the number of PM significantly increased from 1.1% to 2.7% (p<0.001). The number of IM, UM, and RM did not change significantly with either of the correction methods. Patients prone to PC are summarized in Supplement 5. As only five patients with CYP2C19-affecting concomitant medications according to the FDA phenoconversion list were included, the number of NM, IM, PM, RM, and UM did not change significantly after considering PC (Supplement 3).

Venlafaxine

CD AM and MPR of venlafaxine were not associated with genotype-inferred CYP2C19 phenotypes, functional enzyme status Bousman, or functional enzyme status Hahn&Roll (Supplement 6). Genotype-inferred CYP2C19 phenotypes, as well as the functional enzyme status, were not associated with serum concentrations below, above, or within the therapeutic reference range (Supplement 6).

Amitriptyline

CD AM of amitriptyline was associated with genotype-inferred CYP2C19 phenotypes, with RM and UM showing lower CD compared to NM (βstd=−0.52, p=0.02; βstd=−0.68, p=0.04) (Supplement 6). MPR was not associated with genotype-inferred CYP2C19 phenotypes, and these were not associated with serum concentrations below, above, or within the therapeutic reference range. Considering PC Bousman or PC Hahn&Roll did not change the number of NM, IM, PM, RM, and UM; consequently, analyses on functional enzyme status were not performed (Supplement 6).
Mirtazapine

CD as well as MPR of mirtazapine were not associated with genotype-inferred CYP2C19 phenotypes, nor with functional enzyme status. Serum concentrations of mirtazapine within, above, or below the respective therapeutic reference range were not associated with genotype-inferred CYP2C19 phenotypes, nor with functional enzyme status (Supplement 6).

Sertraline

CD of sertraline was associated with genotype-inferred CYP2C19 phenotypes, with higher CD in PM compared to NM (βstd=2.67; p=0.005). A trend towards higher and lower CD in IM and UM, respectively, compared to NM was observed (βstd=0.74, p=0.06; βstd=−0.94, p=0.06). The number of NM, IM, PM, RM, and UM considering PC Bousman was concordant with the number considering PC Hahn&Roll. CD was associated with functional enzyme status, with higher CD in PM compared to NM (βstd=2.37, p<0.001). Metabolites were not measured; thus, analyses on MPR were not possible. Only one patient showed serum concentrations below the therapeutic reference range, and no patient showed concentrations above the reference range; therefore, further analyses could not be conducted (Supplement 6).

Escitalopram

CD of escitalopram was associated with genotype-inferred CYP2C19 phenotypes, with lower CD in UM compared to NM (βstd=−1.96, p=0.05). MPR of escitalopram was not associated with genotype-inferred CYP2C19 phenotypes. Genotype-inferred CYP2C19 phenotypes were associated with serum concentrations below or within the therapeutic reference range (p<0.001). Post-hoc tests showed that the frequencies of RM compared to IM were significantly different (p=0.007). Considering PC Bousman or PC Hahn&Roll did not change the number of NM, IM, PM, RM, and UM; therefore, analyses on functional enzyme status were not performed (Supplement 6).

Risperidone

CD AM as well as MPR of risperidone were not associated with genotype-inferred CYP2C19 phenotypes, nor with functional enzyme status.
Serum concentrations of risperidone within, above, or below the respective therapeutic reference range were not associated with genotype-inferred CYP2C19 phenotypes, nor with functional enzyme status (Supplement 6).

Quetiapine

CD and MPR of quetiapine were not associated with genotype-inferred CYP2C19 phenotypes; likewise, serum concentrations of quetiapine within, above, or below the respective therapeutic reference range were not associated with genotype-inferred CYP2C19 phenotypes. Considering PC Bousman or PC Hahn&Roll did not change the number of NM, IM, PM, RM, and UM; therefore, analyses on functional enzyme status were not performed (Supplement 6).
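As an illustration of the covariate-adjusted regressions reported above, the sketch below dummy-codes CYP2C19 status against the NM reference and fits an ordinary least-squares model with age and sex as covariates. The data are synthetic and noise-free, purely to show the model structure; the study's analyses were run in R.

```python
# Sketch of a covariate-adjusted regression: CD regressed on CYP2C19
# status dummies (NM as reference) plus age and sex. Synthetic,
# noise-free data for illustration only — not study data.
import numpy as np

# status, age, and sex (1 = female) for a few synthetic patients
status = ["NM", "NM", "IM", "PM", "UM", "IM", "NM", "PM"]
age = np.array([30, 45, 50, 60, 25, 40, 55, 35], dtype=float)
sex = np.array([1, 0, 1, 0, 1, 0, 1, 1], dtype=float)

levels = ["IM", "PM", "UM"]  # dummy-coded against the NM reference
dummies = np.array([[1.0 if s == lev else 0.0 for lev in levels] for s in status])

# build a noise-free outcome with known effects: PM raises CD, UM lowers it
true_beta = {"IM": 0.5, "PM": 2.0, "UM": -1.0}
cd = 1.0 + dummies @ np.array([true_beta[l] for l in levels]) + 0.01 * age + 0.2 * sex

# design matrix: intercept, status dummies, age, sex
X = np.column_stack([np.ones(len(status)), dummies, age, sex])
beta, *_ = np.linalg.lstsq(X, cd, rcond=None)

for name, b in zip(["intercept", "IM", "PM", "UM", "age", "sex"], beta):
    print(f"{name}: {b:+.2f}")
```

With noise-free data, least squares recovers the constructed coefficients exactly; with real data, the estimated dummy coefficients play the role of the standardized betas reported above.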
In this naturalistic setting, we investigated how correcting for PC alters the CYP2C19 phenotype/functional enzyme status in clinical routine. We applied different methods to correct for the phenoconversion effects, as there is no consensus yet on how to adjust CYP2C19 phenotypes. Depending on the correction method, our findings reveal a significant decrease in CYP2C19 NM and a significant increase in PM. We explored the association between CYP2C19 functional enzyme status and the pharmacokinetics of antidepressant and antipsychotic drugs and found significant associations between drug exposure of amitriptyline, sertraline, and escitalopram and CYP2C19 phenotypes, as well as functional metabolizer status (PC Bousman and PC Hahn&Roll). We applied two methods to calculate PC rather than measuring it, e.g., by using the “Geneva Micrococktail”, to reduce the burden on psychiatric patients while still obtaining results applicable to routine clinical practice. CYP2C19 phenotype frequencies in our clinical routine sample are in concordance with the phenotype frequencies for Europeans. Fewer than one in two patients were CYP2C19 NM. When including PC Bousman, in accordance with a previous study, the number of NM decreased; however, no statistical results were reported previously. When applying PC Hahn&Roll, due to the stricter classification for patients taking a moderate CYP2C19 inhibitor, the number of PM increased. Thus, the method of correction for PC significantly affected the frequencies of the functional enzyme status.
As including PC altered the frequencies of phenotypes/functional enzyme status of CYP2C19, PC is relevant not only for CYP2D6 but also for CYP2C19; however, the effects may be less pronounced. PC rates in the present study seem much lower than in previous studies. Mostafa et al. included not only psychiatric patients; in addition, esomeprazole was used more often in the previous study than in the present one. In clinical practice in Wuerzburg and Frankfurt, pantoprazole is preferred over (es)omeprazole due to its preferable drug interaction profile. Compared to CYP2D6, CYP2C19-affecting drugs were used less often; only 17 patients (5.4%) were prone to CYP2C19 PC, whereas 24.1% of the patients were prone to CYP2D6 PC. Thus, due to the limited use of CYP2C19-affecting drugs, PC is less common for CYP2C19; nevertheless, PC is relevant for an individual treated with CYP2C19-inhibiting/inducing drugs, especially esomeprazole. Therefore, we suggest considering PC not only for CYP2D6 but also for CYP2C19 as part of individualized treatment in psychiatry. Considering the FDA phenoconversion table, the number of NM, IM, PM, RM, and UM did not change significantly after taking PC into account. However, in the FDA phenoconversion table, esomeprazole is not considered a CYP2C19 inhibitor. This contrasts with the product information of the European Medicines Agency (EMA), according to which esomeprazole is a CYP2C19 inhibitor and the potential for interactions with drugs metabolized through CYP2C19 should be considered when starting or ending treatment with esomeprazole. Moreover, clinical data showed that esomeprazole inhibits CYP2C19 to a clinically relevant extent. For CYP2D6, there is consensus among experts that if the patient is taking a strong or moderate inhibitor, the activity score of CYP2D6 should be multiplied by 0 or 0.5, respectively. Administration of a weak inhibitor does not require adjustment, as the area under the curve is only minimally affected by weak inhibitors.
This concurs with the definition of the relevance of drug interactions in general, which are considered relevant only with moderate and strong inhibitors. In contrast to CYP2D6, there are no activity scores for CYP2C19; therefore, establishing a method for including PC of CYP2C19 is more challenging, and there is currently no consensus on how to deal with weak/moderate/strong inhibitors. Before CYP2C19 PC is included in clinical routine processes, studies must show that serum concentrations correlate better with the functional enzyme status than with the genotype-inferred phenotype; if relevant, a consensus on how to adjust for PC has to be developed. In the meantime, to ensure effective and safe pharmacotherapy in patients affected by CYP2C19 PC and treated with drugs metabolized by CYP2C19, therapy should be closely monitored by TDM to prevent adverse drug reactions. We explored the association between pharmacokinetics and CYP2C19 phenotypes and functional enzyme status using linear regression analyses to control for age and sex. In the analyses on venlafaxine, amitriptyline, and risperidone, we also controlled for the CYP2D6 functional enzyme status, as CYP2D6 has previously been shown to impact drug exposure of these drugs. Venlafaxine is primarily metabolized by CYP2D6 and, to a lesser extent, by CYP2C19, making the impact of CYP2C19 harder to assess as a single gene. Therefore, for better accuracy, we evaluated the CYP2D6/CYP2C19 combination. CD AM of venlafaxine was not associated with CYP2C19 phenotypes nor with functional enzyme status. This contrasts with initial results that CYP2C19 phenotypes affected the active-moiety serum concentration of venlafaxine. However, previously, CYP2C19 was assessed as a single gene, not as the CYP2D6/CYP2C19 combination.
Thus, the combined approach showed that CYP2D6 rather than CYP2C19 affected the CD AM of venlafaxine (the CD AM was associated with the CYP2D6 functional enzyme status, with higher values in CYP2D6 IM than in NM (Supplement 6)), in accordance with PGx dosing guidelines for venlafaxine. As with venlafaxine, CYP2D6 is the enzyme primarily involved in the metabolism of amitriptyline and should be considered in combination with CYP2C19. After correction for the CYP2D6 functional enzyme status, CYP2C19 was associated with the CD AM of amitriptyline, with lower values in RM and UM than in NM. This concurs with dosing guidelines that consider both CYP2D6 and CYP2C19 phenotypes for treatment with amitriptyline. None of the patients on amitriptyline had been taking medication with relevant inhibitory or inducing effects on CYP2C19, so no PC occurred and its impact could not be determined. In our routine clinical setting, the CD of sertraline was associated with CYP2C19 phenotypes and functional enzyme status. The numbers of NM, IM, PM, RM, and UM did not differ when PC Bousman and PC Hahn&Roll were applied. This highlights the major role of CYP2C19 in the metabolism, more precisely the N-demethylation, of sertraline in vivo, even though other CYP enzymes are also involved, and supports clinical guidelines that base dosing recommendations on CYP2C19 phenotypes. Escitalopram, too, is mainly metabolized by CYP2C19, and it has been recommended that CYP2C19 phenotypes be considered for dose adjustments in escitalopram-treated patients. This accords with our finding that CYP2C19 phenotypes were associated with the CD of escitalopram. In addition, CYP2C19 was associated with serum concentrations below, within, or above the therapeutic reference range of escitalopram.
Patients with serum concentrations below the therapeutic reference range were more often RM than IM; in contrast, patients with serum concentrations within the therapeutic reference range were more often IM than RM. Thus, CYP2C19 RM may be at increased risk of low serum concentrations. However, as with amitriptyline, no patients on escitalopram were taking medication with relevant inhibitory or inducing effects on CYP2C19 that could cause PC. Because serum concentrations of mirtazapine, risperidone, and quetiapine were not associated with CYP2C19 phenotypes/functional enzyme status, CYP2C19 evidently does not affect the serum concentrations of these drugs in a clinically relevant way. This is in accordance with the knowledge that CYP2C19 is not involved in their metabolism. Consequently, drug-drug interactions with respect to CYP2C19 are likely negligible for mirtazapine, risperidone, and quetiapine. This also shows that enzymes with altered function can be compensated by other enzymes involved in the metabolism of a drug. Consequently, as shown previously for sertraline, a combined pharmacogenomic algorithm including more than two genes may predict serum concentrations more precisely than one or two individual genes. Bousman therefore proposed evidence-based panel testing with a minimum gene set (CYP2C19, CYP2D6, CYP2C9, HLA-A, HLA-B). Strengths and limitations The major strength of our analysis is its relevance for a routine clinical setting. Our retrospective naturalistic study in two independent cohorts provides real-life clinical routine data from a high number of patients. Pharmacokinetic analyses were controlled for age and sex and, where relevant, for the CYP2D6 functional enzyme status.
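A minimal sketch of such a covariate-adjusted analysis is shown below using ordinary least squares on synthetic data. All variable names and values here are illustrative assumptions, not the study's dataset or code; the phenotype is dummy-coded with NM as the reference category, and a CYP2D6 term would be added the same way where relevant.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120

# Synthetic covariates: age, sex, and a five-level CYP2C19 phenotype
age = rng.integers(18, 80, size=n).astype(float)
sex_male = rng.integers(0, 2, size=n).astype(float)
pheno = rng.choice(["PM", "IM", "NM", "RM", "UM"], size=n)
log_cd = rng.normal(0.0, 0.3, size=n)  # log dose-corrected serum concentration

# Dummy-code the phenotype (NM = reference), then regress the log
# dose-corrected concentration on phenotype, controlling for age and sex.
levels = ["PM", "IM", "RM", "UM"]
dummies = np.column_stack([(pheno == lv).astype(float) for lv in levels])
X = np.column_stack([np.ones(n), age, sex_male, dummies])
beta, *_ = np.linalg.lstsq(X, log_cd, rcond=None)  # 7 coefficients
```

Each phenotype coefficient in `beta` then estimates the shift in log concentration relative to normal metabolizers after adjustment for age and sex.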
However, because of the limited number of patients who received CYP2C19-affecting drugs and whose phenotype was consequently corrected for PC, it cannot be assessed whether correction for PC, and if so whether PC Bousman or PC Hahn&Roll, is more strongly associated with serum concentrations than the genotype-inferred phenotype. Inhibitors and inducers derived from index drugs were categorized as weak/moderate/strong. This categorization of inhibitor/inducer strength, however, is not consistent across sources; nevertheless, our use of the Flockhart table was in line with a previous study by Bousman et al. Clinical data, for example on clinical improvement, were not available in either cohort. A further limitation is that the daily doses of the CYP2C19 inhibitors/inducers were not recorded, owing to the retrospective nature of this study, although a recent study showed that the phenoconversion effect might be dose-dependent. In addition, phenoconversion was calculated from the genotype-inferred phenotype rather than from haplotypes because of the low number of patients, although a study by de Jong showed that phenoconversion might depend on the specific polymorphism (e.g., *1/*17 vs. *2/*17). Moreover, patients were not restricted to a particular diet, so nutrition may have affected enzyme inhibition/induction. Comorbidities and ethnicities were not recorded. The inclusion criteria of the two samples differed: the Würzburg cohort comprised all patients for whom TDM and PGx data were available, whereas the Frankfurt cohort included only patients suffering from a depressive episode. Finally, drugs are metabolized not by one enzyme but by multiple enzymes in combination, whereas we considered only CYP2C19 and, where relevant, its combination with CYP2D6. Nevertheless, as such real-life PGx data are rare, our results are important in supporting routine PGx testing to provide precision medicine.
Conclusion Phenoconversion effects are relevant for CYP2C19; however, they occur less often than for CYP2D6 owing to the limited use of CYP2C19-perturbing drugs. Including PC effects for both enzymes in routine clinical processes may maximize the potential benefits of PGx testing by improving the prediction of pharmacokinetics, as not only the genotype-inferred phenotype but also the more specific (dynamic) functional status of the enzyme is taken into account. However, before CYP2C19 PC is included in routine clinical processes, adequately powered studies with large numbers of patients must show that serum concentrations correlate better with the functional enzyme status than with the genotype-inferred phenotype. If relevant, a consensus on how to adjust for PC has to be developed. In our study, with its limited sample size, PC of CYP2C19 changed phenotypes but did not provide superior correlations with serum concentrations. Based on our results, we suggest therapeutic drug monitoring to ensure adequate serum concentrations in individual patients treated with CYP2C19-affecting drugs, for example esomeprazole and fluoxetine. Ethical approval All procedures performed in the analysis involving human participants were in accordance with the ethical standards of the institutional research committees and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Project administration: J. Deckert, M. Scherf-Clavel, and M. Hahn; data collection: M. Scherf-Clavel, A. Eckert, M. Hahn, and A. Frantz; analysis and interpretation of the data: M. Scherf-Clavel; writing—original draft preparation: M. Scherf-Clavel; writing—review and editing: M. Scherf-Clavel, H. Weber, S. Unterecker, J. Deckert, A. Reif, M. Hahn, A. Eckert, and A. Frantz. All authors have approved of the contents of this manuscript and provided consent for publication.
Training in critical care cardiology: making the case for a standardized core curriculum
The implementation of interventional cardiac procedures and mechanical circulatory support (MCS) devices has led to a fundamental transformation of critical care cardiology. Today, both technological advances and a more nuanced understanding of outcomes allow for the delivery of various complex procedures in high-risk settings and to older patients with severe comorbidities . Considering the expanding wealth of knowledge and skill required for the management of such complex cases, cardiac intensive care unit (CICU) teams face unprecedented challenges that need to be addressed within the educational pathway of critical care cardiology. In this article, we outline key training elements for aspiring cardiac intensivists and advocate for a contemporary core curriculum that allows for the integration of these elements into an overarching sub-specialization concept in critical care cardiology. Modern cardiovascular fellowship programs typically include a 6-month general or CICU rotation, which may be prolonged depending on individual preferences. The goal for this period set forth by the Core Cardiovascular Training Statement (COCATS) 4 Task Force 13 and the 2020 European Society of Cardiology (ESC) curriculum for the cardiologist is to manage the majority of cardiovascular patients in intensive care settings . Many prerequisites for CICU patient care are taught as part of rotations in cardiac imaging, electrophysiology, and emergency medicine. Core training components during the first CICU rotation comprise advanced hemodynamic monitoring, airway and respiratory management, basic circulatory shock management, cardiac arrest algorithms and post-resuscitation care, sedation, monitoring of neurological function, infection control, and management of multi-organ dysfunction. 
Extracurricular seminars, such as the in-depth courses on acute myocardial ischemia, hypertension, and organ transplantation offered by the European Society of Intensive Care Medicine (ESICM) or the Society of Critical Care Medicine (SCCM), can solidify theoretical knowledge and foster interaction with peers. In general, early scholarly activity and participation in (inter)national meetings are encouraged by teaching institutions and the respective societies. In recent years, we have been witnessing a growing interdependence of interventional and critical care cardiology. Coronary angiography has become a cornerstone of the management of myocardial infarction-related cardiogenic shock. Beyond that, patients presenting with acute decompensated valvular dysfunction, massive pulmonary embolism, arrhythmia-related hemodynamic instability, or intracardiac shunts may qualify for emergent transcatheter procedures. While expert multidisciplinary teams formulate the most promising treatment approach, CICU fellows play a central role in its successful implementation. Following interventional procedures, the access-site vasculature, potential thromboembolic complications, arrhythmias, and end-organ function require constant vigilance. Echocardiographic visualization of implanted devices and assessment of their function and of general hemodynamic changes need to be performed reliably, and specific procedure-related complications must be recognized and addressed in a timely manner. Apart from these major advances in interventional cardiology, contemporary cardiogenic shock management has become a complex endeavor, particularly owing to the widespread use of MCS devices in patients with severe hemodynamic compromise.
Although, based on recent data, the future role of veno-arterial extracorporeal membrane oxygenation (VA-ECMO), Impella, and other MCS options is uncertain, cardiovascular fellows will nonetheless be confronted with the responsibility of handling these devices safely, detecting complications early, and initiating effective countermeasures. After specialization in general cardiovascular or intensive care medicine, accreditation in critical care cardiology necessitates exposure to CICU cases for an extended period of time, ideally within a dedicated sub-specialization program at an accredited tertiary institution. There is consensus on the importance of offering structured, up-to-date training pathways that prepare for the staggering challenges of cardiovascular intensive care and allow for standardized quality assessment. However, sub-specialization models and accreditation standards still vary substantially with respect to training time and curricula. Managing CICU cases independently requires a broad practical skillset, general intensive care expertise, and in-depth knowledge of cardiac illnesses and the respective treatment options. Clinical training elements at this career stage include decision-making regarding interventional procedures in high-risk settings, the knowledge and skillset required for comprehensive peri-interventional care, familiarity with all forms of circulatory shock, and advanced MCS device management including cannulation/decannulation, weaning, LV venting modalities, and combined MCS. A contemporary CICU core curriculum should aim to distinguish the milestones for pursuing a career in critical care cardiology from the scope of critical care training that applies to the broader pool of cardiovascular fellows.
Beyond patient care, critical care cardiologists must be proficient in many other rapidly evolving areas of competence, such as IT systems, billing, professional communication skills (particularly in end-of-life scenarios), and the legal framework of clinical trials. Palliative care, the allocation of limited financial resources, and clinical trial participation involve demanding ethical considerations. Contemporary CICU training programs should incorporate these additional skills as part of a well-rounded education in intensive care medicine. Teaching the vast theoretical and practical armamentarium of critical care cardiology in the context of recent advances in the field calls for an updated core curriculum covering all aspects of critical care training for cardiovascular fellows as well as aspiring CICU specialists. Apart from integrating the ever-expanding list of competences, a holistic training concept must be flexible enough to allow for personalization of the candidate's career pathway. To incentivize fellows to pursue the lengthy and demanding training required to become an expert in critical care cardiology, a modular curriculum seems the most suitable basis for tailoring their education to prior professional experience and personal wishes, including the desire for maternity/paternity leave and international workplace mobility. Importantly, training programs should be designed to allow for interruption of clinical training for research purposes and for hybridization with sub-specialization programs in interventional cardiology or advanced heart failure. The evolution of modern critical care cardiology has generated a burgeoning demand for specialists in this domain of cardiovascular medicine and, by extension, a need for an updated sub-specialization concept that is both comprehensive and flexible.
Despite the general consensus that such training standards are needed, dedicated sub-specialization tracks and advanced fellowships in cardiac intensive care remain rare and heterogeneous, which continues to hamper the development of leaders in the field. A contemporary core curriculum endorsed by the major intensive care societies may incentivize fellows to pursue a career in critical care cardiology, encourage teaching institutions to offer a tangible educational perspective, and ultimately improve the quality of care for critically ill patients.
Correlation Analysis of Microbial Contamination and Alkaline Phosphatase Activity in Raw Milk and Dairy Products
Microbial contamination of raw milk and dairy products is an important source of foodborne pathogens that can adversely affect human health . The microbial contamination of raw milk can affect the safety and the quality of dairy products from the source . During the processing of dairy products, microorganisms from several sources (e.g., personnel, water, equipment, additives, and packaging materials) can cause contamination . Milk microbial contamination is also responsible for significant economic losses at various points throughout the milk production chain . Compared with other commodities, dairy products are easily contaminated and subject to rapid deterioration . The most common bacterial spores in dairy products belong to the genus Bacillus . Mesophilic and thermophilic aerobic Bacillus species are of particular concern because of their high heat resistance and the high thermal stability of their degradation enzymes . Some species of the genus Bacillus have been implicated in food-borne diseases. For example, B. cereus was reported as the causative agent in a large food poisoning outbreak attributed to pasteurized milk . At mesophilic temperatures, some facultative thermophiles, such as B. subtilis , B. licheniformis , and B. pumilus , can also produce toxins . Spoilage caused by Bacillus species has been reported, even in commercially sterilized milk . Alkaline phosphatase (ALP; EC 3.1.3.1), an enzyme that is naturally found in raw milk, can be denatured by pasteurization temperatures. Alkaline phosphatase activity has been used to confirm the efficacy of pasteurization in dairy products. Thus, a dairy product that contains an insignificant amount of active enzyme or no enzyme at all is considered properly pasteurized . 
The aims of this study were: (i) to investigate microbial contamination and alkaline phosphatase activity in raw milk, pasteurized milk, and sterilized milk collected from 13 Chinese provinces, and (ii) to clarify the correlations between the aerobic plate count, aerobic Bacillus abundance, thermophilic aerobic Bacillus abundance, and alkaline phosphatase activity. The findings of this study provide theoretical and practical support for elucidating the microbiological quality of raw, pasteurized, and sterilized milk in China. 2.1. Sampling A total of 435 raw milk, 451 pasteurized milk, and 617 sterilized milk samples were collected randomly from 13 Chinese provinces (or municipalities). Each province contributed 105–189 samples ( ), consisting of raw milk, pasteurized milk, and sterilized milk samples. The volume of each sample was 500 mL. The sampling sites included dairy farms, dairy factories, supermarkets, retail shops, online stores, farmer's markets, and restaurants. The sample collection and the investigation were conducted from April to September 2021. 2.2. Microbiological Analyses and Enzymatic Activity Assays The determination of the aerobic plate count was conducted following the Chinese national food safety standard [GB 4789.2-2016]. Briefly, a 25 g raw milk or dairy product sample was added to 225 mL of 0.85% saline solution and homogenized for 2 min to prepare a 10 −1 dilution. Serial dilutions were prepared with 0.85% saline solution. Three suitable serial dilutions were selected according to the contamination status of the samples. Next, 1 mL was aliquoted from each diluent, transferred into agar medium, and incubated for 48 h at 36 °C. All the colonies appearing on the plates were enumerated. The data were reported as log CFU/g of raw milk or dairy product. The limit of detection (LOD) of this method was 1 log 10 CFU/mL.
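The plate-count arithmetic behind the reported log10 CFU/mL values can be sketched as follows. This is a simplified single-plate calculation under the protocol's 1 mL plating volume; the function name is our own, and multi-plate weighted averaging (as in the cited standards) is omitted.

```python
import math

def log10_cfu_per_ml(colonies, dilution_exponent, plated_ml=1.0):
    """Colonies counted on one plate of the 10^-dilution_exponent dilution,
    scaled back to CFU per mL of the original sample and log10-transformed."""
    cfu_per_ml = colonies * (10 ** dilution_exponent) / plated_ml
    return math.log10(cfu_per_ml)

# e.g. 42 colonies on the 10^-3 plate correspond to 42,000 CFU/mL,
# i.e. about 4.62 log10 CFU/mL
```

A plate with fewer than one colony at the lowest dilution would fall under the stated LOD of 1 log10 CFU/mL and, per the study's statistical convention, be recorded as 0.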
The plate counts of aerobic Bacillus and thermophilic aerobic Bacillus were determined according to the methods of NEN 6813:2014 and NEN 6809:2014 . For aerobic Bacillus detection, a 25 g raw milk or dairy product sample was added into 225 mL of phosphate buffer and homogenized for 2 min to prepare a 10 −1 dilution. A 10 mL diluted sample was transferred into a sterile tube, which was incubated in an 80 °C water bath for 10 min and then cooled in a 20 °C freezer. The cooled sample was diluted serially with phosphate buffer. Three suitable serial dilutions were selected according to the contamination status of the samples. Next, 1 mL was aliquoted from each diluent, transferred into milk plate count (MPC) agar medium, and incubated for 48 h at 36 °C. All the colonies appearing on the plates were enumerated. The data were reported as log CFU/g of raw milk or dairy product. The LOD of this method was 1 log 10 CFU/mL. For thermophilic aerobic Bacillus detection, a 25 g raw milk or dairy product sample was added into 225 mL of phosphate buffer and homogenized for 2 min to prepare a 10 −1 dilution. A 10 mL diluted sample was transferred into a sterile tube, which was incubated in a 100 °C water bath for 30 min and then cooled in a 20 °C freezer. The cooled sample was diluted serially with phosphate buffer. Three suitable serial dilutions were selected according to the contamination status of the samples. Next, 1 mL was aliquoted from each diluent, transferred into dextrose tryptone agar (DTA) medium, and incubated for 48 h at 36 °C. All the colonies appearing on the plates were enumerated. The data were reported as log CFU/g of raw milk or dairy product. The LOD of this method was 1 log 10 CFU/mL. The alkaline phosphatase activity in raw milk and pasteurized milk was measured according to a modified chemiluminescent method . Briefly, a 100 μL milk sample was added into a vial containing 0.5 mL of predispensed chemiluminescent substrate (Beijing Biotai, China) in buffer. 
The contents in the vial were mixed for 5 s, and the vial was attached to a NovaLUM adapter (Charm Sciences, Lawrence, MA, USA) and inserted upright into a NovaLUM analyzer. The fast alkaline phosphatase (F-AP) channel specific to the matrix in the NovaLUM analyzer was activated. The F-AP channel was equipped with a built-in timer and temperature monitor to complete the analysis in 45 s for raw milk or dairy product samples. The LOD of this method was 1.30 log 10 mU/L (20 mU/L), and the LOQ was 1.78 log 10 mU/L (60 mU/L). 2.3. Statistical Analysis The mean, standard deviation, median, minimum and maximum values, and 25th and 75th percentiles were calculated for each microbiological parameter using SPSS 16.0 software (IBM, Armonk, NY, USA). Spearman's correlation analysis was conducted using R, and a p value < 0.05 was indicative of a significant correlation. The values of the aerobic plate count, aerobic Bacillus count, and thermophilic aerobic Bacillus count <LOD were considered as 0, while the values of alkaline phosphatase activity <LOQ were also considered as 0. The correlation graph was produced using an online tool ( http://www.bioinformatics.com.cn/plot_basic_corrplot_corrlation_plot_082 , (accessed on 7 January 2023)). 3.1.
Microbiological and Enzymatic Activity Analyses The results of the microbiological and enzymatic activity analyses of raw and pasteurized milk are outlined in , and the results of sterilized milk are outlined in . The contamination levels of the raw milk samples varied widely. The mean and median values of the aerobic plate count were far below the threshold set by the Chinese national food safety standard [GB19301-2010] (6.30 log 10 CFU/mL). Approximately 9.89% (43/435) of samples had a contamination level higher than the threshold value. The proportions of samples positive for aerobic Bacillus and thermophilic aerobic Bacillus were 54.02% (235/435) and 7.36% (32/435) in raw milk, respectively. Approximately 36.18% (157/434) of raw milk samples contained >500,000 mU/L (5.70 log 10 mU/L) of alkaline phosphatase activity. For the pasteurized milk samples, the mean and median values of the aerobic plate count were far below the threshold (5.00 log 10 CFU/mL). However, 2.22% (10/451) of samples showed a contamination level higher than the threshold value, indicating poor hygienic status. The proportions of samples positive for aerobic Bacillus and thermophilic aerobic Bacillus were 14.41% (65/451) and 4.88% (22/451) in pasteurized milk, respectively. The maximum counts of aerobic Bacillus and thermophilic aerobic Bacillus were 5.30 log 10 CFU/mL and 3.81 log 10 CFU/mL, respectively. Approximately 9.71% (43/443) of pasteurized milk samples contained >350 mU/L (2.54 log 10 mU/L) of alkaline phosphatase activity. Of these 43 samples, only 2 (4.65%) showed an aerobic Bacillus count >1 log 10 CFU/mL, and 1 (2.33%) showed thermophilic aerobic Bacillus above the LOD. The contamination levels of the sterilized milk samples were very low. The proportion of samples with a detectable aerobic plate count was 4.05% (25/617), and the proportion contaminated with aerobic Bacillus was 1.30% (8/617).
The maximum values were 2.32 log10 CFU/mL and 1.34 log10 CFU/mL for the aerobic plate count and the aerobic Bacillus count, respectively. No thermophilic aerobic Bacillus was counted in the samples. 3.2. Correlation Analysis of Microbial Contamination and Alkaline Phosphatase Activity The correlogram based on the Spearman correlation analysis revealed the pairwise relationships among the aerobic plate count, the counts of aerobic Bacillus and thermophilic aerobic Bacillus, and the alkaline phosphatase activity for raw milk ( ) and pasteurized milk ( ). For raw milk, there were positive correlations between the aerobic plate count, the aerobic Bacillus abundance, and the alkaline phosphatase activity (all p < 0.05), and the aerobic Bacillus abundance had positive correlations with the thermophilic aerobic Bacillus count and the alkaline phosphatase activity (both p < 0.05). In raw milk, the thermophilic aerobic Bacillus count showed no significant correlation with either the aerobic plate count or the alkaline phosphatase activity (both p > 0.05). The largest correlation coefficient (ρ = 0.29) was observed for the positive correlation between the aerobic plate count and the aerobic Bacillus abundance. For pasteurized milk, there were positive correlations between the aerobic plate count, the aerobic Bacillus abundance, and the thermophilic aerobic Bacillus count (all p < 0.05); however, the alkaline phosphatase activity had weak, non-significant negative correlations with the aerobic plate count, the aerobic Bacillus abundance, and the thermophilic aerobic Bacillus abundance (all p > 0.05). Similar to raw milk, the largest correlation coefficient (ρ = 0.36) was observed for the positive correlation between the aerobic plate count and the aerobic Bacillus abundance.
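The Spearman analysis above was run in R; the same test, with below-LOD values set to 0 as described in the methods, can be sketched in Python using SciPy. The paired values below are illustrative only, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

LOD = 1.0  # log10 CFU/mL detection limit of the plate-count method

# Illustrative paired measurements (log10 CFU/mL); values below the LOD
# are recorded as 0, mirroring the handling described in the methods.
aerobic_plate_count = np.array([5.2, 0.0, 4.8, 6.1, 3.9, 0.0, 5.5, 4.2])
bacillus_count      = np.array([2.1, 0.0, 1.8, 3.0, 0.0, 0.0, 2.5, 1.2])

# Spearman's test is rank-based, so it tolerates the tied zeros and the
# non-normal distribution of microbial counts.
rho, p_value = spearmanr(aerobic_plate_count, bacillus_count)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A correlation is called significant at p < 0.05, as in the study.
```

Because counts are heavily censored at the LOD, a rank correlation is a common pragmatic choice here, though substituting 0 for censored values does compress the low end of the ranking.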
Raw milk safety is crucial for both farmers’ income and human health, as well as for consumers paying more for safer dairy products . Microbial contamination of raw milk originating from farms can increase spoilage and wastage and adversely affect producers, traders, and consumers . The aerobic plate count is an important criterion for evaluating the microbial quality of raw milk as well as the degree of food freshness .
This criterion reflects the standards of primary operation procedures, including collection, transportation, and storage . In our survey, nearly 10% of raw milk samples exceeded the Chinese legal limit (≤2 × 10⁶ CFU/mL). Although this limit is far less stringent than those of the EU and USA (<1 × 10⁵ CFU/mL) , these results indicate that more attention should be given at the farm level, as most microorganisms are introduced into final dairy products at this stage. Pasteurization does not have a significant impact on the nutritional value of raw milk . This process is essential for ensuring the safety of milk because it inactivates all non-spore-forming pathogens and most other vegetative microbes, thereby increasing safety and extending shelf-life. Factors affecting the shelf-life of pasteurized milk include storage temperature, post-pasteurization contamination, growth behaviours of contaminating bacteria, and the incidence of Bacillus occurrence . In most countries, the legal limits for the aerobic plate count in pasteurized milk range from 5 × 10³ to 5 × 10⁵ CFU/mL . According to the legal limit of China (<1 × 10⁵ CFU/mL), approximately 2.22% of samples exceeded this threshold value, indicating poor hygienic status in those samples. The acceptance limit for the aerobic plate count in pasteurized milk is ≤2 × 10⁴ CFU/mL . Although milk pasteurization is regarded as an effective method to eliminate foodborne pathogens, some reports of pathogen contamination in pasteurized milk clearly indicate that pasteurization alone is not the ultimate solution for the control of milk-borne pathogens [ , , ]. However, pasteurization remains an optimal method that minimizes bacterial contamination while maintaining high nutritive value . Sterilized milk is produced by heating milk through an ultra-high-temperature process. This process destroys nearly all microbes in milk, thereby extending the shelf-life .
Under the Chinese national standard for sterilized milk (GB25190-2010), such products must meet commercial sterility requirements . In this survey, only about 4% of sterilized milk samples showed an aerobic plate count ≥1 log10 CFU/mL. A previous study demonstrated that most microorganisms present in sterilized milk were heat-treatment-resistant strains or those that originated from post-sterilization contamination . The genus Bacillus is capable of overcoming the heat barrier during the sterilization of milk . Some of these microorganisms can produce highly heat-resistant endospores, which may survive the ultra-high-temperature process . Aerobic Bacillus can not only affect the shelf-life of pasteurized and sterilized milk but is also associated with defects such as off-flavors, sweet curdling, and bitter cream . A previous study demonstrated that soiling on the udder and the teats is the major source of aerobic Bacillus contamination . In this study, the proportions of aerobic Bacillus in raw milk, pasteurized milk, and sterilized milk were 54.02%, 14.41%, and 1.30%, respectively. Our results also showed that the proportions of thermophilic aerobic Bacillus were 7.36% and 4.88% in raw milk and pasteurized milk, respectively. A survey conducted in Tunisia found a high degree of diversity, both phenotypic and genotypic, among Bacillus isolates from milk samples. Seven Bacillus species, predominantly Bacillus cereus, were identified in pasteurized and sterilized milk and posed a risk of milk-borne illness to consumers . Thus, it is important to minimize aerobic Bacillus contamination at the farm level. Moreover, our results showed that the aerobic plate count had a positive correlation with the aerobic Bacillus count in raw milk, and positive correlations with the aerobic Bacillus and thermophilic aerobic Bacillus counts in pasteurized milk.
Thus, good hygiene practices for the interior and exterior of the udder and for milking instruments must be implemented during the milk production process. Alkaline phosphatase is a heat-sensitive enzyme found in raw milk that is used as a marker for the efficacy of thermal pasteurization. The absence of alkaline phosphatase activity has been used to confirm the adequacy of pasteurization in dairy products for the past 80 years. A product that contains only a small amount of active enzyme, or none at all, is considered properly pasteurized . In our survey, a total of 36.18% of raw milk samples contained >500,000 mU/L alkaline phosphatase activity, indicating that active enzyme was widespread. However, this value decreased dramatically for pasteurized milk, with only 9.71% of samples containing >350 mU/L alkaline phosphatase activity based on EU guidelines (nos. 1664/2006 and 2074/2005), showing a good heat-treatment effect. In the study of Ziobro and McElroy , nearly 14% (5/36) of pasteurized milk products showed >350 mU/L enzyme activity, which was higher than the value in this study. The thermal denaturation parameters for alkaline phosphatase activity in milk were similar to those of heat-resistant milk pathogens . Our results showed that alkaline phosphatase activity had weak positive correlations with the aerobic plate count and the aerobic Bacillus count in raw milk. By contrast, alkaline phosphatase activity showed weak negative correlations with the aerobic plate count, the aerobic Bacillus count, and even the thermophilic aerobic Bacillus count in pasteurized milk. Alkaline phosphatase has greater heat resistance than most aerobic microorganisms; therefore, most microorganisms are inactivated more rapidly than this enzyme during thermal treatment. Notably, alkaline phosphatase activity showed a weak negative correlation with the thermophilic aerobic Bacillus count even though these microorganisms have greater thermal resistance.
These results indicate that alkaline phosphatase activity had only a weak correlation with the microbial loads. Thus, enzyme activity is an indicator only of the temperature and time of the heat treatment, not of the microorganisms present. A previous study indicated that the alkaline phosphatase activity assay is affected by many factors, including the composition of the product and the presence of microbial alkaline phosphatase . However, our results showed that of the 43 (9.71%) pasteurized milk samples containing alkaline phosphatase activity >350 mU/L, only 2.22% of samples showed an aerobic plate count exceeding the legal limit (1 × 10⁵ CFU/mL), and 4.65% and 2.33% of samples showed aerobic Bacillus and thermophilic aerobic Bacillus counts >1 log10 CFU/mL, respectively. These results illustrate that alkaline phosphatase activity was a good indicator of the effectiveness of thermal pasteurization. Thus, detection and identification of the species of thermophilic aerobic Bacillus are necessary for pasteurized milk samples with high alkaline phosphatase activity. The output and consumption of dairy products in China have increased greatly in recent decades, and Chinese consumers have expressed increased demand for safe and healthy dairy products. Compared with other foods, dairy products are easily contaminated with microorganisms at the farm, transportation, processing, and storage stages and are subject to rapid deterioration, which can lead to foodborne disease. Thus, further national-scale surveys of microbial contamination are needed for raw milk and dairy products. To our knowledge, this is the first study to evaluate the microbiological quality of raw milk and dairy products from 13 major dairy production and consumption provinces in China. Our results indicate that raw milk showed high microorganism contamination; however, the pasteurized milk and sterilized milk were adequately heat-treated.
In this study, we also improved our understanding of the correlation between microbial contamination and alkaline phosphatase activity. We recommend that further studies be carried out to identify the contamination sources and the microorganism species contaminating pasteurized milk and sterilized milk.
Exploring health literacy development through co-design: understanding the expectations for health literacy mediators
11b76042-8778-40e7-bb80-e74d6a4c04fc
11879027
Health Literacy[mh]
Understanding health inequities Health inequities are systematic differences in health status among individual population groups, influenced by factors such as the social determinants of health (SDH) . These disparities stem from historical and contemporary inequities shaped by societal structures and unequal distribution of power and resources . They negatively impact some individuals and societies, leading to poor health outcomes, economic costs, and social disparities . Health inequities are closely linked to non-communicable diseases (NCDs) . Globally, NCDs are a significant health concern, responsible for over 40 million deaths annually, with cardiovascular disease being the leading cause, followed by cancers, respiratory diseases, and diabetes . These diseases are preventable and often linked to modifiable lifestyle factors such as smoking, alcohol use, physical inactivity, and unhealthy diets . Addressing NCDs and health inequities involves coordinated national and international action, focusing on modifiable risk factors, improving access to high-quality chronic care management, and understanding root causes (such as health literacy [HL]) to then inform policies that reduce these disparities and meet the needs of the population, and especially vulnerable groups within it . Health literacy as a key to equity HL plays a crucial role in addressing both health inequity and NCDs by empowering individuals to understand health information, make informed decisions, and engage in self-management. Efforts to improve HL, both traditional and digital, are essential for promoting better health outcomes and reducing the burden of NCDs globally . The current state of HL within the Australian population reveals both strengths and challenges . People with greater HL challenges often experience adverse health outcomes, increased hospitalizations, and poorer health behaviours than those with fewer such challenges . 
Improving the HL environment through effective communication strategies, embedding HL into policies, and ensuring accessible information can enhance health outcomes and quality of care . Knowing more about an individual’s and a community’s HL provides an important foundation when creating strategies to strengthen or maintain HL assets. HL assets can refer to the skills, knowledge, and resources individuals and communities possess to access, understand, appraise, and use health information effectively; these assets are vital as they empower people to make informed decisions, navigate healthcare systems, and engage in health-promoting behaviours . Whether an individual has the HL assets required to manage their health reflects their HL strengths and challenges. This paper will investigate a new role focused on creating HL learning opportunities within a community, evaluating the support, expectations, and requirements for this role to inform future implementation. Enhancing HL assets can lead to better health outcomes, bolster health promotion initiatives, and improve overall well-being. Health promotion and HL are distinct yet complementary concepts that together can contribute to improving overall health outcomes. Health promotion focuses on enabling individuals and communities to increase control over and improve their health through broad actions aimed at addressing social, environmental, and individual factors . This includes implementing policies, providing education, and creating supportive environments that facilitate healthier choices . In contrast, HL refers to an individual’s capacity to obtain, process, and understand basic health information needed to make appropriate health decisions . It encompasses people’s knowledge, motivation, and competences to access, understand, appraise, and apply health information effectively . While these concepts differ in their scope and focus, they complement each other in several ways.
HL serves as a foundation for effective health promotion, as individuals with stronger HL assets are better equipped to engage with and benefit from health promotion activities . Conversely, health promotion efforts often aim to improve HL as one of their outcomes, enhancing people’s health knowledge and skills through various educational initiatives. Both concepts share the ultimate goal of empowering individuals and communities to take control of their health and are critical for addressing health inequities and achieving broader health and development goals . HL can be viewed as both an outcome of health promotion efforts and a tool that enables further health promotion . In essence, while health promotion provides broader strategies and actions to improve health, HL equips individuals and communities with the skills to effectively engage with these efforts and make informed health decisions. Together, they can create a more comprehensive approach to improving population health. The role of co-design in health promotion Co-designed and community-led health promotion interventions have gained recognition as an effective strategy for addressing complex health issues while ensuring cultural appropriateness and local relevance. This collaborative approach involves engaging community members, researchers, policy-makers, and other stakeholders throughout the development and implementation of health initiatives . By embracing co-design, health promotion efforts can better address health inequities, enhance cultural competence, and lead to more effective and sustainable health outcomes . Additionally, these approaches consider varying levels of HL within communities, making information and interventions accessible and understandable to all . International groups such as the WHO promote using a co-design process to co-design HL solutions . 
An example of this is the Optimizing Health Literacy and Access (Ophelia) process, which aims to improve HL and equitable access to healthcare by implementing locally tailored, evidence-informed solutions in collaboration with communities and stakeholders . This approach begins by assessing the HL requirements of the intended population using the Health Literacy Questionnaire (HLQ) . The HLQ was created to capture the multi-dimensional nature of HL . The Ophelia approach then utilizes data-driven vignettes (case studies derived from HLQ data) to illustrate and convey the HL needs of the target population. This approach has been successful in the co-design of ideas to enhance HL assets, responsiveness, and outcomes in numerous settings . Given that the international literature above highlights both that communities are experiencing significant HL challenges and that health promotion efforts must be cognisant of HL in their design, the concept of a Health Literacy Mediator (HLM) has been inspired by the Marmot Review: Fair Society, Healthy Lives , which highlighted the success of local health trainers and community champions in empowering individuals to manage their health. Similar roles have already been explored in Eastern Europe: for example, health mediators have been effective in bridging healthcare access for Roma communities (Roma Health Mediators Project), and in Hungary, the integration of health mediators into multidisciplinary teams has shown success in addressing complex health needs and building trust . Furthermore, various health-support roles such as health navigators, health connectors, health coaches, and health advocates have emerged internationally, reflecting a growing focus on improving, adapting, and developing HL practices. For example, health navigators, also known as patient navigators or care coordinators, help individuals overcome barriers to care by connecting them with healthcare providers and community resources .
Health coaches use evidence-based strategies and techniques, such as motivational interviewing, to support patients in achieving health goals and integrating healthy habits into their lives . Health advocates provide case management-like support, helping individuals ask questions and navigate the complexities of healthcare systems . Health connectors focus on addressing inequities by building social support networks for individuals and carers . Building on these foundational ideas, the current research team has expanded and formalized the new conception of the HLM role to address the specific needs and context of the Tasmanian community. An HLM has been defined as ‘a person or group of people dedicated to providing learning experiences and opportunities to enable individuals and communities to overcome inequities perpetuated by their social determinants and increase their HL assets to improve their health outcomes’. This definition of the role indicates a holistic approach to supporting an individual’s healthcare journey, with a significant focus upon building autonomous capacity for all individuals, addressing local health inequities, and targeting those disadvantaged by their SDH . The HLM role aims to improve comprehensive HL, beyond just healthcare access, and to actively engage in health promotion with individuals, organizations, and policy-makers in the local community. Community expectations of healthcare typically encompass accessible, affordable, and high-quality services . Additionally, communities desire healthcare systems that are culturally sensitive, equitable, and inclusive, ensuring that all individuals, regardless of background or socioeconomic status, receive adequate care . There is also an expectation for healthcare to be proactive in promoting health and preventing diseases through education and community-based interventions .
Investigating the role of HLMs is important due to their potential to impact an individual’s or community’s health outcomes positively. HLMs could bridge the gap between healthcare providers and patients, ensuring that individuals understand health information and can utilize and navigate the healthcare system effectively. This is particularly important for managing and preventing NCDs, which require ongoing patient engagement and self-management. By improving HL assets, HLMs could empower individuals to make informed health decisions, adhere to treatment plans, and adopt healthier lifestyles. This empowerment may lead to better management of chronic conditions, reduced hospital readmissions, and overall improved health outcomes . Moreover, HLMs could play a pivotal role in addressing health inequities by targeting interventions towards disadvantaged populations, thus ensuring that HL improvements are inclusive and equitable . This is why this study aims to co-design the emerging HLM role with various stakeholders working in health and health-related settings across diverse Tasmanian regions. This will be achieved by assessing the support, expectations, and need for such a role via online workshops.
These diseases are preventable and often linked to modifiable lifestyle factors such as smoking, alcohol use, physical inactivity, and unhealthy diets . Addressing NCDs and health inequities involves coordinated national and international action, focusing on modifiable risk factors, improving access to high-quality chronic care management, and understanding root causes (such as health literacy [HL]) to then inform policies that reduce these disparities and meet the needs of the population, and especially vulnerable groups within it . HL plays a crucial role in addressing both health inequity and NCDs by empowering individuals to understand health information, make informed decisions, and engage in self-management. Efforts to improve HL, both traditional and digital, are essential for promoting better health outcomes and reducing the burden of NCDs globally . The current state of HL within the Australian population reveals both strengths and challenges . People with greater HL challenges often experience adverse health outcomes, increased hospitalizations, and poorer health behaviours than those with fewer such challenges . Improving the HL environment through effective communication strategies, embedding HL into policies, and ensuring accessible information can enhance health outcomes and quality of care . Knowing more about an individual’s and a community’s HL provides an important foundation when creating strategies to strengthen or maintain HL assets. HL assets can refer to the skills, knowledge, and resources individuals and communities possess to access, understand, appraise, and use health information effectively; these assets are vital as they empower people to make informed decisions, navigate healthcare systems, and engage in health-promoting behaviours . Whether an individual has the required HL assets required to manage their health, may reflect on their HL strengths and challenges. 
This paper will investigate a new role focused on creating HL learning opportunities within a community, evaluating the support, expectations, and requirements for this role to inform future implementation. Enhancing HL assets can lead to better health outcomes, bolster health promotion initiatives, and improve overall well-being. Health promotion and HL are distinct yet complementary concepts that together can contribute to improve overall health outcomes. Health promotion focuses on enabling individuals and communities to increase control over and improve their health through broad actions aimed at addressing social, environmental, and individual factors . This includes implementing policies, providing education, and creating supportive environments that facilitate healthier choices . In contrast, HL refers to an individual’s capacity to obtain, process, and understand basic health information needed to make appropriate health decisions . It encompasses people’s knowledge, motivation, and competences to access, understand, appraise, and apply health information effectively . While these concepts differ in their scope and focus, they complement each other in several ways. HL serves as a foundation for effective health promotion, as individuals with stronger HL assets are better equipped to engage with and benefit from health promotion activities . Conversely, health promotion efforts often aim to improve HL as one of their outcomes, enhancing people’s health knowledge and skills through various educational initiatives. Both concepts share the ultimate goal of empowering individuals and communities to take control of their health and are critical for addressing health inequities and achieving broader health and development goals . HL can be viewed as both an outcome of health promotion efforts and a tool that enables further health promotion . 
In essence, while health promotion provides broader strategies and actions to improve health, HL equips individuals and communities with the skills to effectively engage with these efforts and make informed health decisions. Together, they can create a more comprehensive approach to improving population health. Co-designed and community-led health promotion interventions have gained recognition as an effective strategy for addressing complex health issues while ensuring cultural appropriateness and local relevance. This collaborative approach involves engaging community members, researchers, policy-makers, and other stakeholders throughout the development and implementation of health initiatives . By embracing co-design, health promotion efforts can better address health inequities, enhance cultural competence, and lead to more effective and sustainable health outcomes . Additionally, these approaches consider varying levels of HL within communities, making information and interventions accessible and understandable to all . International groups such as the WHO promote using a co-design process to co-design HL solutions . An example of this is Optimizing Health Literacy and Access (Ophelia) process, which aims to improve HL and equitable access to healthcare by implementing locally tailored, evidence-informed solutions in collaboration with communities and stakeholders . This approach begins by assessing the HL requirements of the intended population using the Health Literacy Questionnaire (HLQ) . The HLQ was created to capture the multi-dimensional nature of HL . The Ophelia approach then utilizes data-driven vignettes (case studies derived from HLQ data) to illustrate and convey the HL needs of the target population. This approach has been successful in the co-design of ideas to enhance the HL assets, responsiveness, and outcomes in numerous settings . 
Given the international literature above highlights that communities are experiencing significant HL challenges health promotion efforts must be cognisant of HL in their design, the concept of a Health Literacy Mediator (HLM) has been inspired by the Marmot Review: Fair Society, Healthy Lives , which highlighted the success of local health trainers and community champions in empowering individuals to manage their health. Similar roles have already been explored in Eastern Europe, for example, health mediators have been effective in bridging healthcare access for the Roma communities (Roma Health Mediators Project), and in Hungary, the integration of health mediators as part of multidisciplinary teams has shown success in addressing complex health needs and building trust . Furthermore, various health-support roles such as health navigators, health connectors, health coaches, and health advocates have emerged internationally, reflecting a growing focus on improving, adapting, and developing HL practices. For example, health navigators, also known as patient navigators or care coordinators, help individuals overcome barriers to care by connecting them with healthcare providers and community resources . Health coaches use evidence-based strategies and techniques, such as motivational interviewing, to support patients in achieving health goals and integrating healthy habits into their lives . Health advocates provide case management-like support, helping individuals ask questions and navigate the complexities of healthcare systems . Health connectors focus on addressing inequities by building social support networks for individuals and carers . Building on these foundational ideas, the current research team has expanded and formalized the new conception of the HLM role to address the specific needs and context of the Tasmanian community. 
An HLM has been defined as ‘a person or group of people dedicated to providing learning experiences and opportunities to enable individuals and communities to overcome inequities perpetuated by their social determinants and increase their HL assets to improve their health outcomes’ . This definition indicates a holistic approach to supporting an individual’s healthcare journey, with a significant focus on building autonomous capacity for all individuals, addressing local health inequities, and targeting those disadvantaged by their SDH . The HLM role aims to improve comprehensive HL, beyond just healthcare access, and to actively engage in health promotion with individuals, organizations, and policy-makers in the local community. Community expectations of healthcare typically encompass accessible, affordable, and high-quality services . Additionally, communities desire healthcare systems that are culturally sensitive, equitable, and inclusive, ensuring that all individuals, regardless of background or socioeconomic status, receive adequate care . There is also an expectation for healthcare to be proactive in promoting health and preventing diseases through education and community-based interventions . Investigating the role of HLMs is important due to their potential to positively impact individual and community health outcomes. HLMs could bridge the gap between healthcare providers and patients, ensuring that individuals understand health information and can utilize and navigate the healthcare system effectively. This is particularly important for managing and preventing NCDs, which require ongoing patient engagement and self-management. By improving HL assets, HLMs could empower individuals to make informed health decisions, adhere to treatment plans, and adopt healthier lifestyles. This empowerment may lead to better management of chronic conditions, reduced hospital readmissions, and overall improved health outcomes .
Moreover, HLMs could play a pivotal role in addressing health inequities by targeting interventions towards disadvantaged populations, thus ensuring that HL improvements are inclusive and equitable . Accordingly, this study aims to co-design the emerging HLM role with various stakeholders working in health and health-related settings across diverse Tasmanian regions. This will be achieved by assessing the support, expectations, and need for such a role via online workshops. A collaborative constructivist approach was employed in this research. This approach was selected to explore and co-construct meaning from the data through active collaboration amongst researchers and participants . The project received ethics approval from the University of Tasmania Research Ethics Committee (Approval Number H0026170). All participants were required to read an information sheet and give electronic and verbal consent prior to admission to the workshop. They were aware that, whilst within the workshop, they were not anonymous to each other, but that any data gathered during the discussion would be de-identified. Participants and recruitment The study setting for this research was Tasmania, Australia. HL levels vary across Australia, with Tasmania experiencing some of the lowest health and educational outcomes, as highlighted by the Optimising Health Care for Tasmanians Report, which underscores the state’s challenges in addressing preventable chronic diseases, socioeconomic disadvantage, and educational attainment . Due to the online nature of the workshops, participants from anywhere in the state could take part. Public health professionals, healthcare providers, managers, and allied health professionals working in Tasmania’s health sector were recruited through purposive and snowball sampling methods . Recruitment occurred via the Tasmanian Health Literacy Network, the Tasmanian Health Department, and the research team’s professional networks.
These stakeholders were selected for their relevant knowledge, experience, and interest in the project. Initial contact was made through an email from the research team, which included a brief study description and a registration form for participation in the co-design workshops. Interested individuals were then sent detailed participant information sheets and consent forms. The research team also encouraged these stakeholders to disseminate the workshop details within their own professional networks to increase participation. A total of 15 stakeholders participated and chose one of two identical workshops to attend (Workshop 1, n = 8; Workshop 2, n = 7). The stakeholders represented a range of different sectors including the Department of Health, the University of Tasmania, and not-for-profit organizations, as summarized in . Participants came from around the state, with nine from Southern Tasmania, four from Northern Tasmania, and two from the Northwest Coast. The majority of the participants who took part in the workshops were women ( n = 14). Data collection Consistent with the Ophelia approach, the data for this phase of the research project were gathered from focus group discussions within online workshops. Gathering data through co-design workshops aligns with one of the steps within the Ophelia process, where stakeholders collaboratively design tailored HL interventions based on identified community needs . Two online workshops were conducted in March 2023 on Microsoft Teams, a video conferencing platform . Each ran for approximately 1 hour, starting with an introduction to the overall project and an overview of the HLQ survey results from previous phases of the research project . Following this, data-informed vignettes were shared with the group.
These vignettes were created specifically for this study from a cluster analysis of the HLQ data ( n = 255) and interview data ( n = 14) representing the HL strengths and challenges of the target population . The following questions were presented with the aim of generating discussion to identify local solutions that could respond to the needs of the individuals and families personified in the vignette(s). The questions were: Do you know anyone like this individual or this family in your community? What are the main barriers that this individual/family is facing? What can be done to help this individual/family? How might an HLM assist in these solutions? Should an HLM role be an extension of what already exists or a new role? Would improving HL assets from an earlier age impact these situations? Participants were encouraged to use their microphones and the chat function to contribute to discussions and share ideas during the workshop. Both workshops were digitally recorded, with consent obtained from all participants beforehand. MS conducted all the workshops, with RN serving as the co-facilitator. Throughout and at the conclusion of each workshop, both the facilitator and co-facilitator made observational notes on the discussions that had occurred. Data analysis For the analysis of the data in this qualitative study, a thematic analysis was employed, as described by . This method was used to provide insights into how the key stakeholders who participated in the workshops conceptualize an HLM from within their specific contexts. This single thematic analysis involved the six distinct phases described by . Initially, for Phase One, MS and IC (student researcher) collaboratively immersed themselves in the data.
This familiarization process included transcribing the workshop discussions verbatim via auto-generation within the Microsoft Teams software and combining that with the researchers’ observational notes and comments that participants had noted in the chat box. It also involved repeatedly listening to audio recordings and thoroughly re-reading the transcripts. Key information from each transcript was highlighted and systematically recorded in an Excel spreadsheet. Subsequently, for Phase Two, IC developed codes. An inductive approach was adopted, beginning with real-world observations, identifying patterns, and formulating theories based on these patterns. The coding process was repeated multiple times, focusing on the data while considering the influence of prior knowledge from earlier readings on the topic. As the analysis progressed and entered Phase Three, IC conducted a theme search, using the codes as foundational elements to group and refine them into preliminary themes. These initial themes were reviewed and discussed with MS, ensuring that the most pertinent points were captured and aligned with the research objectives as per Phases Four and Five. The final step, Phase Six, involved gathering all qualitative responses, revising the original themes through discussions amongst all authors, and refining and defining the themes to be reported. Through this single thematic analysis of the workshop discussions, multiple themes were identified; these are reported in and using a contemporary ‘infographic’ structure to display the findings. Included example quotes were selected for their clarity and precision in reflecting one of the defined themes, although other participants provided similar responses .
All of the participants could relate to the families presented in each of the cases, with multiple comments identifying how realistic the scenarios were for people living within their community. For example: I could identify with this case study (vignette). We all experience our own health issues and those of our family members from time to time. And sometimes it can be very challenging to find the time and know where to go to get the help you need in that moment. So, I think it’s actually quite a common problem for anyone that has to engage with the health system— Participant 2. Discussions led to the key stakeholders voicing their thoughts and concerns with the current healthcare system, which then allowed us to identify barriers that the individuals and their families in each vignette were facing. Discussion then moved into how the emerging HLM role may assist in overcoming these issues. All stakeholders voiced their opinions on what the expectation of the HLM role should be and how they could see the role helping their own communities. Barriers to healthcare Through analysis of stakeholder discussions, four major themes emerged that encapsulate the barriers to healthcare within the vignettes presented: Theme One: Time Both individual and systemic challenges exist in the time taken to access healthcare. On an individual level, limited time due to work, caregiving, and other responsibilities often causes healthcare to be deprioritized. Systemically, long waiting times, limited after-hours services, and delays in accessing specialists or diagnostic tests exacerbate these issues. These barriers are particularly pronounced in rural or regional areas, where healthcare requires additional travel time.
Theme Two: Navigating and understanding healthcare Individuals often face challenges in understanding the available options when seeking healthcare, which can make it difficult to ask the right questions to obtain appropriate care. This issue is exacerbated by limited services throughout healthcare, particularly in regional areas. Theme Three: Access to the right healthcare It is crucial for individuals to find the right healthcare provider who can offer the necessary information to manage their individual or family’s healthcare needs. Access to care is further shaped by financial and social resources. Theme Four: Expectation of healthcare Individuals expect to be listened to by healthcare professionals. However, when they feel that this is not happening, they are put off and become disengaged from the system. Additionally, societal attitudes can play a role in shaping what is considered socially acceptable regarding illness and chronic disease. This can alter both how healthcare professionals discuss these topics and how individuals seek help for themselves. For each of the themes identified, there were a number of subthemes relating to the perceived barriers to healthcare. The themes, subthemes, and some example quotes can be seen in . Co-design of the HLM position Building on the identified barriers, discussions then transitioned to how an HLM could play a transformative role in mitigating these challenges and improving overall HL. Stakeholders emphasized four primary expectations for the HLM role: Expectation One: Solution-focused The HLM role should be solution-focused, using their understanding of individual and community barriers to address healthcare challenges. Stakeholders emphasized that the HLM should empower individuals, families, and communities by providing essential learning opportunities. A key responsibility of the HLM would be improving the efficiency of both individuals’ and healthcare organizations’ time.
By streamlining processes and facilitating access to resources, the HLM could help people navigate the complex healthcare system more easily. As a reliable point of contact for health-related inquiries, the HLM would reduce confusion and simplify access to information. Additionally, the HLM could advocate for flexible, community-tailored healthcare solutions. By understanding the specific needs of the population, they could help design strategies that are culturally sensitive and effective. The presence of a trusted community member in this role fosters trust and reliability. Equitable healthcare is central to the HLM’s role, ensuring that all individuals, regardless of background, have equal access to healthcare resources and are empowered to make informed decisions. By addressing health inequities driven by SDH, the HLM can promote a more inclusive healthcare environment. Expectation Two: Duty to facilitate change An HLM should play a key role in connecting individuals, families, and communities with healthcare services, bridging gaps between people, healthcare providers, and community organizations. By fostering open communication, the HLM would create safe spaces where individuals feel heard and empowered to ask questions and to make informed health decisions. Their role would involve building trusting relationships and advocating for inclusive, respectful care that addresses the diverse needs of the community. Their role could extend beyond mere advocacy: they could provide wrap-around support by possessing in-depth knowledge of the healthcare system, actively listening to community members’ problems, strengths, and queries, and working to break down existing barriers to healthcare access. Expectation Three: Community-based role The key stakeholders wanted to see an HLM as someone in a community who serves as a crucial link between individuals and the healthcare system, enhancing the community’s overall HL.
This position could be effectively filled by expanding the responsibilities of existing roles such as nurses, school nurses, teachers, or more specialized positions like migrant health workers and Aboriginal health workers. Building the capacity of these individuals could allow them to step into the role of HLM, utilizing their existing trust and presence within the community. Additionally, larger organizations such as state libraries, universities, and not-for-profits could support these mediators through outreach initiatives, providing resources and training to enhance their effectiveness. The community-based nature of this role would ensure greater success and sustainability, as the HLM could tailor their approaches to the specific needs and cultural contexts of their communities. Expectation Four: Targeted position to have the most impact The final expectation was that an HLM could play a pivotal role in enhancing HL by intervening early in an individual’s life and being available to individuals during their childhood education years. By providing guidance and education before health needs arise, an HLM would help foster a deeper understanding of health-related issues. This proactive approach could be particularly effective in settings such as schools, youth groups, and teenage-specific programmes, where young people are developing independence and forming lifelong habits. By equipping these individuals with the necessary HL skills, they can become informed decision-makers, capable of navigating the healthcare system. Furthermore, the stakeholders identified that as these individuals share their knowledge within their networks, a ripple effect occurs, leading to a more health-literate community overall. Discussion around how the role could impact the individuals and families presented in the vignettes not only produced the four strong expectations above, but also produced other recommendations and supporting ideas for the role. These can be visualized in .
Through analysis of stakeholder discussions, four major themes emerged that encapsulate the barriers to healthcare within the vignettes presented: Theme One: Time Both individual and systemic challenges in the time to access healthcare exist. On an individual level, limited time due to work, caregiving, and other responsibilities often causes healthcare to be deprioritized. Systemically, long waiting times, limited after-hours services, and delays in accessing specialists or diagnostic tests exacerbate these issues. These barriers are particularly pronounced in rural or regional areas, where healthcare requires additional travel time. Theme Two: Navigating and understanding healthcare Individuals often face challenges in understanding the available options when seeking healthcare, which can make it difficult to ask the right questions to obtain appropriate care. This issue is exacerbated by limited services throughout healthcare, particularly in regional areas. Theme Three: Access to the right healthcare It is crucial for individuals to find the right healthcare provider who can offer the necessary information to manage their individual or family’s healthcare needs. Access to care is further altered by financial and social resources. Theme Four: Expectation of healthcare Individuals expect to be listened to by healthcare professionals. However, when they feel that this is not happening, they are put off and become disengaged with the system. Additionally, societal attitudes can play a role in shaping what is considered socially acceptable regarding illness and chronic disease. This can alter both how healthcare professionals discuss the topics but also how individuals seek help for themselves. For each of the themes that were identified, there were a number of subthemes that related to the perceived barriers to healthcare. The themes, subthemes, and some example quotes can be seen in . Both individual and systemic challenges in the time to access healthcare exist. 
On an individual level, limited time due to work, caregiving, and other responsibilities often causes healthcare to be deprioritized. Systemically, long waiting times, limited after-hours services, and delays in accessing specialists or diagnostic tests exacerbate these issues. These barriers are particularly pronounced in rural or regional areas, where healthcare requires additional travel time. Individuals often face challenges in understanding the available options when seeking healthcare, which can make it difficult to ask the right questions to obtain appropriate care. This issue is exacerbated by limited services throughout healthcare, particularly in regional areas. It is crucial for individuals to find the right healthcare provider who can offer the necessary information to manage their individual or family’s healthcare needs. Access to care is further altered by financial and social resources. Individuals expect to be listened to by healthcare professionals. However, when they feel that this is not happening, they are put off and become disengaged with the system. Additionally, societal attitudes can play a role in shaping what is considered socially acceptable regarding illness and chronic disease. This can alter both how healthcare professionals discuss the topics but also how individuals seek help for themselves. For each of the themes that were identified, there were a number of subthemes that related to the perceived barriers to healthcare. The themes, subthemes, and some example quotes can be seen in . Building on the identified barriers, discussions then transitioned to how an HLM could play a transformative role in mitigating these challenges and improving overall HL. Stakeholders emphasized four primary expectations for the HLM role: Expectation One: Solution-focused The HLM role should be solution-focused, using their understanding of individual and community barriers to address healthcare challenges. 
Stakeholders emphasized that the HLM should empower individuals, families, and communities by providing essential learning opportunities. A key responsibility of the HLM would be improving the efficiency of both individuals’ and healthcare organizations’ time. By streamlining processes and facilitating access to resources, the HLM could help people navigate the complex healthcare system more easily. As a reliable point of contact for health-related inquiries, the HLM would reduce confusion and simplify access to information. Additionally, the HLM could advocate for flexible, community-tailored healthcare solutions. By understanding the specific needs of the population, they could help design strategies that are culturally sensitive and effective. The presence of a trusted community member in this role fosters trust and reliability. Equitable healthcare is central to the HLM’s role, ensuring that all individuals, regardless of background, have equal access to healthcare resources, and are empowered to make informed decisions. By addressing health inequities driven by SDH, the HLM can promote a more inclusive healthcare environment. Expectation Two: Duty to facilitate change An HLM should play a key role in connecting individuals, families, and communities with healthcare services, bridging gaps between people, healthcare providers, and community organizations. By fostering open communication, the HLM would create safe spaces where individuals feel heard and empowered to ask questions, and making informed health decisions. Their role would involve building trusting relationships and advocating for inclusive, respectful care that addresses the diverse needs of the community. Their role could then extend beyond mere advocacy; they could provide wrap-around support by possessing in-depth knowledge of the healthcare system, actively listening to community members’ problems, strengths, and queries, and working to break down existing barriers to healthcare access. 
Expectation Three: Community-based role The key stakeholders wanted to see an HLM as someone in a community who serves as a crucial link between individuals and the healthcare system, enhancing the community’s overall HL. This position could be effectively filled by expanding the responsibilities of existing roles such as nurses, school nurses, teachers, or more specialized positions like migrant health workers and Aboriginal health workers. Building the capacity of these individuals could allow them to step into the role of HLM, utilizing their existing trust and presence within the community. Additionally, larger organizations such as state libraries, universities, and not-for-profits could support these mediators through outreach initiatives, providing resources and training to enhance their effectiveness. The community-based nature of this role would ensure greater success and sustainability, as the HLM could tailor their approaches to the specific needs and cultural contexts of their communities. Expectation Four: Targeted position to have the most impact The final expectation was that an HLM could play a pivotal role in enhancing HL by intervening early in an individual’s life and be available to individuals during their childhood education years. By providing guidance and education before health needs arise, HLM would help foster a deeper understanding of health-related issues. This proactive approach could be particularly effective in settings such as schools, youth groups, and teenage-specific programmes, where young people are developing independence and forming lifelong habits. By equipping these individuals with the necessary HL skills, they can become informed decision-makers, capable of navigating the healthcare system. Furthermore, the stakeholders identified that as these individuals share their knowledge within their networks, a ripple effect occurs, leading to a more health-literate community overall. 
Discussion around how the role could impact the individuals and families presented in the vignettes not only produced four strong expectations above, but also produced other recommendations and supporting ideas for the role. These can be visualized in . The HLM role should be solution-focused, using their understanding of individual and community barriers to address healthcare challenges. Stakeholders emphasized that the HLM should empower individuals, families, and communities by providing essential learning opportunities. A key responsibility of the HLM would be improving the efficiency of both individuals’ and healthcare organizations’ time. By streamlining processes and facilitating access to resources, the HLM could help people navigate the complex healthcare system more easily. As a reliable point of contact for health-related inquiries, the HLM would reduce confusion and simplify access to information. Additionally, the HLM could advocate for flexible, community-tailored healthcare solutions. By understanding the specific needs of the population, they could help design strategies that are culturally sensitive and effective. The presence of a trusted community member in this role fosters trust and reliability. Equitable healthcare is central to the HLM’s role, ensuring that all individuals, regardless of background, have equal access to healthcare resources, and are empowered to make informed decisions. By addressing health inequities driven by SDH, the HLM can promote a more inclusive healthcare environment. An HLM should play a key role in connecting individuals, families, and communities with healthcare services, bridging gaps between people, healthcare providers, and community organizations. By fostering open communication, the HLM would create safe spaces where individuals feel heard and empowered to ask questions, and making informed health decisions. 
Their role would involve building trusting relationships and advocating for inclusive, respectful care that addresses the diverse needs of the community. Their role could then extend beyond mere advocacy; they could provide wrap-around support by possessing in-depth knowledge of the healthcare system, actively listening to community members’ problems, strengths, and queries, and working to break down existing barriers to healthcare access.
This study aimed to co-design the emerging HLM role for the Tasmanian community and to assess the support, expectations, and need for such a role to then help guide the implementation in the future. This study demonstrates how considering a community’s current HL and engaging key stakeholders (those working in Tasmania’s health sector, with relevant knowledge, experience, and interest in HL and improving the health of their community) in the planning and design of new public health solutions can support the development of a fit-for-purpose, context-specific role. Experiences with similar roles in other parts of the world highlight valuable lessons for the development and sustainability of HLMs. These experiences underscore the importance of incorporating community-specific knowledge and adapting roles to local contexts. In Romania, the Roma Health Mediators initiative serves as a notable case where health mediators have strengthened connections between marginalized communities and health services . This programme has demonstrated that the success of such roles often depends on robust training, community acceptance, and ongoing support. However, these programmes also demonstrate that while community-based health workers can enhance trust and access, they also face challenges related to sustainability, training, and funding.
These insights could inform the design and implementation of HLMs in Tasmania, emphasizing the need for a strong framework that considers the social and cultural factors unique to each community. The HLM role could then go beyond navigation and connection to acknowledge the SDH and could enhance individuals’ HL assets to improve their health outcomes. Addressing SDH through an HLM could play a crucial role in reducing health inequities and improving health outcomes. Social determinants, such as socioeconomic status, education, and living conditions, significantly influence health outcomes . By focusing on these non-medical factors, HLMs can help bridge the gap between healthcare access and the broader social environment that affects individual and community health. HLMs could play a vital role in creating equitable health opportunities by helping to empower individuals with knowledge and resources to navigate their social contexts effectively. This approach not only addresses immediate health needs but also tackles the root causes of health disparities, promoting long-term health equity . The co-design of the HLM position brings us a step closer to developing an HL-responsive role in Tasmania. This may play a crucial part in improving the HL and health outcomes for the community. The co-designed workshops generated a number of important concepts, ranging from expectations of the HLM and recommendations for the HLM role to other ideas important for the HLM position’s development. In order to be successful and sustainable, the role of HLMs must be assessed within Tasmania’s current healthcare landscape. Existing gaps in HL and accessibility indicate where HLMs could be most impactful. HLMs could collaborate with health navigators, social workers, and other healthcare professionals to complement rather than duplicate efforts, enhancing coordination and resource use.
However, barriers such as funding, training needs, and acceptance within the community need to be considered and addressed to facilitate effective integration.

Addressing expectations

An HLM could significantly enhance health outcomes by addressing critical areas such as time management, navigation, right care, and community trust. These issues have been recognized in previous papers and give researchers and policy-makers a starting point when creating practical solutions to address the inequity that exists within health and healthcare . By optimizing time utilization, HLMs may assist both individuals and organizations to focus on essential health-related tasks without unnecessary delays, thereby improving efficiency and productivity. HLMs could serve as a central resource for health-related inquiries, simplifying navigation by providing a single point of contact, which reduces the complexity often associated with accessing healthcare services . This approach could help to ensure that individuals receive the right care tailored to their specific needs, enhancing the quality of healthcare delivery . Furthermore, HLMs could build community trust by being accessible and reliable members who understand local nuances and concerns . Setting clear expectations for HLMs is crucial in aligning stakeholders, ensuring that all parties have a shared understanding of the HLM’s role and responsibilities. This alignment fosters collaboration among individuals, communities, and organizations, which could lead to more coordinated efforts and improved health outcomes. By establishing these expectations from the outset, HLMs could effectively bridge gaps in healthcare delivery and empower communities to overcome barriers related to SDH. HLMs should play a crucial role in creating connections, advocating for individuals, and breaking down barriers within communities.
By fostering effective communication, HLMs could help to enable individuals to express themselves and be heard, which is essential for building trust and empowering communities . Advocacy should be a fundamental aspect of HLMs’ duties, as they work to protect and promote the rights of individuals, ensuring that their voices are amplified in health-related discussions . This advocacy involves not only supporting individuals in navigating complex health systems but also pushing for systemic changes that address broader health inequities. Breaking down barriers requires HLMs to listen deeply and provide tailored support that respects the unique needs of each community member. Clear communication is vital in achieving these duties, as it ensures that all stakeholders are aligned and informed about the goals and processes involved . By maintaining open lines of communication, HLMs could effectively coordinate efforts across various sectors, ultimately leading to more inclusive and equitable health outcomes for all community members.

Role in community

HLMs could significantly expand their roles in various community settings such as state libraries, universities, not-for-profits, and other organizations by integrating with existing roles like nurses, teachers, and health navigators. Libraries serve as accessible hubs for information dissemination, making them ideal partners for HLMs to collaborate with librarians to provide HL resources tailored to a community’s needs . Not-for-profit organizations offer another avenue for HLMs to reach underserved populations by working with health connectors and coaches to deliver targeted interventions. Integrating lessons learned from similar initiatives could help define whether HLMs should be volunteers or professionals and highlight potential challenges.
The experiences of health mediators in Eastern Europe, for instance, illustrate that while volunteer-based roles foster community trust, they can suffer from high turnover and inconsistent support . On the other hand, structured, professional approaches, like those in multidisciplinary models, provide stability but can be more costly and require significant investment. In schools and universities, HLMs could work alongside educators to embed HL into curricula, ensuring that students across disciplines develop essential skills for navigating health information . By utilizing the expertise of nurses and teachers who already play pivotal roles in health education, HLMs could create a more cohesive approach to improving HL . This integration could not only enhance the effectiveness of existing programmes but also ensure a comprehensive strategy that addresses the diverse needs of communities, ultimately leading to improved health outcomes and reduced disparities .

Timing for impact

The HLM role could make a significant impact through early interventions in educational settings and community initiatives. By focusing on schools and educational environments, HLMs could integrate HL into the curriculum, fostering a culture of informed health decision-making from a young age . This early engagement is crucial as it equips students with the necessary skills to navigate health information throughout their lives, thereby reducing health disparities linked to SDH . In community settings, HLMs could initiate programmes that address specific local health challenges, tailoring strategies to meet the unique needs of diverse populations. Long-term engagement in these communities is essential to build trust and ensure sustained improvements in HL. Tailored strategies that consider cultural, social, and economic factors are necessary for these interventions to be effective .
By maintaining an ongoing presence and adapting approaches based on community feedback, HLMs can ensure that their efforts lead to meaningful and lasting changes in health outcomes. This proactive approach not only empowers individuals but also strengthens community resilience against health inequities.

Recommendations and future research

In summary, these findings suggest that policy-makers could consider incorporating HLMs into policies as a strategic approach to improving public health outcomes. However, the nature of the HLM role—whether as volunteers or professionals—must be critically examined. Integration of the role may follow a spectrum, ranging from volunteerism, which enhances the capabilities of current workers without significant cost, to paid professionals who provide dedicated, consistent, and expert support. This spectrum allows for flexible implementation tailored to community needs and resources, balancing cost, sustainability, and impact. While volunteer HLMs could foster trust and connection within communities due to their grassroots nature, research indicates that volunteerism comes with challenges, including limited availability, high turnover, and inconsistent training . Upskilling existing professionals to take on HLM responsibilities can enhance workforce capacity, improve continuity of care, and provide cost-effective, immediate HL support within the current system. Employing professional HLMs provides more stability and comprehensive expertise but raises concerns regarding sustainability and costs. Ensuring trust between professionals and community members would also need strategic efforts . Future initiatives could consider a hybrid model where HLMs start as trained volunteers with pathways to professional roles, which would balance trust-building with sustainability.
Also, each health or community setting may require an assessment of its current resource requirements, existing skills, and capacity-building needs prior to introducing an HLM role. This sort of assessment may support the success and sustainability of such interventions. For successful implementation, it would be crucial to develop clear guidelines and training programmes that equip HLMs with the necessary skills and resources. These programmes could outline the specific competencies needed, with adaptations for either volunteer or professional tracks, ensuring that all HLMs are prepared to navigate their roles effectively. Drawing on the successes and challenges experienced by health mediator programmes in Eastern Europe, it is evident that best practices should be tailored to local needs. Programmes like the Roma Health Mediators Project emphasize the importance of sustainable training and support structures, which would be critical for the HLM role in Tasmania. Future research should explore the specific challenges of integrating HLMs into various community contexts, including potential resistance from existing healthcare structures and the need for sustainable funding models. It would be valuable to investigate funding strategies that could support either volunteer or paid HLMs, such as community grants, partnerships with local organizations, or government subsidies. Additionally, evaluating the long-term impact of HLM interventions on health outcomes will be essential in refining their role and maximizing their effectiveness in reducing health disparities. The future of health promotion is poised for transformative impact, emphasizing a holistic approach that integrates education, community engagement, and policy development . A realistic pathway for the evolution of the HLM role could account for resource constraints and community expectations.
Informed by this research, the research team will develop a clear position description, which will include the roles and responsibilities of an HLM to ensure practical implementation. An HLM could play a pivotal role in this evolution by addressing SDH and empowering individuals with the knowledge and skills needed to navigate complex health systems. Establishing trust and credibility within communities will be crucial, whether the HLMs are volunteer-based or part of a professional workforce. As health promotion strategies continue to evolve, they will increasingly focus on creating supportive environments and strengthening community actions . By integrating into diverse settings such as schools, workplaces, and community centres, HLMs may support early interventions and tailor strategies to meet the unique needs of different populations. This proactive approach not only addresses immediate health concerns but could also lay the groundwork for sustained improvements in public health outcomes, ultimately contributing to a healthier, more equitable society. Additionally, informed by this research, pilot programmes should be considered to assess the feasibility and impact of different HLM approaches, enabling the identification of the most effective structure for Tasmania.

Strengths and limitations

This study utilizes a co-design approach, which is a significant strength as it ensures that the perspectives of both users and providers are incorporated into the planning and design of the HLM role. By grounding the co-design process in local knowledge and expertise, the study was able to develop context-specific solutions that are more likely to create an HL-responsive environment tailored to the unique needs of the Tasmanian community. This participatory approach promotes stakeholder buy-in and enhances the relevance of the proposed solutions to the local population. However, there are limitations to this approach.
While the co-design process generated expectations, recommendations, and ideas based on stakeholders’ personal knowledge and experiences, it does not provide empirical evidence of the effectiveness or feasibility of the HLM role. Consequently, further research is needed to implement and evaluate this role to determine its potential impact on health and equity outcomes within communities. Additionally, the vignettes used in this study were based on only five scales of the HLQ. This approach, while focused, might have excluded other important HL strengths and challenges, potentially affecting the comprehensiveness of the findings and any future decisions informed by this data. Furthermore, the participant group composition was limited, with only one male participant, which could introduce biases. The findings may therefore have limited generalizability, and caution should be exercised when interpreting or applying these results to other contexts. While conducting online workshops via Microsoft Teams facilitated engagement with diverse stakeholders across Tasmania, it may have inadvertently limited participation from those with restricted access to digital technology or low digital literacy. This constraint highlights a potential barrier to inclusive participation. Future research should consider employing a hybrid model that combines in-person and online engagement options to accommodate stakeholders’ preferences, thereby enhancing participation and ensuring a more comprehensive representation of perspectives.
In conclusion, the HLM role represents a significant opportunity to address health inequities by enhancing time management, streamlining healthcare navigation, ensuring appropriate care delivery, and fostering community trust. By bridging gaps between individuals and healthcare systems, advocating for equity, and tailoring support to community needs, HLMs could play a transformative role in breaking down barriers to healthcare. Their integration into diverse community settings, such as libraries, schools, and universities, alongside collaborations with existing roles like nurses, teachers, and social workers, underscores their potential to amplify HL efforts. Stakeholders identified key expectations for the HLM role, including its focus on being solution-oriented, community-based, and targeted towards populations with the greatest need. Furthermore, embedding HLMs in educational and early intervention initiatives highlights the importance of long-term engagement and proactive strategies to build a health-literate population. These findings emphasize the need for a structured and sustainable implementation of the HLM role to promote equitable access to health resources and improved public health outcomes. Future research and pilot programmes will be essential to refine this role and evaluate its impact on reducing health disparities.
Prokaryotic Diversity and Community Distribution in the Complex Hydrogeological System of the Añana Continental Saltern
baf1c82b-37be-4aa5-9849-709fc651603b
11739210
Microbiology[mh]
Hypersaline environments represent highly bioproductive habitats with a large microbial diversity adapted to these conditions. To better understand these environments, it is essential to characterize their microbial communities, given their contribution to nutrient cycling and ecosystem functioning. Therefore, the study of hypersaline environments such as solar salterns, salt mines, and salt lakes has received considerable interest. Many different types of halophilic and halotolerant microorganisms are found in these environments, including archaea of the order Halobacteriales and bacterial species of the order Halanaerobiales and of the Halomonadaceae , Desulfohalobiaceae , and Salinibacteriaceae families, among others. While halophiles are less common within the Eucarya domain, the green alga Dunaliella is typically present in aerobic environments with high salt content. As microbial communities are vital to ecosystem functions and are influenced by the physical and geological characteristics of the site, parallel studies at additional sites are also necessary to gain a comprehensive understanding of these ecosystems. Among the different types of salterns that exist, continental ones are less well known, and not many are still in use today. The Añana Salt Valley (Álava, Basque Country, Spain) is one of the best-preserved continental salterns in Europe. Salt production has been active in the valley for at least 7,000 years, and it has been recognized by the Food and Agriculture Organization (FAO) as Europe’s first Globally Important Agricultural Heritage System (GIAHS). The underground evaporitic rocks, comprising salt, gypsum, clay, and other materials, were formed during the initial stages of the fragmentation of the Pangaea supercontinent (approximately 200 million years ago). These rocks now form a geologically complex diapiric structure where the salts are actively rising. 
The geological complexity of the area leads to hydrogeological complexity, with very slow deep flows mixing with shallower flows in some areas of the valley. This results in the emergence of springs with varying salinity levels, depending on the route taken by the water through the subsoil, and gives rise to springs with salty (≈ 200 g/L salinity) or brackish (≤ 20 g/L salinity) water that are very close to each other. Despite the singularity of this environment, and although studies have been carried out in this saltern on archeology, hydrogeology, or the salt itself, limited microbiological data are available. There is only one publication on microbial and viral changes from groundwater to surface water and one on fungal community diversity, so further research is needed to fully understand this ecosystem. In this context, this work presents an extended study along this ancient saltern in which prokaryotic diversity was examined by Illumina-based 16S rRNA gene sequencing to determine whether the waters within the valley have distinctive microbiological characteristics. Taking into account the physico-chemical parameters of the main waters supplying the valley (six springs, a stream that crosses the saltern, and groundwater) and of the water involved in salt production (water from a brine distribution channel and three brine resting ponds), prokaryotic community structure and composition were studied. The integration of all these data is important not only for expanding scientific understanding but also for facilitating the recovery and enhancement of this active saltern.

Sampling Site Features and Sampling Procedure

Añana Salt Valley (42°47′59.82″ N, 2°59′3.23″ W; altitude 570–598 m) is in western Álava, Basque Country, Spain. For this study, 12 sampling sites (Fig. a) were selected based on their accessibility and minimal human intervention. 
The Santa Engracia spring is situated at the highest point of the valley and is the main brine supplier to the saltern. Its water is immediately channeled (referred to in this study as the Santa Engracia channel) and distributed throughout the valley to the temporary accumulation or resting ponds (i.e., Ponds I, II, and III), from which the brine is distributed to the crystallization pans at the end of the salt production system. The Santa Engracia spring is very close to the brackish water source of the San Juan stream, which runs all along the saltern but is not part of the salt production system. A little further north are the other springs, some salty (El Cautivo, El Pico, Fuenterriba, and Hontana) and one brackish (El Pico Dulce, which is only two meters from El Pico). To the east of the valley, there is a piezometer (called S8) from which brackish groundwater can be sampled at a depth of 60 m. Most of the springs are in direct contact with the natural surrounding environment, except for the Santa Engracia and El Cautivo springs, which have a wooden structure around them (Fig. a). More information about the complex geological and hydrogeological context of the valley is presented in the Supplementary Material. Given the importance of salt production in the economic and social development of the area, and its relevance as a natural, cultural, and heritage site, since 2017 there has been a surface- and groundwater monitoring network throughout the valley. Of the 12 sampling sites, eight (the six springs, the stream, and the piezometer) are part of this monitoring network; therefore, the ionic composition of these water samples was available. This composition is periodically analyzed by ion chromatography at the SGIker Advanced Research Facilities of the University of the Basque Country UPV/EHU. 
In the first phase of this study (spring 2018), water samples from three springs (the Santa Engracia, El Pico, and El Pico Dulce springs), groundwater from the S8 piezometer, and water from the ponds (Ponds I, II, and III) were collected. In the second phase (spring 2021), sampling was extended to the remaining springs (the Fuenterriba, El Cautivo, and Hontana springs), the San Juan stream, and the brine distribution channel (the Santa Engracia channel). Water samples were collected directly using sterile glass bottles. Water from the Santa Engracia and El Cautivo springs was collected with a Niskin bottle (Aquatic BioTechnology, El Puerto de Santa María, Spain), in both cases at 2 m depth. Groundwater from the S8 piezometer (60 m depth) was collected using a manual bailer system (Eijkelkamp, Giesbeek, The Netherlands). At all sampling sites, water temperature, pH, and electrical conductivity were measured in situ with a Combo tester (Hanna Instruments, Eibar, Spain), and salinity with a density hydrometer (Hartwig Instruments, The Netherlands). The samples were then transported to the laboratory in properly labeled containers at 4 °C and processed within 1 h.

Prokaryotic Community and Diversity Study by 16S rRNA Gene Deep Sequencing

DNA Extraction and Sequencing

Water samples were concentrated by filtering 5 L of water under aseptic conditions through a custom-made device driven by vacuum/compression pumps (Labbox, Premia de Dalt, Spain) ending in a 0.22-µm pore size filter. DNA was extracted using the Genomic Mini AX Bacteria Kit (A&A Biotechnology, Gdansk, Poland) and quantified using the QuantiFluor dsDNA System (Promega, Madison, WI, USA). The V3-V4 hypervariable region of the 16S rRNA gene was amplified, and the amplicons were sequenced using the paired-end method on a MiSeq Illumina platform at Biogenetics (Álava, Spain). 
Libraries were prepared from the isolated DNA using the Nextera DNA Library Prep kit (Illumina, San Diego, CA, USA). Sequencing data from the study are available under the accession number PRJNA1115049.

Prokaryote Diversity and Distribution

Sequence analysis was performed using the packages and pipelines of the Quantitative Insights Into Microbial Ecology (QIIME2) program (version 2021.4). Joining of paired-end reads, sequence quality control, and feature table construction were performed by denoising with the DADA2 plugin (q2-dada2) with standard parameters. Taxonomic assignment of the DADA2-generated amplicon sequence variants (ASVs) was performed with the Bayesian taxonomy classifier classify-sklearn and the q2-feature-classifier add-on (taxonomy.py), using the SILVA database v.138 clustered at 99% sequence similarity. Alpha diversity metrics (observed ASVs, Faith’s Phylogenetic Diversity, Pielou’s evenness, and Shannon’s and Simpson’s diversity indexes) and a beta diversity metric (Bray‐Curtis dissimilarity) were estimated using q2‐diversity after samples were rarefied to the smallest number of non-chimeric sequences among all samples tested. Statistical significance of alpha diversity values between sampling sites was assessed using the Kruskal–Wallis H test. To assess the relationship of the water properties with the alpha diversity indices, Pearson’s coefficient test was carried out. For comparing microbial community structure among samples, the Bray–Curtis dissimilarity coefficient was assessed by analysis of similarities (ANOSIM) with 999 permutations. In all analyses, p < 0.05 was considered statistically significant. An analysis of composition of microbiomes (ANCOM) test was applied to assess differentially abundant genera using the q2-composition QIIME2 plugin. A heatmap was generated with the ggplot2 R package to visualize differences in genus-level composition between samples. Associations between samples were established using Bray–Curtis dissimilarity. 
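Purely as an illustration of what the alpha diversity indexes named above measure (a toy sketch on a made-up count vector, not the QIIME2 implementation), the metrics can be computed directly from a sample's ASV counts:

```python
import math

def alpha_diversity(counts):
    """Illustrative alpha diversity metrics for one sample's ASV counts.
    Note: natural log is used here; QIIME2 reports Shannon's H in log base 2."""
    counts = [c for c in counts if c > 0]
    total = sum(counts)
    p = [c / total for c in counts]
    shannon = -sum(pi * math.log(pi) for pi in p)   # Shannon's H
    simpson = 1.0 - sum(pi * pi for pi in p)        # Gini-Simpson diversity
    # Pielou's evenness: H / ln(S), where S is the number of observed ASVs
    pielou = shannon / math.log(len(counts)) if len(counts) > 1 else 0.0
    return {"observed_asvs": len(counts), "shannon": shannon,
            "simpson": simpson, "pielou": pielou}

# A perfectly even community of four ASVs has Pielou's evenness of exactly 1.0
print(alpha_diversity([10, 10, 10, 10]))
```

The same logic underlies the richness/diversity/evenness distinction drawn in the Results: observed ASVs counts taxa, Shannon and Simpson weight them by relative abundance, and Pielou normalizes Shannon by the maximum attainable for that richness.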
Venn analysis was performed with the Venn diagram software (available online at: http://bioinformatics.psb.ugent.be/webtools/Venn/ , accessed on 13 October 2023) to determine common and unique prokaryotic taxa in the samples. A Canonical Correspondence Analysis (CCA) was performed to correlate environmental variables with prokaryotic taxa and samples. The CCA was computed with the PAST 4 software package, and a Monte Carlo test with 999 unrestricted permutations was carried out.

Functional Prediction of Prokaryotic Communities

The functional potential of the microbial community was predicted from the 16S rRNA gene abundance data via Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt2). The recommended maximum NSTI cut-off point of two was implemented by default in PICRUSt2, which excluded 1.3% of ASVs. Predicted pathways were categorized according to KEGG (Kyoto Encyclopedia of Genes and Genomes) orthology groups and compared between samples. In addition, functional annotation of prokaryotic taxa via Functional Annotation of Prokaryotic Taxa (FAPROTAX) was carried out ( http://www.ehbio.com/ImageGP/index.php/Home/Index/FAPROTAX.html ). 
Physico-chemical Characteristics of Waters

The physico-chemical analysis of the water samples revealed that they could be classified into two distinct categories: salty and brackish (Table ). The table presents the two sets of data on which this classification is based: the first comprises data obtained in situ at all sampling sites, while the second comprises ion concentration data for the sites belonging to the monitoring network. The values displayed for the latter dataset represent the mean of measurements conducted between 2017 and 2022; detailed information is given in Supplementary Table . Monitoring showed very little change in ion concentration over the years, reflecting the stable physico-chemical character of the valley's waters. Their flow rates (a total of about 3 L/s for the salty waters and 5 L/s for the brackish) also remain fairly constant over time. Both the salty and brackish waters show a clear sodium chloride facies, a direct result of the dissolution of halite in the diapiric structure. 
However, it is important to note the high content of sulfate and calcium, due to the dissolution of gypsum-anhydrite, which is comparatively higher in the brackish waters (Table ). The water from the S8 piezometer sampling site has a higher sulfate content than the other brackish waters (Table ), due to its particular position in the flow scheme (Fig. b). Also noteworthy is the high concentration of magnesium and potassium in the salty waters (Table ). In addition, as expected, nitrates are present only in the shallower brackish waters (Table ). The waters of the five salty springs originate from very slow deep flows (hundreds of meters) and have a very high salinity (220–240 g/L) and relatively high temperature (16–20 °C). The two brackish water springs have a much lower salinity (15–30 g/L) and correspond to the mixture of deep waters with shallower currents. The pH of the brackish water is close to neutral (7.2 to 7.4), while that of the salty water is slightly lower (6.2 to 6.6).

Prokaryotic Diversity and Community Composition

A total of 583,454 sequences corresponding to 1801 different ASVs were obtained from 11 sampling sites. The data from the Hontana spring had to be discarded due to the low number of sequences obtained. Rarefaction curves were computed by rarefying each sample to the minimum sequence frequency across samples, which was 14,641 reads (Fig. ). Diversity was calculated after normalization of the samples.

Prokaryotic Diversity

Alpha diversity analysis revealed that prokaryotic community richness (observed ASVs and Chao1), diversity (Shannon and Simpson indexes), and evenness (Pielou’s evenness) varied widely among the samples (Table ). In particular, the highest prokaryotic richness, diversity, and evenness were observed in the El Pico Dulce spring, while the lowest richness was found in the El Pico spring. 
In contrast, the lowest diversity and evenness were found in the San Juan stream, with values similar to those of the El Pico spring. Pearson’s coefficient test showed statistically significant negative correlations between salinity and the observed-ASV and Chao1 richness indexes ( p < 0.05). Beta diversity identifies dissimilarities between the sampling sites. Dissimilarity in ASV composition was represented in a Principal Coordinate Analysis (PCoA) plot; the first three principal components explained 57.61% (PC1 + PC2 + PC3) of the total variation (Fig. ). For the analysis of multivariate homogeneity among groups, the analysis of similarities (ANOSIM) test showed significant differences in prokaryotic diversity between groups based on their salinity (salty and brackish water) ( p < 0.05). However, no statistically significant differences were observed when comparing samples according to their sampling-site type (spring, pond, stream, or groundwater).

Assessment of the Prokaryotic Community Composition and Its Distribution

The 1801 different ASVs were subsequently assigned to different taxonomic levels. Fifty-nine of the ASVs (3.3%) could not be assigned to any known phylum. The taxonomic assignment of each ASV is shown in Supplementary Table . The taxonomic assignment showed that the archaeal domain was essentially restricted to the salty waters, whereas bacteria were present at all sampling sites and dominated in the brackish waters (99.4 to 99.8%) (Fig. a). The bacterial and archaeal domains were distributed in 31 phyla, 57 classes, 120 orders, 182 families, and 258 genera. The distribution of genera whose relative abundance was greater than 3% in at least one of the analyzed sites is shown in Fig. b. Overall, 60.7% of the ASVs were classified at genus level (6.3% of them as uncultured organisms), while 39.3% remained unclassified. 
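The Bray–Curtis dissimilarity underlying the PCoA and ANOSIM analyses has a simple closed form; a minimal sketch on hypothetical genus-count profiles (not data from this study):

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two count profiles over the same taxa:
    0 means identical composition, 1 means no taxa shared."""
    shared = sum(min(a, b) for a, b in zip(u, v))
    return 1.0 - 2.0 * shared / (sum(u) + sum(v))

# Hypothetical counts for three taxa at two sites
print(bray_curtis([6, 2, 0], [2, 2, 4]))   # 0.5
print(bray_curtis([5, 5, 0], [5, 5, 0]))   # 0.0 (identical composition)
```

A matrix of such pairwise values is what PCoA ordinates and ANOSIM permutes when testing whether salty and brackish groups differ more between than within groups.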
The major bacterial and archaeal phyla were Pseudomonadota and Halobacterota , respectively. At the genus level, the most representative bacterial genera (relative abundance > 1%) were Pseudomonas , Undibacterium , Salinibacter , Marinomonas , and Bacteroides . Similarly, the most abundant archaeal genera (relative abundance > 1%) were Halorubrum , Halonotius , Haloplanus , Halomicroarcula , and Natronomonas . Each of these genera varied greatly in abundance across sampling sites (Fig. b). Although some genera were found in more than one location, different compositional profiles can be defined at the genus level. Clustering using the Bray–Curtis dissimilarity coefficient based on genus abundance confirmed that brackish waters showed greater similarity to one another, as did salty waters (Fig. ). ANCOM analysis identified the genera Natronomonas and Halorubrum as significantly more abundant in the salty water samples. Furthermore, when the analysis was performed according to the water salinity of the samples, only 25 out of 258 genera were shared (Fig. a), while 64 and 169 genera were unique to salty and brackish water, respectively. The distribution of genera between the salty springs revealed 12 genera shared among these samples (Fig. b): Halomarina , Halonotius , Owenweeksia , Halohasta , Natronomonas , Flexistipes , Salinibacter , Halorubellus , Haloplanus , Halorubrum , Halobaculum , and Halomicroarcula . Halorubrum was the most abundant among them (8.6 to 61.2%), except in the Santa Engracia spring, where Halohasta was the most abundant. In the Fuenterriba spring, 22 genera were identified that were not present in the other salty springs; however, these genera were present only at low abundance ( Acinetobacter , 0.8%; Marinobacter , 0.4%; Enterococcus , 0.2%). 
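The shared/unique partition reported from the Venn analysis can be reproduced with plain set operations; a sketch on a small, made-up genus list (the real analysis used all 258 genera from the ASV table):

```python
def venn_partition(genera_by_group):
    """Return the genera shared by all groups and those unique to each group,
    i.e. the two quantities reported from a Venn diagram."""
    sets = {g: set(taxa) for g, taxa in genera_by_group.items()}
    shared = set.intersection(*sets.values())
    unique = {g: s - set.union(*(o for k, o in sets.items() if k != g))
              for g, s in sets.items()}
    return shared, unique

# Made-up example in the spirit of the salty-vs-brackish comparison
groups = {
    "salty":    {"Halorubrum", "Salinibacter", "Halomonas"},
    "brackish": {"Pseudomonas", "Nitrospira", "Halomonas"},
}
shared, unique = venn_partition(groups)
print(sorted(shared))              # ['Halomonas']
print(sorted(unique["brackish"]))  # ['Nitrospira', 'Pseudomonas']
```

The same partition extends to more than two groups (e.g. the five salty springs), where "unique" means absent from every other group.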
The investigation of possible groundwater contact between the El Pico and El Pico Dulce springs, which are only two meters apart but have different physico-chemical parameters (Fig. and Table ), revealed a different taxonomic composition. Of the 132 genera detected, the majority ( n = 111) were present only at the El Pico Dulce spring, and only two ( Halomonas and Cellulosimicrobium ) were shared (Fig. c). Moreover, the relative abundance of both genera based on ASV taxonomic assignment was very low at both sampling sites (0.05% and 0.71%, and 0.19% and 0.54%, respectively).

Relationship Between Microbial Communities and Water Characteristics

The CCA results revealed the relationship between microbial community structure and water characteristics (Fig. ). CCA axes 1 and 2 explained 77.8% of the total variance across all sampling sites. The highest values of magnesium (Mg²⁺), sulfate (SO₄²⁻), calcium (Ca²⁺), sodium (Na⁺), chloride (Cl⁻), and potassium (K⁺) were associated with the four salty water springs and with the presence of mainly halophilic archaea and bacteria such as Halorubrum and Salinibacter , respectively. On the other hand, the San Juan stream and the El Pico Dulce spring contained the highest nitrate (NO₃⁻) concentration (Table ). The genera most positively affected by these conditions appeared to be Pseudomonas , Marinomonas , Luteolibacter , Nitrospira , Marinobacterium , and Arcobacter . The particular physico-chemical characteristics of the groundwater obtained from the piezometer (Table ) appear to favor the presence of genera such as Undibacterium , Bacteroides , and Lactobacillus . Spearman's correlation test (at genus level) showed correlations between genera and environmental variables. 
Thus, the only reliable statistically significant ( p < 0.05) positive correlation was confirmed between Halolamina , Halanaerobium , Halofilum , and Haloarchaeobius (all found in the saline waters) and Ca²⁺ ions.

Prediction of the Ecological Function of Prokaryotic Microorganisms

Based on the taxonomically derived metabolic inferences, the PICRUSt2 results showed 2904 enzyme counts (ECs), 489 MetaCyc pathways, and 10,493 KEGG orthologs (KOs) belonging to 291 KEGG pathways. The MetaCyc metabolic pathways predicted by PICRUSt2 showed changes between the brackish and salty water samples (Fig. ). According to the ASV-based metabolic inference, the L-glutamate and L-glutamine biosynthesis and pyruvate fermentation to propanoate I pathways were found at significantly higher relative abundance in the salty sampling sites, whereas the toluene degradation III (aerobic, via p-cresol) and phenylacetate degradation I (aerobic) pathways were detected at higher relative abundance in the brackish water samples (Fig. ). FAPROTAX predicted 56 different ecological functions in at least one sampling site for the bacterial and archaeal taxa derived from 16S rRNA gene amplicon sequencing (Fig. ). The microbiome involved in chemoheterotrophy and aerobic chemoheterotrophy, followed by fermentation, was the most widely distributed along the valley. The salty water samples had the most abundant prokaryotic groups involved in phototrophy and photoautotrophy. In particular, sulfur-oxidizing anoxygenic photoautotrophy was observed in all salty springs except the El Pico spring, where the microbiome associated with chloroplasts predominated. An abundance of microbiome involved in oxygenic photoautotrophy and photosynthetic cyanobacteria was also observed in the ponds, except in pond II, where cellulolysis was more abundant. 
Regarding the brackish water samples, microbiome related to animal parasites or symbionts and human pathogens was detected in the groundwater, while the abundance of microbiome involved in nitrogen metabolism was more pronounced in the San Juan stream and the El Pico Dulce spring. The physico-chemical analysis of the water samples revealed that they could be classified into two distinct categories: salty and brackish (Table ). The table presents the two datasets on which this classification is based: the first comprises data obtained in situ at all sampling sites, while the second comprises ion concentration data for the sites belonging to the monitoring network. The values displayed for the latter dataset represent the mean of measurements conducted between 2017 and 2022. Detailed information is given in Supplementary Table . Monitoring showed very little change in ion concentration over the years, adequately reflecting the stable character of the physico-chemical characteristics of the waters of the valley. The springs also maintain fairly constant flow rates over time (a total of about 3 L/s for the salty waters and 5 L/s for the brackish ones). Both salty and brackish waters show a clear sodium chloride facies, a direct result of the dissolution of halite in the diapiric structure. However, it is important to note the high presence of sulfate and calcium, due to the dissolution of gypsum-anhydrite, which is comparatively higher in the brackish waters (Table ). The water taken from the S8 piezometer sampling site has a higher sulfate content than the other brackish waters (Table ), due to its particular position in the flow scheme (Fig. b). Also worth noting is the high concentration of magnesium and potassium in the salty waters (Table ). In addition, as expected, nitrates are only present in the shallower brackish waters (Table ).
The waters of the five salty springs originate from very slow, deep flows (hundreds of meters) and have a very high salinity (220–240 g/L) and a relatively high temperature (16–20 °C). The two brackish water springs have a much lower salinity (15–30 g/L) and correspond to a mixture of deep waters with shallower currents. The pH of the brackish water is close to neutral (7.2 to 7.4), while the pH of the salty water is slightly lower (6.2 to 6.6). A total of 583,454 sequences corresponding to 1801 different ASVs were obtained from 11 sampling sites. The data from the Hontana spring had to be discarded due to the low number of sequences obtained. Rarefaction curves were computed by rarefying each sample to the minimum sequence frequency, which was 14,641 reads (Fig. ). Diversity was calculated after normalization of the samples.

Prokaryotic Diversity

Alpha diversity analysis revealed that prokaryotic community richness (observed ASVs and Chao1), quantitative community richness or diversity (Shannon and Simpson indexes), and community equality or evenness (Pielou's evenness) varied widely among the samples (Table ). In particular, the highest prokaryotic richness, diversity, and evenness were observed in the El Pico Dulce spring, while the lowest richness was found in the El Pico spring. In contrast, the lowest diversity and evenness were found in the San Juan stream, with values similar to those of the El Pico spring. Pearson's coefficient test showed a statistically significant negative correlation between salinity and observed ASVs, and also between salinity and the Chao1 diversity index (p value < 0.05). Beta diversity identifies dissimilarities between the sampling sites. Dissimilarity in ASV composition was represented by a Principal Coordinate Analysis (PCoA) plot. The first three principal components explained 57.61% (PC1 + PC2 + PC3) of the total variation (Fig. ).
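The alpha diversity indices reported above (observed richness, Chao1, Shannon, Simpson, and Pielou's evenness) have simple closed forms. The following is an illustrative pure-Python sketch of those formulas, not the pipeline actually used in the study; the count vectors are invented:

```python
import math

def alpha_diversity(counts):
    """Common alpha diversity indices from a vector of ASV counts."""
    counts = [c for c in counts if c > 0]
    n = sum(counts)
    s_obs = len(counts)                          # observed richness
    p = [c / n for c in counts]
    shannon = -sum(pi * math.log(pi) for pi in p)
    simpson = 1 - sum(pi * pi for pi in p)       # Gini-Simpson form
    pielou = shannon / math.log(s_obs) if s_obs > 1 else 0.0
    f1 = sum(1 for c in counts if c == 1)        # singletons
    f2 = sum(1 for c in counts if c == 2)        # doubletons
    # Classic Chao1; bias-corrected form when no doubletons are present
    chao1 = s_obs + (f1 * (f1 - 1) / 2 if f2 == 0 else f1 ** 2 / (2 * f2))
    return {"observed": s_obs, "shannon": shannon,
            "simpson": simpson, "pielou": pielou, "chao1": chao1}

# A perfectly even toy community has Pielou's evenness ~1
print(alpha_diversity([10, 10, 10, 10]))
```

A community dominated by one taxon (e.g., `[97, 1, 1, 1]`) yields low Shannon and Pielou values, which is the pattern described for the San Juan stream.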
For the analysis of multivariate homogeneity among groups, the analysis of similarities (ANOSIM) test showed significant differences in prokaryotic diversity between groups based on their salinity (salty and brackish water) (p value < 0.05). However, no statistically significant differences were observed when comparing samples according to their sampling site type (spring, pond, stream, or groundwater).

Assessment of the Prokaryotic Community Composition and Its Distribution

The 1801 different ASVs were subsequently assigned to different taxonomic levels. Fifty-nine of the ASVs (3.3%) could not be assigned to any known phylum. The taxonomic assignment of each ASV is shown in Supplementary Table . The taxonomic assignment showed that the archaeal domain was essentially restricted to salty waters, whereas bacteria were present at all sampling sites, with the majority in brackish water (99.4 to 99.8%) (Fig. a). The bacterial and archaeal domains were distributed in 31 phyla, 57 classes, 120 orders, 182 families, and 258 genera. The distribution of genera whose relative abundance was greater than 3% in at least one of the analyzed sites is shown in Fig. b. Overall, 60.7% of the ASVs were classified at genus level (6.3% of them as uncultured organisms), while 39.3% remained Unclassified. The major bacterial and archaeal phyla were Pseudomonadota and Halobacteriota, respectively. At the genus level, the most representative bacterial genera (relative abundance > 1%) were Pseudomonas, Undibacterium, Salinibacter, Marinomonas, and Bacteroides. Similarly, the most abundant archaeal genera (relative abundance > 1%) were Halorubrum, Halonotius, Haloplanus, Halomicroarcula, and Natronomonas. Each of these genera varied greatly in abundance at the different sampling sites (Fig. b). Although some genera were found in more than one location, distinct compositional profiles could be defined at the genus level.
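The ANOSIM test reported above has a simple core statistic: it compares the mean rank of between-group dissimilarities with that of within-group ones. A minimal sketch of that statistic follows (the permutation step that yields the p-value is omitted, ties are not rank-averaged, and the toy distance matrix is invented):

```python
def anosim_r(dist, groups):
    """ANOSIM R = (mean between-group rank - mean within-group rank) / (M / 2),
    where dist is a symmetric distance matrix and M the number of sample pairs.
    R near 1 means groups are well separated; R near 0 means no structure."""
    n = len(dist)
    pairs = [(dist[i][j], groups[i] == groups[j])
             for i in range(n) for j in range(i + 1, n)]
    pairs.sort(key=lambda p: p[0])           # rank all pairs by dissimilarity
    m = len(pairs)
    within = [r + 1 for r, (_, same) in enumerate(pairs) if same]
    between = [r + 1 for r, (_, same) in enumerate(pairs) if not same]
    return (sum(between) / len(between) - sum(within) / len(within)) / (m / 2)

# Toy matrix: two tight clusters -> perfect separation
d = [[0, 1, 9, 9],
     [1, 0, 9, 9],
     [9, 9, 0, 1],
     [9, 9, 1, 0]]
print(anosim_r(d, ["salty", "salty", "brackish", "brackish"]))  # → 1.0
```

In practice this would be run on a Bray–Curtis distance matrix with many permutations of the group labels to obtain the significance level.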
Clustering using the Bray–Curtis dissimilarity coefficient based on genus abundance confirmed that brackish waters showed greater similarity to one another, as did salty waters (Fig. ). ANCOM analysis identified the genera Natronomonas and Halorubrum as significantly more abundant in the salty water samples. Furthermore, when the analysis was performed according to the water salinity of the samples, only 25 out of 258 genera were shared (Fig. a). In contrast, 64 and 169 genera were unique to salty and brackish water, respectively. The distribution of genera between the salty springs revealed 12 genera shared among these samples (Fig. b), including Halomarina, Halonotius, Owenweeksia, Halohasta, Natronomonas, Flexistipes, Salinibacter, Halorubellus, Haloplanus, Halorubrum, Halobaculum, and Halomicroarcula. Halorubrum was the most abundant among them (8.6 to 61.2%), except in the Santa Engracia spring, where Halohasta was the most abundant. In the Fuenterriba spring, 22 genera were identified that were not present in the other saline springs. However, these genera were present only at low abundance (Acinetobacter, 0.8%; Marinobacter, 0.4%; Enterococcus, 0.2%).
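The Bray–Curtis dissimilarity behind this clustering is straightforward to compute from two abundance profiles. The sketch below is illustrative only (not the actual analysis pipeline), and the genus counts are invented:

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors:
    sum(|u_i - v_i|) / sum(u_i + v_i); 0 = identical, 1 = no shared taxa."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den

# Hypothetical genus counts for a salty and a brackish sample
salty = [61, 9, 0, 0]      # e.g. Halorubrum-dominated profile
brackish = [0, 0, 40, 25]  # e.g. Pseudomonas/Undibacterium-dominated profile
print(bray_curtis(salty, brackish))  # → 1.0 (no genera in common)
```

Computing this for every pair of samples yields the distance matrix on which the hierarchical clustering (and the PCoA and ANOSIM analyses mentioned earlier) operates.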
Prokaryotic Diversity in Different Types of Water in the Valley

The Añana Salt Valley is a continental solar saltern with a thalassohaline composition. Physico-chemical monitoring of the main watercourses that feed the valley has revealed the presence of salty and brackish waters of different origins. This may be attributed to the existence of diverse pathways by which unsaturated water infiltrates the salt deposits within the subsoil, where salt dissolution occurs. This process ultimately gives rise to the emergence of springs characterized by the emanation of water of varying salinities as a consequence of the salinization of subterranean waters. Consequently, it can be assumed that the different water flows through the Añana diapiric structure are responsible for the observed differences in the hydrochemical characteristics of the spring water. The dissolution of evaporite (halite) hundreds of meters deep in the diapiric structure yields a stable ionic composition (Cl⁻, Na⁺, and K⁺), observed in the salty waters. On the other hand, the dissolution of gypsum and anhydrite, and the presence of NO₃⁻ ions, result from the mixing of deep flows with shallower flows, characterizing the brackish waters, which show a comparatively higher presence of SO₄²⁻, Ca²⁺, and Mg²⁺ ions. The low variability of the salt facies is a peculiarity of this valley and indicates a stable depositional environment, contrary to what has been described in other diapiric environments (e.g., Zechstein 2, Germany) .
The differences in the physico-chemical composition of the waters showed that the main factor influencing prokaryotic distribution in this valley was salinity, as shown by the beta diversity indices. In fact, this microbial distribution did not appear to be influenced by parameters such as pH, temperature, or even the site itself. The same pattern was observed in the studies by Lozupone and Knight and Leoni et al. . Regarding the alpha diversity indices, samples with lower salinity had higher values than the saltier ones, as in other locations such as the Arava Valley (between the Dead Sea and the Red Sea, Israel) and the saltern of Margherita di Savoia (Italy) . In fact, the El Pico Dulce spring (10 g/L salinity) had the highest values of richness, diversity, and evenness. On the contrary, the El Pico spring (205 g/L salinity, situated only two meters from the El Pico Dulce spring) had the lowest richness values. This could be partially explained by the higher salt stress experienced by microorganisms living in high-salinity environments, which may have limited diversity due to the energetically costly lifestyle . However, other aspects, such as the availability of nutrients and oxygen, which were not determined in this study, must be considered among the main limiting factors of microbial diversity in waters coming from very deep flows , which is the case for the water of this spring. Interestingly, the lowest diversity and evenness were found in the San Juan stream (brackish water), even though the measured physico-chemical characteristics of the water at this sampling site were similar to those of the El Pico Dulce spring. This suggested the presence of dominant taxa in the San Juan stream, in this case the genus Arcobacter. The location of the sampling site means that the stream water remains in contact with the surface for a longer period than El Pico Dulce spring water.
This allows interaction with surrounding factors (e.g., flora and rocks) over a longer period, which could give certain microorganisms present in the surface areas the opportunity to significantly increase their populations and become dominant. In this case, the adjacent vegetation could provide an opportunity for the Arcobacter genus to increase its population and become dominant due to a larger surface area for adhesion , which could favor the formation of biofilms, an ability described in this genus . In addition, a microcosm experiment in which nitrogen was added revealed a significant decline in OTU (operational taxonomic unit) richness, accompanied by a notable increase in the prevalence of the genus Arcobacter . Given that Arcobacter is known to oxidize sulfur as well as reduce nitrate, the authors propose that sulfur may serve as an electron donor in denitrification but may also inhibit the reduction of nitrous oxide to nitrogen gas. This highly specific niche may account for the pronounced reduction in species richness observed in the experiment, potentially linked to trace element accessibility. It is plausible that a similar scenario is occurring in the San Juan stream.

Community Composition and Distribution

The taxonomic approach performed in this study showed that the prokaryotic community of the salty waters of this saltern was essentially restricted to the domain Archaea. However, bacteria were present at all sampling sites and were particularly abundant in the brackish waters. This distribution has been previously described in other hypersaline environments , confirming that salinity shapes the microbial community. Furthermore, the main phyla found in the valley were Pseudomonadota and Halobacteriota. The archaeal phylum Halobacteriota is the best-known group of extreme halophiles , and the predominance of the Pseudomonadota phylum in hypersaline environments has also been observed .
The study of the prokaryotic community by Illumina-based 16S rRNA gene sequencing in hypersaline environments reveals the possibilities and limits of life under the most extreme conditions , bearing in mind that it is based on the putative association of the 16S rRNA gene with a taxon defined as an amplicon sequence variant (ASV). The genus-level study of the main waters supplying the valley (the springs, the stream, and the groundwater) identified two main distinct microbial communities, one from brackish waters and the other from salty waters. Even though all the salty waters are considered to have the same origin, there are marked differences in the relative abundance of the ASVs belonging to the main genera, with some of them accounting for less than 1% of the relative abundance at some sampling sites. Thus, a niche effect can also be observed, but with less influence than salinity. In a study of hypersaline soils, site-specific characteristics correlated with community structure while salinity played a secondary role , which is not the case in the Añana Salt Valley. Within the brackish waters, according to the CCA analysis, which relates physico-chemical data to community structure, a subdivision was observed. Specifically, the groundwater obtained from the piezometer exhibited distinct characteristics compared to the other brackish waters. The physico-chemical variables studied explained about 78.6% of the variance in community structure between samples. No specific water physico-chemical variables predicting community structure could be identified. However, a higher relative abundance of the genera Halolamina, Halanaerobium, Halofilum, and Haloarchaeobius was observed when the Ca²⁺ ion concentration was higher. All but Halofilum belonged to the Nanohaloarchaeota, in agreement with Vera-Gargallo et al., who also detected Nanohaloarchaeota associated with Ca²⁺ ions.
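Associations of this kind between genus abundance and ion concentration are assessed here with Spearman's rank correlation, which is simply Pearson's correlation applied to ranks. A pure-Python sketch follows; the abundance and Ca²⁺ values are invented for illustration:

```python
def _ranks(values):
    """Average ranks (1-based); ties share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1              # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical example: genus relative abundance rises with Ca2+ concentration
ca = [12.0, 30.5, 45.1, 80.2, 150.0]
abundance = [0.1, 0.4, 0.6, 2.2, 5.0]
print(spearman_rho(ca, abundance))
```

Because only ranks matter, any strictly monotonic relationship gives rho = 1, which makes the coefficient robust to the skewed abundance distributions typical of amplicon data; the p-value would additionally require a permutation or t-approximation step not shown here.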
Regarding the salty water community, the most abundant archaeal ASVs belonged to the genus Halorubrum, followed by the genus Natronomonas, except in the Santa Engracia spring, where Halohasta was the most abundant. Members of the genus Halorubrum have also been found in high abundance in the few studies carried out using 16S rRNA gene sequencing in continental salterns: at the Redonda and Penalva inland salterns in Spain and at the Ciechocinek Spa on a salt diapir in Poland . In coastal salterns, however, such as the Santa Pola salt marshes , its abundance is of secondary occurrence. Abundance of the genus Halohasta (in this study mainly in the Santa Engracia spring, which is the main brine supplier of the saltern) was also reported in water samples from the Ciechocinek Spa. The authors reported that the genus Halohasta was part of a stable community, together with Natronomonas, Halobacterium, and Haloplanus, in waters with salinities between 16 and 27% . On the other hand, the genus Pseudomonas is also an important part of the prokaryotic community inhabiting the Añana saltern, being present in all the brackish waters, although at different abundances. The genus Pseudomonas has previously been found in hypersaline environments owing to its versatile energy metabolism, which allows it to live in different environments. Finally, the differentiation of the groundwater obtained from the piezometer from the other brackish waters was due to the high relative concentration of sulfate and the high abundance of the genus Undibacterium. These two factors may be related, as seen in the study by Damo et al. , in which an increase in the genus Undibacterium was observed in the rhizosphere where sulfur (oxidized to available sulfate by microbes) was applied.
When comparing the relative abundance of the genera in the salty waters involved in the salt production system (the channel and the brine resting ponds), a decrease was observed in the genera Halohasta and Halomicroarcula, which were abundant in the Santa Engracia spring. The genus Halomicroarcula appeared to be exclusive to groundwater brine samples, as seen in Najjari et al. , who did not find Halomicroarcula in the sediment or halite-crusted samples at the same sampling site. Once in the resting ponds, the relative abundance of the genera Haloplanus and Salinibacter increased in comparison with the salty spring waters. However, it should be noted that water from different salty springs may be mixed in these ponds, although water from Santa Engracia is the most abundant, as previously stated. The genus Salinibacter is a bacterium that can be found even in the saltiest waters; in the case of Margherita di Savoia (Italy), it reaches its highest abundance in saline ponds with a salinity of 30.6% . In addition, a number of species of the genus Haloplanus have even been isolated from commercial salt crystals . These results may indicate that some abundant microbial genera found in the resting ponds play a role in salt production in the Añana Salt Valley, although their importance remains unknown. Lastly, the high percentage of ASVs assigned to Unclassified genera (39.3%), especially in the El Pico Dulce spring (43.0%), together with the large number of taxa grouped as Uncultured (12.6%) in the Santa Engracia spring, was striking in this study. This could mean that they correspond to sequences of taxa (genera or species) that have not yet been identified. Culture-based microbiological studies carried out in this saltern support this hypothesis . It could also mean that dominant or abundant taxa could be overtaken by new, as yet unidentified taxa, as mentioned by Oren (2006) .
Prediction of Microbial Functions

According to the metabolic prediction performed on the 16S rRNA gene datasets in this study, the significant increase in pyruvate fermentation to propanoate I found in the salty water samples is striking, as fermentative halophilic archaea are very rare . However, some genera of halophilic archaea, such as Halorubrum or Haloarcula, which are largely found in the salty water samples of this study, have the ability to convert sugars to pyruvate . It was shown in Halobacterium salinarum that pyruvate can be utilized under anaerobic conditions , although the understanding of pyruvate transport in halophilic archaea is very limited . It is important to acknowledge, however, that despite the established efficacy of metabolic inference based on 16S rRNA gene datasets, this technique has inherent limitations, including the availability of correctly sequenced published genomes and high genomic plasticity . Taxonomically inferred metabolic prediction showed that the microbiome involved in chemoheterotrophy and fermentation was the most abundant and widespread along the valley. It is known that most halophilic prokaryotes are aerobic chemoorganotrophs, capable of degrading organic compounds up to NaCl saturation . However, oxygen is poorly soluble in brines, allowing anaerobic halophilic heterotrophs to thrive and actively coexist in the same niche. According to Wang and Bao , chemoheterotrophy is the main bacterial function and also predominates in archaea, while fermentation is widespread among bacterial functions. The present study also found that the salty water samples, dominated by archaea, harbored taxa involved in phototrophy. Many haloarchaea are known to have the ability to grow phototrophically, making them physiologically versatile microorganisms . The chloroplast function detected in the El Pico spring was also noteworthy.
It seems plausible to suggest that the detection of chloroplast function is related to the presence of cyanobacteria, given the similarity between the genetic material of chloroplasts and that of certain bacteria . Therefore, errors in taxonomic assignment may occur . This phenomenon was observed in certain instances in the taxonomic assignment, with some sequences identified as cyanobacteria at phylum level being erroneously assigned as chloroplasts. Consequently, as the identification of metabolic functions in this study is based on taxonomic assignment, this inaccuracy can also be extrapolated to the determination of functions. Microbiota involved in nitrogen metabolism were also found, especially in the brackish waters of the San Juan stream and the El Pico Dulce spring, where the genus Pseudomonas was widespread. In one study, Pseudomonas was the dominant genus in epiphytic bacterial communities, suggesting that the epiphytic bacteria of submerged plants may play an important role in denitrification . This may be the case at these two sampling sites, which are surrounded by local vegetation and whose waters are influenced by shallower water flows carrying nitrogenous compounds from agricultural practices. Lastly, water from the S8 piezometer sampling site showed a greater abundance of microbiota involved in animal parasitism or symbiosis and human pathogenicity, due to the presence of Bacteroides and Lactobacillus, both genera related to the gut microbiota . This supports the idea of the presence of lateral-flow waters, without an upward component, coming from shallower areas to the east of the diapiric structure, where several livestock farms are located. These findings on the putative ecological functions of the prokaryotic community can be considered further evidence supporting the proposed origin of the water samples studied in this work.
The Añana Salt Valley is a continental solar saltern with a thalassohaline composition. Physico-chemical monitoring of the main watercourses that feed the valley has revealed the presence of salty and brackish waters of different origins. This may be attributed to the existence of diverse pathways for unsaturated water to infiltrate the salt deposits within the subsoil, where salt dissolution occurs. . This process ultimately gives rise to the emergence of springs, characterized by the emanation of water with varying salinities as a consequence of the salinization of subterranean waters. Consequently, it can be assumed that the different water flows through the Añana diapiric structure are responsible for the observed differences in the hydrochemical characteristics of the spring water. The dissolution of evaporite (halite) hundreds of meters deep in the diapir structure is characterized by a stable ionic composition such as Cl − , Na + , and K + , observed in salty waters. On the other hand, the dissolution of gypsum and anhydrite, and of [12pt]{minimal} $${}_{3}^{-}$$ NO 3 - ions, are the result of mixing of deep flows with shallower flows, characterizing the brackish waters, with comparatively higher presence of [12pt]{minimal} $${}_{4}^{2-},$$ SO 4 2 - , Ca 2 , + and Mg 2+ ions. The low variability of the salt facies is a peculiarity of this valley and indicates a stable depositional environment, contrary to what has been described in other diapiric environments (e.g., Zechstein 2, Germany) . The differences in the physico-chemical composition of the waters showed that the main factor influencing the prokaryotic distribution in this valley was the salinity, as shown by the beta diversity indices. In fact, this microbial distribution did not appear to be influenced by parameters such as pH, temperature or even the site itself. This fact was also observed in the studies by Lozupone and Knight and Leoni et al. . 
Regarding alpha diversity indices, samples with lower salinity showed higher values than saltier ones, as in other locations such as the Arava Valley (between the Dead Sea and the Red Sea, Israel) and the Saltern of Margherita di Savoia (Italy). In fact, the El Pico Dulce spring (10 g/L salinity) had the highest values of richness, diversity, and evenness. On the contrary, the El Pico spring (205 g/L salinity, situated only two meters away from the El Pico Dulce spring) had the lowest richness values. This could be partially explained by the greater salinity stress experienced by microorganisms living in high-salinity environments, which may limit diversity due to an energetically costly lifestyle. However, other aspects, such as the availability of nutrients and oxygen, which were not determined in this study, must be considered among the main limiting factors of microbial diversity in waters coming from very deep flows, as is the case for the water of this spring. Interestingly, the lowest diversity and evenness were found in the San Juan stream (brackish water), even though the measured physico-chemical characteristics of the water at this sampling site were similar to those of the El Pico Dulce spring. This suggests the presence of dominant taxa in the San Juan stream, in this case the genus Arcobacter. The characteristics of the sampling location mean that stream water remains in contact with the surface for a longer period than El Pico Dulce spring water, allowing it to interact with surrounding factors (e.g., flora and rocks) for longer. This could give certain microorganisms present in the surface areas the opportunity to significantly increase their population and become dominant.
In this case, the adjacent vegetation could provide an opportunity for the Arcobacter genus to increase its population and become dominant due to a larger surface area for adhesion, which could favor the formation of biofilms, an ability described in this genus. In addition, a microcosm experiment in which nitrogen was added revealed a significant decline in OTU (operational taxonomic unit) richness, accompanied by a notable increase in the prevalence of the genus Arcobacter. Given that Arcobacter is known to oxidize sulfur as well as reduce nitrate, the authors proposed that sulfur may serve as an electron donor in denitrification but may also inhibit the reduction of nitrous oxide to nitrogen gas. This highly specific niche may account for the pronounced reduction in species richness observed in the experiment, potentially linked to trace element accessibility. It is plausible that a similar scenario is occurring in the San Juan stream. The taxonomic approach performed in this study showed that the prokaryotic communities of the salty waters of this saltern were essentially restricted to the domain Archaea. However, bacteria were present at all sampling sites and were particularly abundant in brackish waters. This distribution has been described previously in other hypersaline environments, confirming that salinity shapes the microbial community. Furthermore, the main phyla found in the valley were Pseudomonadota and Halobacteriota. The archaeal phylum Halobacteriota is the best-known group of extreme halophiles, and the predominance of the Pseudomonadota phylum in hypersaline environments has also been observed. The study of the prokaryotic community by Illumina-based 16S rRNA gene sequencing in hypersaline environments reveals the possibilities and limits of life under the most extreme conditions, bearing in mind that it is based on the putative association of the 16S rRNA gene with a taxon defined as an amplicon sequence variant (ASV).
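The richness, diversity, and evenness indices discussed above can be computed directly from ASV/OTU count vectors. A minimal sketch using Shannon diversity and Pielou evenness (common choices; the counts below are invented for illustration, not data from this study):

```python
import math

def alpha_diversity(counts):
    """Return (richness, Shannon H', Pielou evenness) for one sample's counts."""
    counts = [c for c in counts if c > 0]
    n = sum(counts)
    richness = len(counts)  # number of observed taxa
    shannon = -sum((c / n) * math.log(c / n) for c in counts)
    evenness = shannon / math.log(richness) if richness > 1 else 0.0
    return richness, shannon, evenness

# A community dominated by one taxon (cf. Arcobacter in the San Juan stream)
# scores lower evenness than a balanced community of the same richness.
dominated = [970, 10, 10, 10]
balanced = [250, 250, 250, 250]
print(alpha_diversity(dominated)[2] < alpha_diversity(balanced)[2])  # True
```

This is why the San Juan stream can share richness-relevant conditions with El Pico Dulce yet still show the lowest evenness: the index penalizes a single dominant taxon.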
The genus-level study of the main waters supplying the valley (springs, the stream, and the groundwater) identified two main distinct microbial communities, one from brackish waters and the other from salty waters. Even though all the salty waters are considered to have the same origin, there are marked differences in the relative abundance of the ASVs belonging to the main genera, with some of them accounting for less than 1% of the relative abundance at some sampling sites. Thus, a niche effect can also be observed, but with less influence than salinity. A study of hypersaline soils found that site-specific characteristics correlated with community structure while salinity played a secondary role, which is not the case in the Añana Salt Valley. Within the brackish waters, the CCA analysis, which relates physico-chemical data to community structure, revealed a subdivision. Specifically, the groundwater obtained from the piezometer exhibited distinct characteristics compared to the other brackish waters. The physico-chemical variables studied explained about 78.6% of the variance in community structure between samples. No specific water physico-chemical variable predicting community structure could be identified. However, a higher relative abundance of the genera Halolamina , Halanaerobium , Halofilum , and Haloarchaeobius was observed when the Ca²⁺ ion concentration was higher. All but Halofilum belonged to the Nanohaloarchaeota , in agreement with Vera-Gargallo et al., who also detected Nanohaloarchaeota associated with Ca²⁺ ions. Regarding the salty waters community, the most abundant archaeal ASVs belonged to the genus Halorubrum , followed by the genus Natronomonas , except in the Santa Engracia spring, where Halohasta was the most abundant.
Members of the genus Halorubrum have also been found in high abundance in the few studies carried out using 16S rRNA gene sequencing in continental salterns, at the Redonda and Penalva inland salterns in Spain and at the Ciechocinek Spa on a salt diapir in Poland. However, its abundance in coastal salterns, such as the Santa Pola salt marshes, is only secondary. Abundance of the genus Halohasta (in this study mainly in the Santa Engracia spring, which is the main brine supplier of the saltern) was also reported in water samples from the Ciechocinek Spa. The authors reported that the genus Halohasta was part of a stable community together with Natronomonas , Halobacterium , and Haloplanus in waters with salinities between 16 and 27%. On the other hand, the genus Pseudomonas is also an important part of the prokaryotic community inhabiting Añana's saltern, being present in all the brackish waters, although at different abundances. The genus Pseudomonas has previously been found in hypersaline environments thanks to its versatile energy metabolism, which allows it to live in diverse environments. Finally, the differentiation of the groundwater obtained from the piezometer from the other brackish waters was due to the high relative concentration of sulfate and the high abundance of the genus Undibacterium . These two factors may be related, as seen in the study by Damo et al., in which an increase of the genus Undibacterium was observed in the rhizosphere where sulfur (oxidized to available sulfate by microbes) was applied. When comparing the relative abundance of the genera in the salty waters involved in the salt production system (the channel and brine resting ponds), a decrease was observed in the genera Halohasta and Halomicroarcula , which were abundant in the Santa Engracia spring. The Halomicroarcula genus appeared to be exclusive to groundwater brine samples, as also seen in Najjari et al.
, where Halomicroarcula was not found in the sediment or halite-crusted samples at the same sampling site. Once in the resting ponds, the relative abundance of the genera Haloplanus and Salinibacter increased in comparison to the salty spring waters. However, it should be noted that water from different salty springs could be mixed in these ponds, although water from Santa Engracia is the most abundant, as previously stated. The genus Salinibacter is a bacterium that can be found even in the saltiest waters; in the case of Margherita di Savoia (Italy), it reaches its highest abundance in saline ponds with a salinity of 30.6%. In addition, a number of species of the genus Haloplanus have even been isolated from commercial salt crystals. These results may indicate that some abundant microbial genera found in the resting ponds play a role in salt production at the Añana Salt Valley, although their importance remains unknown. Lastly, the high percentage of ASVs assigned to Unclassified genera (39.3%), especially in the El Pico Dulce spring (43.0%), together with the large number of taxa grouped as Uncultured (12.6%) in the Santa Engracia spring, was striking in this study. These may correspond to sequences of taxa (genera or species) that have not yet been identified; culture-based microbiological studies carried out in this saltern support this hypothesis. If so, currently dominant or abundant taxa could be overtaken by new, as-yet-unidentified taxa, as mentioned by Oren (2006). According to the metabolic prediction performed on the 16S rRNA gene datasets in this study, the significant increase in the pyruvate fermentation to propionate I pathway in the salty water samples is striking, as fermentative halophilic archaea are very rare. However, some genera of halophilic archaea, such as Halorubrum or Haloarcula , which are largely found in the salty water samples of this study, have the ability to convert sugars to pyruvate.
It was shown in Halobacterium salinarum that pyruvate can be utilized under anaerobic conditions, although the understanding of pyruvate transport in halophilic archaea is very limited. However, it is important to acknowledge that, despite the established usefulness of metabolic inference based on 16S rRNA gene datasets, this technique has inherent limitations, including its dependence on the availability of correctly sequenced published genomes and on high genomic plasticity. Taxonomy-inferred metabolic prediction showed that the microbiome involved in chemoheterotrophy and fermentation was the most abundant and widespread along the valley. It is known that most halophilic prokaryotes are aerobic chemoorganotrophs, capable of degrading organic compounds up to NaCl saturation. However, oxygen is poorly soluble in brines, allowing anaerobic halophilic heterotrophs to thrive and actively coexist in the same niche. According to Wang and Bao, chemoheterotrophy is the main bacterial function, but it also predominates in archaea, whereas fermentation is widespread among bacterial functions. The present study also found that salty water samples, dominated by archaea, had taxa involved in phototrophy. Many haloarchaea are known to have the ability to grow phototrophically, making them physiologically versatile microorganisms. Chloroplast function detected in the El Pico spring was also noteworthy. It seems plausible that the detection of chloroplast function is related to the presence of cyanobacteria, given the similarity between the genetic material of chloroplasts and that of certain bacteria; therefore, errors in taxonomic assignment may occur. This was observed in certain instances, with some sequences identified as cyanobacteria at the phylum level being erroneously assigned as chloroplasts.
Consequently, as the identification of metabolic functions is based on taxonomic assignment in this study, this inaccuracy can also be extrapolated to the determination of functions. Microbiota involved in nitrogen metabolism were also found, especially in the brackish waters of the San Juan stream and El Pico Dulce, with the genus Pseudomonas found to be widespread. In one study, Pseudomonas was the dominant genus in epiphytic bacterial communities, suggesting that the epiphytic bacteria of submerged plants may play an important role in denitrification. This may be the case at these two sampling sites, which are surrounded by local vegetation and whose waters are influenced by shallower water flows carrying nitrogenous compounds from agricultural practices. Lastly, water from the S8 piezometer sampling site showed a greater abundance of microbiota involved in animal parasitism or symbiosis and human pathogenicity, due to the presence of Bacteroides and Lactobacillus , both genera related to the gut microbiota. This supports the idea of lateral water flows, without an upward component, coming from shallower areas to the east of the diapiric structure, where several livestock farms are located. These findings on the putative ecological functions of the prokaryotic community can be considered further evidence supporting the proposed origins of the water samples studied in this work. The study of the prokaryote community and its distribution, together with the physico-chemical analysis of the main waters supplying the valley, distinguishes salty and brackish waters. Archaea were mainly restricted to salty water, whereas bacteria were present at all sampling sites.
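Taxonomy-based functional inference of the kind discussed above amounts to mapping assigned taxa onto a curated table of known functions and aggregating abundances. A toy sketch with a hypothetical mini-table (the genus-function pairings echo the text, but both the table and the counts are invented for illustration; real pipelines such as FAPROTAX use far larger curated databases):

```python
from collections import defaultdict

# Hypothetical genus -> putative-function table (illustrative only).
FUNCTIONS = {
    "Pseudomonas": {"chemoheterotrophy", "nitrogen_metabolism"},
    "Halorubrum": {"chemoheterotrophy", "fermentation"},
    "Bacteroides": {"animal_gut_associated"},
}

def functional_profile(genus_counts):
    """Aggregate genus-level counts into putative function abundances."""
    profile = defaultdict(int)
    for genus, count in genus_counts.items():
        for fn in sorted(FUNCTIONS.get(genus, ())):  # unknown genera add nothing
            profile[fn] += count
    return dict(profile)

sample = {"Pseudomonas": 40, "Halorubrum": 60, "UnclassifiedX": 25}
print(functional_profile(sample))
```

Note how unclassified taxa contribute nothing to the profile, which is exactly why the high proportion of Unclassified and Uncultured ASVs, and any misassignments such as cyanobacteria labeled as chloroplasts, propagate directly into the predicted functions.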
Considering that all the salty spring waters share the same origin, given their physico-chemical similarities, the differences observed at the prokaryotic genus level could be due to site-specific characteristics, suggesting the enormous and still unknown importance of niche specificity. However, our results do support the hypothesis that the origin of the water and the associated salinity stress shape the microbiome beyond the niche location in the valley. This is evident in the El Pico and El Pico Dulce springs, located just two meters from each other in the valley, which have different water origins and different microbial compositions. Further studies are needed, including more groundwater samples from other piezometers located in the valley, to gain insight into the water flow scheme, together with physiology and ecology studies of microbial taxa and the isolation of new, undescribed microorganisms. Below is the link to the electronic supplementary material. Supplementary file1 (PDF 496 KB) Supplementary file2 (XLSX 219 KB)
A Randomized Hybrid‐Effectiveness Trial Comparing Pharmacogenomics (
Introduction Twenty percent of US adults experienced chronic pain in 2021. Chronic pain has pervasive effects on quality of life, and treatment is notoriously difficult. Opioids are effective for moderate‐to‐severe acute pain, and their use in chronic pain has garnered enhanced scrutiny in light of the opioid epidemic. Hydrocodone and tramadol are the most widely prescribed opioids in the United States, prescribed to 8.6 and 5 million patients in 2021, respectively. Prior studies indicate that clinical responses to hydrocodone and tramadol are associated with genetic polymorphisms in cytochrome P450 2D6 ( CYP2D6 ). The CYP2D6 enzyme bioactivates hydrocodone, tramadol, and codeine to more potent metabolites responsible for pain relief. These metabolites, hydromorphone, O‐desmethyl tramadol, and morphine, have a 100‐, 200‐, and 200‐fold higher affinity for the μ‐opioid receptor, respectively. Patients with reduced CYP2D6 activity, termed CYP2D6 intermediate metabolizers (IMs) and poor metabolizers (PMs), may be at increased risk of therapeutic failure with these medications. Smith et al. conducted a non‐randomized, cluster‐design, pragmatic proof‐of‐concept trial that found CYP2D6‐guided analgesic prescribing was associated with improved pain symptoms in CYP2D6 IM/PMs prescribed tramadol or codeine compared to usual care. A post hoc analysis identified that CYP2D6‐guided care may also improve pain symptoms among those prescribed hydrocodone. Although these findings are encouraging, a randomized trial could provide more conclusive evidence about the benefit of PGx testing. Implementation methods also need to be tested. The utilization of pharmacogenomics (PGx; the use of genetics to guide medication therapy) has been impeded by a lack of proven strategies to integrate PGx into care.
We performed a randomized type 2 hybrid‐effectiveness trial to identify the effects of providing PGx results and recommendations for patients with CYP2D6 IM/PM phenotypes and chronic pain treated in primary care practices compared to standard care. Methods 2.1 Study Design The Pharmacogenomics Applied to Chronic Pain Treatment in Primary Care (PGx‐ACT) trial was an open‐label randomized type 2 hybrid‐effectiveness trial (NCT04685304). A type 2 hybrid‐effectiveness trial examines both implementation outcomes (e.g., prescribing alignment with PGx recommendations) and clinical outcomes (e.g., pain intensity). Participants were randomized to PGx vs. standard care (Figure ). The trial was designed not to require in‐person activities beyond usual care. A statistician not involved in the study randomized participants in mixed blocks of 6 and 4. Allocation was stratified by baseline opioid (tramadol/codeine vs. hydrocodone), with 1:1 randomization within each stratum. 2.2 Population Eligibility included patients 18 years or older with chronic pain (i.e., pain lasting 3 or more months) who were prescribed tramadol, hydrocodone, or codeine. Patients had to be treated at a participating primary care practice site and prescribed one of the target opioids by a provider within the MedStar Health system (additional eligibility details in the supplement). Providers could have participants randomized to either arm. The study was approved by the IRB at MedStar Health Research Institute, and all procedures were in accordance with the ethical standards of the Declaration of Helsinki. Electronic health records (EHR) were screened for patients at participating sites prescribed tramadol, codeine, or hydrocodone. Multiple recruitment modalities were used, including provider referrals, patient portal messaging, emails, text messages, and telephone calls. Participants provided physically or electronically signed informed consent before any study intervention (e.g., sample collection).
Participants received $25 after trial completion. 2.3 Pharmacogenetic Testing and CYP2D6 Phenotype Assignment PGx testing was performed in a Clinical Laboratory Improvement Amendments (CLIA)‐certified laboratory (Kailos Genetics Inc.) using targeted next‐generation sequencing of select genes (i.e., CYP2C19 , CYP2C9 , CYP2D6 , CYP3A4 , CYP3A5 , SLCO1B1 , TPMT , VKORC1 ). Allele coverage is available in Table , with all tier one CYP2D6 alleles covered. Allele function was assigned per the Clinical Pharmacogenetics Implementation Consortium (CPIC). CYP2D6 phenotype was assigned by CYP2D6 activity score thresholds per the most recent CPIC guidelines: 0, PM; 0.25–1, IM; 1.25–2.25, NM; above 2.25, UM. In cases with gene multiplications, ranged phenotypes (e.g., IM to NM) communicated possible activity score ranges. Drug interactions were incorporated into CYP2D6 phenotype assignment. CYP2D6 inhibitors can reduce enzyme function relative to genotype. This process, called phenoconversion, was accounted for similarly to prior work. In brief, CYP2D6 activity scores were converted to zero in the presence of FDA‐defined strong CYP2D6 inhibitors (i.e., bupropion, fluoxetine, quinidine, paroxetine, terbinafine). For moderate CYP2D6 inhibitors (i.e., abiraterone, cinacalcet, duloxetine, fluvoxamine, lorcaserin, mirabegron), the CYP2D6 activity score was multiplied by 0.5, often leading to CYP2D6 NMs becoming IMs. 2.4 Intervention and Implementation Strategy Prescribing decisions were at the treating provider's discretion. Participants provided a buccal sample via at‐home mailing kits or in‐office collection. Participants were randomized to PGx or standard care after the laboratory's receipt of their sample. The PGx arm had samples processed and results released as soon as they were available. Results were released for the standard care participants upon completion of active participation (i.e., 3 months after baseline). A multimodal approach incorporated PGx results into clinical care.
First, results (e.g., genotype, phenotype, activity score) were entered as structured data into the results review section of the EHR within a “Pharmacogenomics” tab. Although the trial focused on CYP2D6 , structured data were also entered for CYP2C9 , CYP2C19 , CYP3A5 , SLCO1B1 , and TPMT . Second, targeted posttest interruptive electronic clinical decision support (CDS) alerts provided recommendations for alternative therapy upon order entry of tramadol or codeine for patients with CYP2D6 PM or UM phenotypes based on CYP2D6 genotype (Figure ). The alerts triggered for any patient regardless of study enrollment; therefore, therapeutic recommendations in alerts aligned with CPIC guidelines. Alerts did not account for phenoconversion and did not trigger for other medications related to the treatment of pain. Third, PGx‐trained pharmacists used PGx results to create a consultation note (PharmD consult) to aid providers in interpreting and applying PGx results. PharmD consults were delivered as results became available and were asynchronous to patient visits. After discussion with providers before the trial, the PharmD consult was placed as a consultation note in the EHR and sent to the primary care provider (PCP), the provider who ordered the opioid (if different from the PCP), and any other relevant provider per the pharmacist's discretion. A PGx‐trained pharmacist was available upon request to discuss results with patients/providers directly, but this was rarely utilized. Pharmacists used CYP2D6 phenotypes informed by genotype and concomitant CYP2D6 inhibitor use. Recommendations are shown in Table and were based on CPIC guidelines and the findings from the prior prospective trial by Smith et al. Due to varying pain etiologies and treatment histories, when alternative therapy was recommended, it included broad options for providers to select from.
For example, “select a non‐opioid analgesic (e.g., naproxen, gabapentin, acetaminophen) or a different opioid (e.g., morphine, hydromorphone, oxycodone).” We did not recommend oxycodone in UMs. Recommendations were provided for additional drug–gene pairs per CPIC guidelines if the patient had a documented condition treated by a medication with CPIC guidelines. This was intended to provide the best patient‐centered care, and it is possible other drug–gene pairs could contribute to pain symptoms (e.g., CYP2C9 ‐NSAIDs, CYP2D6 / CYP2C19 ‐antidepressants). Results for other drug–gene pairs are beyond the scope of this work. 2.5 Outcomes The outcomes address multiple domains of the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE‐AIM) framework (Table ), focusing on the effectiveness and implementation domains. The primary analysis was a modified intention‐to‐treat (mITT) analysis of those with a CYP2D6 IM/PM result who completed the trial (i.e., the follow‐up survey at 3 months). The primary endpoint was the change in the Patient Reported Outcome Measurement Information System (PROMIS) pain intensity composite among CYP2D6 IM/PMs between baseline and 3 months. The patient‐reported outcome (PRO) instruments utilized at baseline and 3 months were the PROMIS‐29 Profile v2.0 and the PROMIS Scale v1.0 Pain Intensity 3a. The pain intensity composite was the average of the current, worst, and average pain intensity over the past 7 days. The baseline visit was defined as the next scheduled appointment with the outpatient provider who prescribed the opioid of interest (i.e., the visit after enrollment), provided it was ≥ 10 days after the lab received the sample. The delay between enrollment and baseline visits allowed time to process PGx results and provide a PharmD consult note, which ensured PGx result availability in the PGx arm. The outpatient provider and the participant determined the timing of baseline visits through usual care practices.
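As a concrete restatement of the rules in Section 2.3, the activity-score-to-phenotype mapping with phenoconversion can be sketched as follows. The thresholds and inhibitor lists are those stated above; the function itself is an illustrative sketch, not the trial's actual software:

```python
# CPIC activity-score thresholds and the trial's phenoconversion rules,
# restated from Section 2.3 (illustrative sketch only).
STRONG_INHIBITORS = {"bupropion", "fluoxetine", "quinidine", "paroxetine", "terbinafine"}
MODERATE_INHIBITORS = {"abiraterone", "cinacalcet", "duloxetine",
                       "fluvoxamine", "lorcaserin", "mirabegron"}

def cyp2d6_phenotype(activity_score, comeds=()):
    """Map a CYP2D6 activity score to a phenotype, adjusting for phenoconversion."""
    comeds = {d.lower() for d in comeds}
    if comeds & STRONG_INHIBITORS:      # strong inhibitor: score forced to 0
        activity_score = 0.0
    elif comeds & MODERATE_INHIBITORS:  # moderate inhibitor: score halved
        activity_score *= 0.5
    if activity_score == 0:
        return "PM"
    if activity_score <= 1:             # 0.25-1 per CPIC
        return "IM"
    if activity_score <= 2.25:          # 1.25-2.25 per CPIC
        return "NM"
    return "UM"                         # above 2.25

print(cyp2d6_phenotype(2.0))                  # NM
print(cyp2d6_phenotype(2.0, ["duloxetine"]))  # score halved to 1.0 -> IM
print(cyp2d6_phenotype(2.0, ["paroxetine"]))  # strong inhibitor -> PM
```

The duloxetine example shows the "NMs becoming IMs" effect the text describes for moderate inhibitors.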
The study coordinator distributed questionnaires electronically and, if necessary, via phone or in person. Planned secondary endpoints, assessed in CYP2D6 IM/PMs between baseline and 3 months, included: change in morphine milligram equivalents (MMEs) prescribed, pain interference (how much pain affects a patient's quality of life), physical function, the proportion with a ≥ 30% improvement in pain intensity (a clinically meaningful improvement), and the proportion with PGx‐aligned care. PGx‐aligned care was defined as pharmacotherapy concordant with the PGx recommendations provided in the PharmD consults. It is possible providers made pharmacotherapy decisions for participants in the standard care arm that happened to align with PGx results without knowledge of those results. An intention‐to‐treat (ITT) analysis was performed among all randomized participants with pain intensity composite data available at baseline and follow‐up. Implementation metrics included turnaround times between steps within the workflow, recruitment success, clinician responses to CDS alerts, the proportion of PGx sample collection kits returned, and the number of pharmacists delivering PharmD consults. Data were securely stored in Research Electronic Data Capture (REDCap). 2.6 Analysis The primary analysis used penalized multiple linear regression with elastic‐net methods to assess the effect of covariates on the primary endpoint, the change in the pain intensity composite. The trial had 80% power to detect an effect size of 0.6 at an alpha of 0.05 with 90 IM/PMs completing the trial. This large effect size was selected based on a prior nonrandomized prospective study's results; further sample size calculation details are available in the supplement. Elastic‐net methods shrink regression coefficients toward zero, thereby identifying the variables with the most impact.
Specifically, covariates such as age, sex, self‐reported race, opioid type, and baseline PROMIS measures were included in the initial model, along with the interactions of trial arm with age, sex, and race. In addition, site was included as a random effect to assess whether between‐site variability could be ignored. Two planned sensitivity analyses were performed. They repeated the primary analysis after (1) excluding participants who had unplanned surgical procedures that typically necessitate postoperative opioids during the study period (sensitivity analyses 1 and 2) and (2) additionally excluding participants whose baseline visit fell between questionnaires but > 7 days from the baseline questionnaire (risk: symptoms may not reflect symptoms at the time of the baseline visit) or whose baseline visit was with a non‐PCP or non‐opioid‐ordering provider (risk: bias toward the null, as the provider may have been less likely to modify pain management; sensitivity analysis 2). Secondary analyses are considered preliminary and utilized standardized differences (SD), as SD is not dependent on sample size. For proportions, SD is the difference in proportions divided by the pooled standard deviation: SD_P = (P1 − P2)/sqrt((variance1 + variance2)/2). Standard deviation and interquartile range (IQR) are reported as appropriate. Descriptive statistics were reported for implementation metrics.
Allocation was stratified by baseline opioid, with 1:1 to tramadol or codeine to hydrocodone, respectively. Population Eligibility included patients 18 years or older with chronic pain (i.e., pain lasting for 3 or more months) who were prescribed either tramadol, hydrocodone, or codeine. Patients had to be treated at a participating primary care practice site and prescribed one of the target opioids by a provider within the MedStar Health system (additional eligibility details in supplement). Providers could have participants randomized to either arm. The study was approved by the IRB at MedStar Health Research Institute, and all procedures were in accordance with the ethical standards of the Declarations of Helsinki. Electronic health records (EHR) were screened for patients at participating sites prescribed tramadol, codeine, or hydrocodone. Multiple recruitment modalities were used, including provider referrals, patient portal messaging, emails, text messages, and telephone calls. Participants provided physically or electronically signed informed consent before any study intervention (e.g., sample collection). Participants received $25 after trial completion. Pharmacogenetic Testing and CYP2D6 Phenotype Assignment PGx testing was performed in a Clinical Laboratory Improvement Amendments (CLIA) certified laboratory (Kailos Genetics Inc.) using targeted next‐generation sequencing on select genes (i.e., CYP2C19 , CYP2C9 , CYP2D6 , CYP3A4 , CYP3A5 , SLCO1B1 , TPMT , VKORC1 ). Allele coverage is available in Table , with all tier one CYP2D6 alleles covered Allele function was assigned per the Clinical Pharmacogenetics Implementation Consortium (CPIC) . CYP2D6 phenotype was assigned by CYP2D6 activity score thresholds per the most recent CPIC guidelines: 0, PM; 0.25–1, IM; 1.25–2.25, NM; above 2.25, UM . In cases with gene multiplications, ranged phenotypes (e.g., IM to NM) communicated possible activity score ranges. 
Drug interactions were incorporated into CYP2D6 phenotype assignment. CYP2D6 inhibitors can reduce enzyme function compared to genotype . This process, called phenoconversion, was accounted for similar to prior work . In brief, CYP2D6 activity scores were converted to zero in the presence of FDA‐defined strong CYP2D6 inhibitors (i.e., bupropion, fluoxetine, quinidine, paroxetine, terbinafine) . For moderate CYP2D6 inhibitors (i.e., abiraterone, cinacalcet, duloxetine, fluvoxamine, lorcaserin, mirabegron), the CYP2D6 activity score was multiplied by 0.5, often leading to CYP2D6 NMs becoming IMs. Intervention and Implementation Strategy Prescribing decisions were at the treating provider's discretion. Participants provided a buccal sample via at‐home mailing kits or in‐office collection. Participants were randomized to PGx or standard care after the laboratory receipt of their sample. The PGx arm had samples processed and released as soon as they were available. Results were released for the standard care participants upon completion of active participation (i.e., 3 months after baseline). A multimodal approach incorporated PGx results into clinical care. First, results (e.g., genotype, phenotype, activity score) were entered as structured data into the results review section of the EHR within a “Pharmacogenomics” tab. Although the trial focused on CYP2D6 , structured data were also entered for CYP2C9 , CYP2C19 , CYP3A5 , SLCO1B1 , and TPMT . Second, targeted posttest interruptive electronic clinical decision support (CDS) alerts provided recommendations for alternative therapy upon order entry of tramadol or codeine for patients with CYP2D6 PM or UM phenotypes based on CYP2D6 genotype (Figure ). The alerts triggered for any patient regardless of study enrollment. Therefore, therapeutic recommendations in alerts aligned with CPIC guidelines . Alerts did not account for phenoconversion and did not trigger other medications related to the treatment of pain. 
Third, PGx‐trained pharmacists used PGx results to create a consultation note (PharmD consult) to aid providers in interpreting and applying PGx results. PharmD consults were delivered as results were available and were asynchronous to patient visits. After discussion with providers before the trial, the PharmD consult was placed as a consultation note in the EHR and sent to the primary care provider (PCP), the provider who ordered the opioid (if different from the PCP), and any other relevant provider per the pharmacist's discretion. A PGx‐trained pharmacist was available upon request to discuss results with patients/providers directly, but this was rarely utilized. Pharmacists used CYP2D6 phenotypes informed by genotype and concomitant CYP2D6 inhibitor use. Recommendations are shown in Table and were based on CPIC guidelines and the findings from the prior prospective trial by Smith et al. . Due to varying pain etiologies and treatment histories, when alternative therapy was recommended, it included broad options for providers to select from. For example, “select a non‐opioid analgesic (e.g., naproxen, gabapentin, acetaminophen) or a different opioid (e.g., morphine, hydromorphone, oxycodone).” We did not recommend oxycodone in UMs. Recommendations were provided for additional drug‐gene pairs per CPIC guidelines If the patient had a documented condition treated by a medication with CPIC guidelines. This was intended to provide the best patient‐centered care, and it is possible other drug–gene pairs could contribute to pain symptoms (e.g., CYP2C9 ‐NSAIDs, CYP2D6 / CYP2C19 ‐antidepressants). Results for other drug–gene pairs are beyond the scope of this work. Outcomes The outcomes address multiple domains of the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE‐AIM) framework (Table ), focusing on the effectiveness and implementation domains . 
The primary analysis was a modified intention-to-treat (mITT) analysis of those with a CYP2D6 IM/PM result who completed the trial (i.e., the follow-up survey at 3 months). The primary endpoint was the change in the Patient-Reported Outcomes Measurement Information System (PROMIS) pain intensity composite among CYP2D6 IM/PMs between baseline and 3 months. The patient-reported outcome (PRO) instruments utilized at baseline and 3 months were the PROMIS-29 Profile v2.0 and the PROMIS Scale v1.0 Pain Intensity 3a. The pain intensity composite was the average of the current, worst, and average pain intensity over the past 7 days. The baseline visit was defined as the next scheduled appointment with the outpatient provider who prescribed the opioid of interest (i.e., the visit after enrollment), provided it was ≥ 10 days after the lab received the sample. The delay between enrollment and baseline visits allowed time to process PGx results and provide a PharmD consult note, which ensured PGx result availability in the PGx arm. The outpatient provider and the participant determined the timing of baseline visits through usual care practices. The study coordinator distributed questionnaires electronically and, if necessary, via phone or in person. Planned secondary endpoints, assessed in CYP2D6 IM/PMs between baseline and 3 months, included: change in morphine milligram equivalents (MMEs) prescribed, pain interference (how much pain affects a patient's quality of life), physical function, the proportion with a ≥ 30% improvement in pain intensity (a clinically meaningful improvement), and the proportion with PGx-aligned care. PGx-aligned care was defined as pharmacotherapy concordant with the PGx recommendations provided in the PharmD consults. It is possible providers made pharmacotherapy decisions for participants in the standard care arm that happened to align with PGx results without knowledge of PGx results.
An intention-to-treat (ITT) analysis was performed among all randomized participants with pain intensity composite data available at baseline and follow-up. Implementation metrics included turnaround times between various steps within the workflow, recruitment success, clinician responses to CDS alerts, the proportion of PGx sample collection kits returned, and the number of pharmacists delivering PharmD consults. Data were securely stored in Research Electronic Data Capture (REDCap). Analysis The primary analysis used penalized multiple linear regression with elastic-net methods to assess the effect of other covariates on the primary endpoint, the change in pain intensity composite. This trial possessed 80% power to detect an effect size of 0.6 at an alpha of 0.05 when 90 IM/PMs completed the trial. This large effect size was selected based on a prior nonrandomized prospective study's results; further sample size calculation details are available in the supplement. Elastic-net methods shrink regression coefficients toward zero, thereby identifying the variables with the most impact. Specifically, covariates such as age, sex, self-reported race, opioid type, and baseline PROMIS measures were included in the initial model, along with the interactions of trial arm with age, sex, and race. In addition, site was included as a random effect to assess whether between-site variability could be ignored. Two planned sensitivity analyses were performed.
These repeated the primary analysis after (1) excluding participants who had unplanned surgical procedures that typically necessitate postoperative opioids during the study period (sensitivity analyses 1 and 2) and (2) further excluding participants who had their baseline visit in between questionnaires but > 7 days from the baseline questionnaire (risk: symptoms may not reflect symptoms at the time of the baseline visit) or whose baseline visit was with a non-PCP or non-opioid-ordering provider (risk: bias to the null, as the provider may have been less likely to modify pain management; sensitivity analysis 2). Secondary analyses are considered preliminary and utilized standardized differences (SD), as SD is not dependent on sample size. For proportions, SD is the difference in proportions divided by the standard deviation of the difference: SD_P = (P1 − P2)/sqrt((variance1 + variance2)/2). Standard deviation and interquartile range (IQR) are reported as appropriate. Descriptive statistics were reported for implementation metrics. Results Twenty of 23 clinics approached to participate in the trial agreed to participate. Reasons clinics declined participation included COVID-19 response activities (n = 1), providers not being interested (n = 1), and no response (n = 1). Training was delivered to 109 providers at these sites, and 86 providers provided care to at least one participant who enrolled in the trial. Of these 86 providers, the median number of enrolled participants was 3, with a maximum of 18 participants. Between January 2021 and December 2022, 4573 patients were evaluated, with 2731 ineligible (e.g., not prescribed a relevant opioid), 944 who did not respond to outreach, 428 who declined to participate, 154 who did not participate for an unknown reason, and 315 participants enrolled (Table ). Consent was provided virtually (e.g., eConsent) in 267 of 315 (85%) enrolled participants.
Two hundred and fifty-three participants were randomized to the PGx (n = 128) and standard care (n = 125) arms (Figure ). Common reasons for attrition between enrollment and randomization were patients not returning the PGx collection kit (n = 42), screen failures (n = 13), and consent withdrawal (n = 7). Additional attrition occurred in that 223 of the 253 randomized participants completed the baseline visit. The most common reason was the patient not returning to the primary care practice for care (n = 20; Figure ). Patients who completed the baseline visit were likely to complete the trial (97%; 217 of 223). Trial participation was often virtual; 175 (81%) of 217 participants completed all questionnaires electronically (e.g., via a link in a text message or email). Follow-up was completed in July 2023. Trial participants consisted mainly of older adults with pain for over 5 years (Table ). Back pain was the most common pain management indication. The median pain intensity at baseline was 7, translating to moderate to severe pain. Tramadol was the most commonly prescribed opioid at baseline (176 [79%]). Some populations typically underrepresented in genomic research were well represented, as 85 (38%) participants self-identified as Black or African American, and 164 (74%) were female. Baseline characteristics were similar between arms. CYP2D6 IM/PM phenotypes were present in 106 (49%) participants who completed the trial (Tables and ). Twenty-two of 122 (18%) participants who completed the trial as NMs per genotype were phenoconverted to IM or PM, with 11 in each arm. Ten of 76 (13%) IMs per genotype were phenoconverted to PM (PGx: 4; standard care: 6). The most commonly prescribed CYP2D6 inhibitors overall were duloxetine, bupropion, fluoxetine, mirabegron, and paroxetine.
The ITT analysis, among all participants who completed the trial (n = 217), found the treatment arm had little effect on the change in pain intensity composite: −0.21 ± 0.79 vs. −0.12 ± 0.83 (SD = −0.12) in the PGx and standard care arms, respectively. The mITT primary analysis did not identify a difference in pain intensity between the 49 and 57 participants with CYP2D6 IM/PM assigned to PGx and standard care, respectively (−0.10 ± 0.63 vs. −0.21 ± 0.75; p = 0.74; additional data in Table ). Sensitivity analyses (supplement) and other clinical endpoints (Table ) were also similar. Notably, the proportion with PGx-aligned care was similar between arms: 34 of 49 (69%) in the PGx arm vs. 36 of 57 (63%) in the standard care arm (SD = 0.08). This suggests the clinical intervention was not effective in changing prescribing decisions. A planned secondary analysis examined the change in pain intensity among CYP2D6 IM/PMs by alignment with PGx recommendations (Table ). This identified a minor reduction in pain intensity with PGx-aligned care (n = 70; −0.21 ± 0.70) compared to unaligned care (n = 36; −0.06 ± 0.69; SD = −0.22). The magnitude of effect increased when limiting the analysis to those with at least one change in an analgesic medication, favoring PGx-aligned care (n = 31) vs. unaligned care (n = 36): −0.28 ± 0.76 vs. −0.06 ± 0.69 (SD = −0.31). However, the proportion of patients with a clinically meaningful improvement (≥ 30%) in pain intensity was similar (Table ). Of IM/PMs with a change in an analgesic medication, nonopioid analgesics were prescribed to 25 (81%) of those with PGx-aligned care vs. 32 (89%) with unaligned care (SD = −0.23), and opioid analgesics were prescribed to 9 (29%) of those with PGx-aligned care vs. 36 (100%) with unaligned care (SD = −2.2).
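The standardized differences reported above follow the formula for proportions given in the Analysis section. A minimal sketch of that calculation (a generic statistical formulation with illustrative values, not the trial's code or data):

```python
import math

def standardized_difference_proportions(p1: float, p2: float) -> float:
    """Standardized difference for two proportions:
    SD = (p1 - p2) / sqrt((var1 + var2) / 2),
    where var_i = p_i * (1 - p_i) is the binomial variance.
    Unlike a p-value, this measure does not depend on sample size."""
    var1 = p1 * (1 - p1)
    var2 = p2 * (1 - p2)
    return (p1 - p2) / math.sqrt((var1 + var2) / 2)

# Illustrative example: proportions of 0.5 vs. 0.4 give SD of about 0.20.
sd = standardized_difference_proportions(0.5, 0.4)
```

By convention, standardized differences of roughly 0.2, 0.5, and 0.8 are read as small, medium, and large effects, which is why the SD values above in the low tenths are described as minor.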
Post hoc analyses in this population also identified that PGx‐aligned care was associated with improved physical function (Table ; SD = 0.54), pain interference (SD = −0.41), and a reduction in MMEs prescribed (SD = −0.79). The improvement in pain intensity composite among those with PGx‐aligned care ( n = 70) occurred regardless of treatment arm (−0.20 ± 0.65 vs. −0.22 ± 0.76; SD = 0.04 in PGx [ n = 34] vs. standard care [ n = 36]), which suggests the potential benefit of PGx‐aligned care was not the result of a placebo effect among those assigned to the PGx arm. This was also found in the subanalysis among those with an analgesic medication change ( n = 31): −0.26 ± 0.80 vs. −0.30 ± 0.76; SD = 0.05. Changes in pain intensity by CYP2D6 activity score, opioid, and PGx alignment are shown in Tables . Additional medication data and analyses by race are available in Tables . 3.1 Implementation Metrics Forty‐two (13%) enrolled participants did not return the PGx kit that was mailed to their home. The trial delivered PGx results for 249 of 253 (98%) randomized participants, albeit the standard care arm received them after trial participation completion. The four randomized participants who did not receive results were withdrawn ( n = 3) or had a screen failure ( n = 1) before the results were available. One buccal swab generated PGx results for 238 of 250 (95%) participants for whom sample processing was attempted. The other 12 participants received PGx results after a second sample collection. Seventeen (16%) of 109 patients in the PGx arm had their PGx results mentioned in the provider's note at the baseline visit. The median (IQR) time between (1) enrollment and PGx result return: 38 (30, 49) days; (2) PGx result return and PGx consult note upload: 9 (5.5, 18) days; (3) PharmD consult note upload and baseline visit: 31 (8, 66) days. The goal was to deliver PharmD consults prior to the baseline visit, which meant the trial did not focus on rapid turnaround times. 
Sixteen pharmacists provided PharmD consults, with a median of 9 consults provided by each pharmacist. The targeted interruptive electronic CDS alerts were overridden due to "prior use with no reported problems" for all five participants with CYP2D6 PM who had an alert triggered for codeine or tramadol. Discussion This hybrid-effectiveness trial found participants experienced a similar change in pain intensity in CYP2D6 IM/PMs in the PGx and standard care arms. The two arms experienced similar care, as evidenced by the proportion of participants with PGx-aligned care (PGx: 69% vs.
Standard Care: 63%; SD = 0.13). Therefore, we conclude that the implementation methods used in this trial were ineffective for this endeavor. Since the two arms received similar care, similar pain symptoms are expected. A planned secondary analysis found a small improvement in pain intensity among CYP2D6 IM/PMs when care aligned with PGx recommendations compared to unaligned care (SD = −0.22). The magnitude of the effect increased when limiting the population to those with at least one change in an analgesic medication (SD = −0.31). Within this population, PGx-aligned care was further associated with improved physical function and pain interference, which are recognized as core chronic pain outcomes. The effect sizes seen with the change in pain intensity among IM/PMs with PGx-aligned vs. unaligned care can be considered small, and there was minimal association with a ≥ 30% improvement in pain; however, they are consistent with interventions in other clinical trials studying chronic pain. These improvements occurred in the setting of a reduction in MMEs prescribed. This trial builds upon the prior prospective trial by Smith et al., which found improved pain intensity among CYP2D6 IM/PMs prescribed tramadol, codeine, or hydrocodone. Recent data also support the use of PGx-guided opioid therapy in patients with acute pain and cancer pain. Additional data are forthcoming from the ongoing ADOPT-PGx trial, a well-powered trial with over 1000 participants. One difference between PGx-ACT, ADOPT-PGx, and the prior trial by Smith et al. is the cutoff between normal and intermediate metabolism. CPIC changed the translation of a CYP2D6 activity score of one from NM to IM. All trials recommended alternative therapy for IMs. However, the original definition (activity score 1 = NM) was used in the Smith et al. trial and the ongoing ADOPT-PGx trial. PGx-ACT utilized the new phenotype translation and, therefore, had a broader definition of IM.
Thus, alternative therapy was recommended for participants with a CYP2D6 activity score of 1, which was present in nearly a quarter of participants. This trial had several limitations. First, the trial was not blinded; however, outcome data were collected by a coordinator who was not involved in the clinical care or the clinical intervention. Allocation was not actively communicated to participants, but participants could discover their arm allocation. It is unlikely that a placebo effect confounded the analysis, as evidenced by the similar improvements in pain intensity among those with PGx-aligned care regardless of arm assignment. Second, despite the implementation approach aligning with recommended practices, the implementation methods were unsuccessful in changing prescribing practices in IM/PMs between arms. The median number of enrolled participants for each enrolling provider was 3 over a 2-year period, suggesting most providers had limited opportunity to practice applying PGx. This infrequent opportunity may reduce the effectiveness of the intervention methods designed to enable PGx-guided care. In addition, the trial occurred during the COVID-19 pandemic, which may have impacted provider engagement. Providers later suggested the expansion of EHR-based messaging (i.e., in-basket messaging) to notify them of PGx consult note availability for patients with upcoming visits. The CDS alerts were triggered rarely and were overridden in all five instances. We have since consulted with the MedStar National Center for Human Factors in Healthcare to redesign the alerts using human factors engineering principles. Last, a clinical consideration in the recommendations for IMs may have biased PGx-aligned care between arms to the null: if the provider and patient considered analgesia adequate, providers could continue tramadol, hydrocodone, or codeine therapy or replace the opioid with a nonopioid analgesic and thereby use PGx as part of an opioid de-escalation strategy.
While this is clinically appropriate based on the current evidence, it means a lack of any prescribing change in IMs is defined as PGx-aligned care. However, this limitation is addressed by the analysis of those with at least one analgesic medication change. Although improvements in multiple outcomes were found with PGx-aligned care, these analyses do not use randomized data and are considered exploratory. The timing of the baseline visit, the next visit after randomization, was a pragmatic design element to ensure all participants had an opportunity to receive pharmacotherapy modifications. This came at the cost of increased attrition after randomization: 36 (14%) of 253 randomized participants. Thirty (83%) of the withdrawn participants did not have a baseline visit with an associated baseline questionnaire. Second, the timing of the baseline visit created a potential scenario where providers could have modified therapy between randomization and the baseline visit. To mitigate this risk: (1) providers were consulted during trial design and indicated they rarely modify analgesic therapy between visits, and (2) participants were withdrawn if their PGx results were available and they did not complete the baseline questionnaire prior to the visit (n = 9). Last, the population studied may underestimate the value of PGx-guided opioid prescribing. The trial required participants to be prescribed an opioid. This enabled a more efficient trial design, as a higher proportion of enrolled participants were eligible for the primary analysis than in the trial that informed this design. However, this may bias the sample toward participants prescribed the enrollment opioid for an extended time, which may select participants satisfied with the enrollment opioid or less likely to change therapy, particularly with ~70% of patients reporting having pain for 5 or more years.
These results are most reflective of patients already prescribed tramadol (Table ); however, it is unknown how long participants were prescribed their enrollment opioid before enrollment or how adherent they were. The population hypothesized to receive the most benefit from PGx-guided opioid prescribing is those naïve to tramadol, codeine, or hydrocodone. This trial has several strengths, including that, to our knowledge, it is the largest randomized trial investigating the effectiveness of PGx testing for the management of chronic pain. Second, it included a diverse sample of participants, including 85 (38%) who self-identified as Black or African American and 164 (74%) female participants. Third, this trial was operationally efficient and largely a virtual clinical trial: 85% provided consent virtually. Further, among participants who completed the trial, 81% completed all questionnaires electronically. Fourth, it utilized best practices in PGx implementation, including storing PGx results as structured data, creating CDS alerts, providing PharmD consults, delivering provider education, sufficient CYP2D6 allele coverage, incorporation of CYP2D6 inhibitors into phenotype assessments, and obtaining stakeholder input prior to trial initiation. Finally, this pragmatic hybrid-effectiveness design allowed for a real-world examination of the effectiveness of PGx testing. Unrealistic trial conditions are one contributor to the 17-year lag in translating research into clinical practice. A clinical trial solely focused on efficacy would lack the underlying implementation effectiveness data that mediate the relationship between CYP2D6 and clinical response to tramadol, hydrocodone, and codeine. Conclusion Providing PGx results and recommendations (an asynchronous PGx PharmD consult with supporting CDS alerts and provider education) was not an effective implementation strategy for patients with chronic pain treated with tramadol, hydrocodone, or codeine.
However, a secondary analysis identified that prescribing aligned with PGx results was associated with improved pain symptoms and reduced MMEs prescribed. This study highlights the importance of hybrid-effectiveness trial design. Future efforts should identify effective implementation strategies for integrating PGx results into opioid prescribing. D.M.S., R.B., and S.M.S. wrote the manuscript; D.M.S., A.D.W., M.B.J., B.N.P., and S.M.S. designed the research; D.M.S., T.A.Y., V.N., A.L., R.W., S.D., and S.Z. performed the research; D.M.S., P.K., and R.H.P. analyzed the data; T.M. contributed new reagents/analytical tools. The study was approved by a single IRB, the IRB at MedStar Health Research Institute, and all procedures were in accordance with the ethical standards of the Declaration of Helsinki. Participants provided physically or electronically signed informed consent prior to any study intervention (e.g., sample collection for PGx testing). Identifiable information was collected, but data were deidentified for analysis and reporting. D.M.S. reports research funding to the institution from Kailos Genetics Inc. S.M.S. reports personal fees for consulting/advisory services/nonpromotional speaking: AstraZeneca, Daiichi Sankyo, Genentech/Roche, Sanofi, Merck, Lilly, Chugai Pharmaceutical Co.; research support (to institution): Genentech/Roche, Kailos Genetics Inc.; Scientific Advisory Board: Napo Pharmaceuticals; Board of Directors: SEAGEN, stock and stock options (end 12/14/2023), Immunome; stipend and stock options: Immunome; other support: Genentech/Roche and AstraZeneca (third-party writing assistance); in-kind travel: Seagen, Napo Pharmaceuticals, Sanofi, Daiichi Sankyo, Genentech/Roche, Chugai Pharmaceutical Co. All other authors report no conflicts of interest.
Challenges and Progress in Nonsteroidal Anti-Inflammatory Drugs Co-Crystal Development
Non-steroidal anti-inflammatory drugs (NSAID) are antipyretic, anti-inflammatory, and analgesic agents commonly used to treat muscle pain, dysmenorrhea, rheumatism, pyrexia, gout, migraine, etc. These drugs are also combined with opioids to treat acute trauma cases. The use of NSAID is increasing every year. In 2018, NSAID prescriptions increased by 40.7%, with the most significant percentage prescribed to women on long-term treatment. Paracetamol and ibuprofen are the most frequently prescribed NSAID for chronic therapy using fixed-dose combination treatment. Based on their chemical structure and selectivity, NSAID are divided into salicylic acid derivatives (aspirin, ethenzamide, diflunisal), para-aminophenol derivatives (paracetamol), indoles (etodolac, indomethacin), hetero-acyl acetic acids (diclofenac, ethyl diclofenac, ketorolac), aryl-propionic acid derivatives (ketoprofen, naproxen, ibuprofen, flurbiprofen), anthranilic acid derivatives (mefenamic acid, flufenamic acid, niflumic acid, meclofenamic acid), enolic acid derivatives (tenoxicam, piroxicam, meloxicam), and diaryl-heterocycles or selective cyclooxygenase (COX)-2 inhibitors (celecoxib, etoricoxib). NSAID are classified in the Biopharmaceutics Classification System (BCS) as class II because they have low solubility and high permeability. The low solubility of NSAID can cause many problems in the use and production of these drugs, especially regarding the onset of action. NSAID are the most widely used drugs for pain relief, so a fast onset of action is needed. It is a challenge to develop their optimal potential effect. Various attempts have been made to increase solubility, such as the formation of salts, polymorphs, solvates, and hydrates. Nowadays, most NSAID on the market are in their salt form, such as diclofenac sodium, diclofenac potassium, ibuprofen, etc. However, salt formation can only be applied to active pharmaceutical ingredients (APIs) that form ions.
It is a problem for some compounds that are difficult to ionize, such as zaltoprofen (ZFN). ZFN belongs to the propionic acid group and is an NSAID effective for treating acute somatic pain and chronic inflammation, including rheumatoid arthritis, and it is in phase II clinical trials for tenosynovial giant cell tumor therapy. Polymorphs are not preferred because of the danger of polymorphic transformation, which affects the formulated product. Therefore, it is necessary to develop new approaches to modify or improve the physicochemical properties of these drugs. The Food and Drug Administration (FDA) defines co-crystals as crystalline materials consisting of two or more different molecules, usually APIs and co-crystal formers (co-formers), that form a crystal lattice with non-ionic interactions. Over the past few years, co-crystals have shown significant results in drug development, especially in modifying APIs' physicochemical and pharmacokinetic properties, such as solubility and dissolution rate, bioavailability, morphology, particle size, melting point, physical form, biochemical stability, and permeability. In addition, several studies regarding the application of co-crystals in drug delivery have been published. Moreover, co-crystal formation can increase analgesic activity: the ibuprofen–nicotinamide (IBU–NIC) co-crystal was studied in vivo by Yuliandra et al. in male Swiss-Webster rats. Based on these tests, the IBU–NIC co-crystal increased the level of pain inhibition two-fold, i.e., it can reduce pain better than IBU and its physical mixture. In another publication, the ketoprofen–nicotinamide (KET–NIC) co-crystal had the potential to increase the elimination of intracellular pathogens, because KET–NIC can increase the production of hydrogen peroxide (H2O2), which is produced by phagocytes to eliminate intracellular microorganisms such as Leishmania spp., Histoplasma capsulatum, and Pneumocystis jirovecii.
Furthermore, based on a cytotoxic assay using a colorimetric method, KET–NIC could improve cell survival compared to KET. More recently, crystal engineering has become a promising approach to modulating drug physicochemical properties and, in some cases, could improve activity. In this review, we report that the manufacture of co-crystals can be categorized into two main techniques, namely solution-based and solid-based methods. The conventional solution-based methods include fast evaporation, slow evaporation, and slurry methods. Moreover, supercritical or gas anti-solvent co-crystallization using CO2 was also reported recently. On the other hand, solid-based procedures are considered more efficient because they reduce solvent usage significantly. For example, neat grinding and co-melting using heating can be performed without solvent, while solvent-drop grinding and microwaving only need a minimal amount of solvent. NSAID co-crystal development, its challenges, and its procedures will be discussed in this manuscript, respectively. Various experts have discussed the definition, but it is still challenging to distinguish between co-crystals, salts, and solvates/hydrates. Besides the single form, an API may form multicomponent crystals, as shown in . Historically, co-crystals were first reported by Friedrich Wöhler in 1844 and successfully characterized in 1968. However, the term co-crystal itself was only introduced in 1963 by Lawton and Lopez. Co-crystals are classified as multicomponent crystals that consist of APIs and co-formers with non-covalent intermolecular interactions such as hydrogen bonds, van der Waals interactions, halogen bonds, and π-π stacking interactions. Repeated intermolecular interactions between functional groups in a crystal involving proton donors and acceptors form supramolecular synthons, which may be composed of the same or different functional groups, named homo- and hetero-synthons, respectively.
The number of hydrogen-bond interactions may represent the solubility of the co-crystals. This phenomenon is supported by the hydrotropic effect of co-formers such as proline (PRO), which dissolve a more hydrophobic API with excellent solubility. Additionally, co-crystals show increased dissolution, bioavailability, stability, and mechanical properties. The FDA initially classified co-crystals as intermediate medicinal products in complex molecular APIs and excipients but has now revised its guidelines: a co-crystal is classified as a specific solvate product, similar to a polymorph of an API. Co-crystals are determined based on the ΔpKa rule, where a ΔpKa value > 1 indicates the occurrence of proton transfer, which produces a salt or salt co-crystal. Co-crystals that do not transfer a proton are characterized by a ΔpKa value < 1. Moreover, for salt co-crystals, there is partial proton transfer with a ΔpKa between 1 and 3. A salt co-crystal is an ionic co-crystal formed from organic molecules and ionic salts in the form of cation halides. Several publications have reported that salt co-crystals can be generated from the salt form of an API and the co-former, while the other components are neutral. Furthermore, the co-crystal can be determined based on the lengths of the C–O bonds of the carboxylate group, the dihedral angle, and the C–N–C angle, where deprotonation will cause the C–O bonds in the carboxylate group to be similar to each other. In contrast, if the C–O bonds are significantly different, this indicates protonation. The bigger the C–N–C angle, the higher the protonation degree, corresponding to a higher pKa value. For example, the C–O distances in paracetamol (PCM)–oxalic acid (OXA) are 1.03 and 1.04, while in PCM–maleic acid (MLA), they are 1.056 and 1.08. Similar C–O distances indicate no proton transfer, so the co-crystal formed is not a salt co-crystal.
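The ΔpKa rule described above can be summarized as a simple classification heuristic. The sketch below is illustrative only (function name and cutoffs follow the rule as stated in the text); in practice, boundary cases require structural confirmation, e.g., by SCXRD:

```python
def classify_multicomponent(pka_protonated_base: float, pka_acid: float) -> str:
    """Classify an API/co-former pair by the delta-pKa rule:
    delta-pKa = pKa(protonated base) - pKa(acid).
      delta-pKa < 1      -> co-crystal (no proton transfer)
      1 <= delta-pKa <= 3 -> salt co-crystal (partial proton transfer)
      delta-pKa > 3      -> salt (full proton transfer)
    A heuristic only; borderline pairs need structural confirmation."""
    delta_pka = pka_protonated_base - pka_acid
    if delta_pka < 1:
        return "co-crystal"
    if delta_pka <= 3:
        return "salt co-crystal"
    return "salt"
```

For example, a weakly basic co-former paired with a carboxylic acid of similar pKa (ΔpKa well below 1) is expected to yield a co-crystal, whereas a strong base paired with the same acid (ΔpKa above 3) is expected to yield a salt.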
Commonly, Fourier-transform infrared spectroscopy (FTIR), thermal analysis (differential scanning calorimetry/DSC, differential thermal analysis/DTA, and thermogravimetry/TG), powder X-ray diffractometry (PXRD), and single-crystal X-ray diffractometry (SCXRD) are used to confirm the presence of a new solid phase. Based on the IR spectrum, a shift in the C=O peaks of the two co-crystals indicates no proton transfer to the carboxylic acids, and decreases in frequency show that the functional groups play a role in the formation of strong hydrogen bonds. Several analysis methods using FTIR have been developed, including the multivariate curve resolution–alternating least squares (MCR–ALS) method developed for IBU–NIC co-crystal analysis. MCR–ALS is a chemometric method for data analysis performed by resolving the IR spectrum pattern into single components. Thermal analysis with TGA and DSC is an adequate method to detect new solid phase formation. A sharp thermogram indicates the purity and homogeneity of the co-crystal and good thermal stability. DSC analysis exhibits the unique melting point of the co-crystal, which differs from those of the parent drugs. The differences in diffraction peaks and intensity in PXRD reveal new crystal interactions, i.e., interactions between the oxygen (O) atoms and the H atoms of primary amide groups, which can be structurally determined by SCXRD. A change in the solid form indicates crystalline formation, which can be characterized by SEM. Computational studies have shown co-crystal interactions through the calculation of free energy using Schrodinger software. For example, the piroxicam (PXC)–sodium acetate (SAT) co-crystal was reported to have lower free energy (−1671.29 kJ) compared to PXC (−1442.71 kJ) and SAT (−228.49 kJ). Low free energy indicates that the co-crystal is stable. NSAID co-crystals were first reported and patented in 1973 by Gerhard Dorler and Maria Kuhnert-Brandstetter.
Over the decades, NSAID have been developed into co-crystals, salt co-crystals, co-crystal hydrates, salt hydrate co-crystals, and drug–drug co-crystals. For example, a co-crystal consisting of two APIs, namely pyrithyldione as a sedative and propyphenazone as an NSAID, produces three polymorphs; polymorph form II was the most stable . Since then, NSAID co-crystals have continued to develop in terms of screening, manufacturing techniques, and enhancement of physicochemical properties . Furthermore, some invented NSAID co-crystals are summarized in .

3.1. Screening NSAID Co-Crystals

NSAID co-crystals exhibit various supramolecular synthon motifs . For example, the diflunisal (DIF) co-crystal has three hydrogen bonding patterns based on the Cambridge Structural Database (CSD), namely (1) a ring hetero-synthon with aromatic N, (2) a COOH–COOH homo-synthon, and (3) four ring homo-synthons between OH and C=O from acids. Interestingly, DIF and other salicylic acid (SA) derivatives have an ortho-hydroxy group with steric resonance and hydrogen bonding called the “ortho effect”. Only eight DIF co-crystals have been created, with the following features: (a) a COOH homo-synthon between acid co-formers and o-hydroxy carboxylic acids when one of them has an electron-withdrawing group, (b) COOH homo-synthons between acid co-formers and o-hydroxy carboxylic acids when the o-hydroxy acid has a competing group, such as OH or NH 2 , which can form hydrogen bonds with the co-former, and (c) a combination of motifs (a) and (b) with a COOH interaction distance of more than 3 Å; the type (c) motif is rare. Based on this feature exploration, the donor ability follows the order COOH > NH > OH, with the most interactions found in DIF with COOH . Indomethacin (INC) also forms a co-crystal with the saccharin (SAC) co-former by forming N–H···O hydrogen bonds between acid dimers in INC and imide dimers in SAC .
A naproxen (NPX)–picolinamide (PA) co-crystal was formed from carboxylic acid–carboxamide dimers between NPX and PA, as shown in . PA offers an ortho effect, with a pKa value of 1.17. The PA carboxamide interacts with NPX via an acid–amide synthon; however, the SCXRD results do not match the asymmetric unit. The position of the proton determined by 1 H nuclear magnetic resonance (NMR) shows that the H atom involved in hydrogen bonding at the synthon gives a different peak. The H atom involved has a delay of 20 s compared to the other H atom, which was determined by comparing the NMR data with the predicted shift values. The RMSD obtained was 0.09 Å by optimizing the atomic positions from the 0.2 Å SCXRD data, so that the hydrogen bonds are more symmetrical and the difference between the two O–H···O hydrogen bonds was reduced. In c, two bonding models are proposed as alternatives to improve the SCXRD results. In model 1, only the H atoms in the OH bond are moved, whereas, in model 2, the amide protons in PA also move along the N–H hydrogen bond. The H atom position was validated by density functional theory (DFT) geometry optimization based on a significant decline in potential energy, which indicates that naproxen–picolinamide (NPX–PA) is a co-crystal and not a salt . Lornoxicam (LNX) is included in the oxicam class of NSAID drugs. LNX has more selective anti-inflammatory activity without affecting the digestive tract. LNX can be used for osteoarthritis therapy and postoperative pain relief of the head and neck, and it also reduced postoperative pain in the third molars in the first phase of perioperative pain management . LNX has better tolerance and safety than tramadol and is comparable to tramadol in postoperative analgesic therapy . Oral administration of LNX takes 2.5 h to reach the maximum concentration (Cmax), limiting the ability of LNX therapy to provide an optimal analgesic effect.
The co-crystallization technique has succeeded in changing the physicochemical properties of LNX using the liquid-assisted grinding (LAG) method. With this method, LNX salt co-crystals were produced with the co-formers benzoic acid and 2,4-dihydroxybenzoic acid. In contrast, co-crystals were formed from the co-formers catechol, resorcinol, hydroquinone, and SAC at a stoichiometric ratio of 1:1 . The crystal system is orthorhombic, with space group P 212121 and one LNX molecule in the asymmetric unit, based on the synthon co-crystal approach. The decrease in the melting point of the parent drug indicates the formation of a multi-component system. Strong hydrogen bonds linking LNX, N + –H···O=C and N–H···O–C, cause a decrease in the conformational change in the amide bond to the ring. As a result, a zigzag band formed along the c axis in the molecule through the interactions N + –H···O=S, C–H···O and C–H···Cl; the C–H···S interaction then forms a 2D sheet and creates a 3D layered structure through C–H···O hydrogen bonding interactions. Based on in vitro dissolution testing at the optimum temperature of LNX, the salt co-crystal has excellent solubility compared to the co-crystal and LNX, i.e., up to 1.52-fold higher than that of LNX . LNX co-crystals were also formed by the neat grinding method with generally recognized as safe (GRAS) co-formers and drugs such as malonic acid (MAL), succinic acid, tartaric acid, anthranilic acid (phenamate), cinnamic acid, p-aminobenzoic acid, ferulic acid, urea (URE), sodium saccharin (SS), citric acid, and OXA. LNX–SS increases the solubility of the parent drug significantly. This is related to its low lattice energy value, thereby reducing the interference from the solvent. Co-crystallization decreases the partition coefficient and shows the transformation from a hydrophobic to a hydrophilic compound. The LNX–SS co-crystal has good stability at extreme temperatures based on stability testing for 30 days at 40 °C and 75% humidity .
Co-crystals of diclofenac (DFA) with pyridine-based co-formers and acid–pyridine synthons have been reported as nine new solid forms using the evaporation co-crystallization technique. DFA formed co-crystals with 1,2-bis(4-pyridyl)ethane (BPE), 1,3-di(4-pyridyl)propane, and 4,4′-bipyridine at a stoichiometric ratio of 2:1. The three co-formers have pyridine groups on both sides. The packing of the DFA co-crystal is highlighted in a,b. Simultaneously, DFA formed salt co-crystals with two amino-substituted co-formers, namely 2-aminopyridine (2-apy) and 3-aminopyridine. In the salt co-crystal DFA–2-apy, there is weak C–H bonding. There is tetramer aggregate packing in the salt co-crystal that connects the synthons, as shown in c . The formation of NSAID co-crystals with an amino acid as the co-former, called zwitterionic co-crystals, has been widely published. For example, indomethacin–proline (INC–PRO) 2H 2 O, a co-crystal hydrate, exhibited the supramolecular synthons NH 3+ ···O − and OH···O. The INC–PRO co-crystal consists of an internal hydrophilic group and a hydrophobic group on the surface, increasing the solubility and permeability of INC . Besides INC, a diclofenac–proline (DFA–PRO) co-crystal was obtained, stabilized and linked with intramolecular hydrogen bonds N–H···O and C–H···C. The difference in the C–O bond lengths of the DFA and PRO carboxylate groups > 0.07 indicates that DFA–PRO is a zwitterionic co-crystal . Furthermore, Nugrahani et al. developed the DFA–PRO co-crystal into a salt hydrate co-crystal to increase the multi-component solubility and stability over the DFA–PRO co-crystal and the alkaline diclofenac salt. Besides sodium diclofenac proline tetrahydrate (SDPT), a co-crystal of sodium diclofenac–proline monohydrate (SDPM) was confirmed by structural determination with SCXRD at −180 °C. The new phase consisted of sodium, diclofenac, proline, and water (1:1:1:1).
A stability study of the two co-crystals was carried out under extreme drying and humidity and evaluated by differential thermal analysis (DTA), TG, and DSC. Based on the diffractogram results, SDPT was stable at high humidity, i.e., 94 ± 2% RH/25 ± 2 °C, for up to 15 days and did not separate into the starting materials . Meanwhile, SDPM quickly changed to SDPT under these environmental conditions. Moreover, SDPM could be restored from SDPT under dry conditions, so SDPT is a more stable co-crystal than SDPM. These results show that water plays a critical role in forming the SDPT co-crystal, mediating the interactions between the components of the tetrahydrate salt co-crystal formed. The high solubility of SDPM occurs because there is a region consisting of Na + and water molecules with high affinity, such that it quickly dissolves and breaks down. The dissolution test was carried out at pH 1.2 and 6.8; based on previous studies, the sodium diclofenac (SD) dissolution profile increases at moderate to alkaline pH. Based on this test, the SDPM and SDPT dissolution results were faster than SD, with a superior increase in dissolution for SDPM. This result showed that the monohydrate crystal lattice was smaller and had a looser and broader space than the tetrahydrate form, so it dissolved faster than SDPT at pH 6.8 . Furthermore, the DFA–PRO co-crystal was developed from diclofenac potassium, resulting in a salt co-crystal hydrate consisting of potassium, DFA, l -proline, and water (1:1:1:4) . The prediction of bond formation between NSAID co-crystals and co-formers using DFT calculations at the B3LYP/aug-cc-pVDZ level depends on the constituent molecules forming a supramolecular synthon .
Co-crystal screening with various co-formers and methods has been established, one of which is calculating the van Krevelen and Hoftyzer solubility parameters, using the solubility parameters of the two compounds to find the ∆δ factor with the equation:

∆δ = [(δ d2 − δ d1 ) 2 + (δ p2 − δ p1 ) 2 + (δ h2 − δ h1 ) 2 ] 0.5 (1)

The partial solubility parameters are dispersion (δ d ), polarity (δ p ), and hydrogen bonding (δ h ). Good miscibility can be achieved if ∆δ ≤ 5 MPa 0.5 . The difference in total solubility parameter between the drug and the carrier, ∆δ t , is a means of predicting miscibility:

∆δ t = |δ t2 − δ t1 | (2)

Here, the subscripts 1 and 2 denote the carrier and the drug; materials are miscible with ∆δ t ≤ 7 MPa 0.5 , while systems with ∆δ t ≥ 7 MPa 0.5 are immiscible. This solubility parameter method was used for developing aceclofenac (ACF) co-crystals with several co-formers, namely gallic acid (GLA), CA, MLA, NIC, d -tartaric acid (TCA), URE, and vanillic acid, which showed that all co-formers could form co-crystals, except TCA . ACF, 2-[2-[2-[(2,6-dichlorophenyl)amino]phenyl]acetyl]oxyacetic acid (C 16 H 13 Cl 2 NO 4 ), is an analog of DFA in the form of a glycolic acid ester. It is used as a first-line drug for the treatment of rheumatoid arthritis and osteoarthritis . The gastrointestinal side effects of ACF are relatively lower than those of other non-selective NSAID and are comparable to CEL . ACF does not interact directly with COX enzymes; rather, as a prodrug, ACF produces the active metabolite DFA, which inhibits COX enzyme activity by forming hydrogen bonds with the TYR355 and SER530 residues of the COX enzymes . The characterization of the ACF–NIC and ACF–GLA co-crystals obtained by the solvent evaporation technique at 1:1 stoichiometry was based on SEM analysis. Pure ACF crystals were large and regular, while the ACF–GLA and ACF–NIC co-crystals had different, more irregular shapes .
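The screening criteria in Equations (1) and (2) above can be sketched as follows (a minimal illustration; the helper names and the numeric (δd, δp, δh) values are hypothetical, not taken from the ACF study):

```python
import math

def delta_total(dd: float, dp: float, dh: float) -> float:
    """Total solubility parameter δt from the partial parameters (MPa^0.5)."""
    return math.sqrt(dd**2 + dp**2 + dh**2)

def delta_delta(drug, carrier) -> float:
    """Eq. (1): Δδ between drug and carrier, each given as (δd, δp, δh)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(drug, carrier)))

def miscible(drug, carrier) -> bool:
    """Good miscibility if Δδ ≤ 5 MPa^0.5 (Eq. 1) and Δδt ≤ 7 MPa^0.5 (Eq. 2)."""
    return (delta_delta(drug, carrier) <= 5.0
            and abs(delta_total(*drug) - delta_total(*carrier)) <= 7.0)

# Hypothetical (δd, δp, δh) values for a drug and a co-former:
drug = (18.0, 6.0, 8.0)
coformer = (17.0, 8.0, 10.0)
print(delta_delta(drug, coformer), miscible(drug, coformer))
```

In this hypothetical pair, Δδ = 3.0 MPa^0.5, so both thresholds are satisfied and the pair would be flagged as a promising co-former candidate.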
Based on the FTIR data, five hydrogen bonding motifs can arrange the ACF–NIC co-crystal, which are: (1) formation between the amide group from NIC and the acid group from ACF, (2) hetero-synthon chloride–amide formation, (3) ACF linked to NIC by an acid–pyridine hetero-synthon (synthon I) or acid–amide hetero-synthon (synthon V), (4) amide–amide dimers by NIC and ACF groups bound to an acid–amide hetero-synthon, and (5) ACF formed by synthon I and an amide–chloride hetero-synthon .

3.2. Development of NSAID Co-Crystal Production

In developing co-crystals, the method is crucial to produce the expected physicochemical properties, as demonstrated by the indomethacin–proline (INC–PRO) system, produced by LAG and solvent evaporation. The LAG method yielded polar molecules, thereby improving the solubility and intrinsic dissolution rate (IDR) under each tested pH condition; high solubility indicates increased bioavailability and may improve the pharmacokinetic profile of INC . Jafari et al. categorized the co-crystal production routes into two categories, solution-based and solid-state-based. Evaporative co-crystallization, cooling crystallization, reaction co-crystallization, isothermal slurry conversion, and supercritical antisolvent (SAS) are solution-based production methods. Conversely, the solid-state process is a technique to combine the solid materials directly, with pressure applied manually (mortar and pestle) or mechanically (automatic ball mill). The most common solid-state methods are neat (dry) grinding (NG), LAG, and hot-melt extrusion. The direct solid-state process significantly reduces solvent usage, so it is preferable in the context of green methods . However, different techniques may not produce the same form and physicochemical properties, as evidenced by several publications. For example, Evora et al.
reported a co-crystal prepared by three methods: annealing a mortar-ground mixture at 80 °C, annealing at room temperature after neat ball mill grinding, and ethanol-assisted (10 μL) ball milling. The neat mill grinding methods produced only a few crystals. Meanwhile, the annealed mixture at 80 °C formed the new solid phases with melting points at around 113–146 °C. In this study, eight co-crystals were created with the ball milling method. In contrast, the diflunisal (DIF) co-crystal with pyrazine, which has a crystalline form like a diamond, was formed by a crystallization method using a solution. DIF can be crystallized with a greener solvent with OXA as the co-former . The solution method was then compared to LAG with ethanol and NG. Observation with FTIR at specific times showed significant changes in the spectra ( and ) . With a longer grinding time, the specific peak transformation of the co-crystal was more apparent as the peak of the parent compound decreased and disappeared. The regions at wavenumbers 1500–2500 cm −1 and 2500–4000 cm −1 show the specific evolutionary areas of the formation of the co-crystal, which indicates a gradual shift of the carbonyl groups to a lower wavenumber, i.e., from 1693 cm −1 to 1685 cm −1 after 30 min of the grinding process. This phenomenon is related to the decreased vibrational energy of the carbonyl, which is caused by the formation of heterocyclic N hydrogen bonds with the PRO group. Besides the carbonyl, the OH peak also experienced a shift from 3324 cm −1 to 3270 cm −1 , accompanied by a new band at 3170 cm −1 , attributed to the OH of the DFA carboxyl group, which forms hydrogen bonds with the PRO carbonyl group. This shift was accompanied by a change in the PRO carbonyl bands from 1652 and 1617 cm −1 to 1623 and 1616 cm −1 . A new peak appeared at 1968 cm −1 due to the presence of O···H–N hydrogen bonds, which occurred after 2 min of grinding and formed a sharp spectrum after 60 min of grinding.
The specific peaks of the co-crystal appeared regularly at 3270 and 3170 cm −1 . The DFA–PRO co-crystal dynamics using the NG method showed the same FTIR pattern as the LAG method; however, the initial co-crystal formation took longer in co-crystallization using the NG method, i.e., within 10 min, while with the LAG method, the initial co-crystal was formed within 2 min. This result is consistent with previous research showing that the addition of solvents will increase molecular diffusion, thereby increasing the interaction between the API and co-formers . Sevukarajan et al. successfully synthesized an ACF–NIC (1:1) co-crystal with the NG method, which showed a better solubility because it produced smaller crystals than the solvent evaporation (SE) method . Besides offering a different solubility, different co-crystallization methods also yield various crystalline forms. Berry et al. obtained two co-crystal phases of ibuprofen–nicotinamide (IBU–NIC) by melting and slow evaporation methods, namely the rectus-sinister ibuprofen–nicotinamide (RS-IBU–NIC) and sinister ibuprofen–nicotinamide (S-IBU–NIC) . Guerain et al. also studied the formation of the IBU–NIC co-crystal with different co-crystallization methods: (1) milling, (2a) crystallization by melting the mixture at 100 °C and then cooling to room temperature, (2b) crystallization by melting the mixture at the glass transition temperature, and (3) the slow evaporation of the solvent. After characterization by PXRD, each co-crystallization method produced different S-IBU–NIC co-crystals . Methods (1), (2b) and (3) resulted in an S-IBU–NIC similar to that found by Berry et al. In contrast, co-crystallization with process (2a) resulted in a new S-IBU–NIC co-crystal. The stability study of the new S-IBU–NIC co-crystal showed transformation to the previous S-IBU–NIC co-crystal at 65 °C. In this study, the crystallization method of heating close to the melting temperature produces polymorphs .
Mefenamic acid (MFA) can form co-crystals with several co-formers, such as NIC, URE, pyridoxine, etc. Of all the co-formers reported, NIC was selected as the co-former used in a study on co-crystal formation using the melt crystallization technique, because contaminants can form when MFA is co-crystallized by a grinding process. This technique was carried out by melting MFA with NIC (1:2) in a porcelain cup over a paraffin oil bath with the temperature maintained at 200 °C, then incubating in a container containing water over a water bath with the temperature maintained at 90 °C, then drying at room temperature overnight. Based on the characterization results with PXRD, FTIR, DSC–thermogravimetric analysis (TGA), and thin-layer chromatography (TLC), the heating process did not cause co-crystal decomposition. Moreover, from the solubility test results, the solubility of MFA–NIC was higher than that of the parent compound . The supercritical anti-solvent (SAS) method has been used in the formation of NSAID co-crystals. The principle of this method is to dissolve the API and co-former until saturated with the appropriate solvent. The addition of CO 2 reduces the solubility of the API and co-former to induce precipitation to produce NSAID co-crystals . SAS methods can also compose naproxen (NPX)–NIC co-crystals at a molar ratio of 2:1 with a solvent mixture (acetone and CO 2 ) subjected to high pressure (10 MPa) at 298.15–310.65 K . Neurohr et al. also researched NPX co-crystallization techniques besides conventional co-crystallization techniques, namely the SAS technique, as shown in . PXRD and FTIR analysis concluded that the SAS technique produced NPX–NIC co-crystals with the same characteristics as conventional co-crystallization techniques . Wichianphong et al. formed MFA–NIC co-crystals using the gas anti-solvent (GAS) method under conditions optimized with the Box–Behnken experimental design.
Parameters that can affect the process were investigated, such as the temperature (T), the molar ratio of the co-former to the drug (C), and the percentage of drug saturation (S) in the solution, against t 63.2 (the time required to dissolve 63.2% of the drug). The resulting co-crystals were then also compared with the product from the conventional methods and physical mixtures. Based on the difference in melting point, PXRD pattern, and FTIR spectrum, the formation of a new phase was confirmed, namely the MFA–NIC co-crystal. The fastest dissolution time (5.07 min) was obtained at 450 °C, a co-former-to-drug ratio of 5:1, and 70% drug saturation. The t 63.2 obtained from the experimental results and the calculation of the equation was suitable and met the requirement (R 2 ≥ 80%), with R 2 values of 96.25% and 89.51%; hence, this model provides a good correlation. ANOVA showed that S and C significantly affected the t 63.2 ( p < 0.05 and F = 14.27), where, at low T, t 63.2 could reach the minimum value if S and C were high. However, a high S value can prevent the formation of MFA–NIC co-crystals and result in a faster dissolution time . In addition to DSC, PXRD, and FTIR, the MFA–NIC co-crystals produced by the GAS method were also characterized by SEM. Observations showed that the form of MFA–NIC was different from the parent compound and the co-crystal produced by the conventional method . From the dissolution test, the co-crystals from both methods provided an increase in the dissolution rate . An advantage of this method is a uniformly shaped co-crystal with high purity, due to the filtration process with a particular size and a washing process with CO 2 to remove the remaining solvent in the MFA–NIC co-crystal . In addition, this SAS technique depends on the composition of the fluid used and can cause heterogeneity .
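A note on the 63.2% threshold behind t 63.2 : for first-order (Weibull-type) dissolution, f(t) = 1 − exp(−t/τ), the dissolved fraction at t = τ is exactly 1 − exp(−1) ≈ 63.2%, which is why this fraction defines the characteristic dissolution time. A minimal sketch (the first-order model is the common convention; the study's exact dissolution model is not stated here):

```python
import math

def dissolved_fraction(t: float, tau: float) -> float:
    """First-order dissolution: fraction dissolved at time t with time constant tau."""
    return 1.0 - math.exp(-t / tau)

# At t = tau, the dissolved fraction is 1 − exp(−1) ≈ 0.632, i.e., 63.2%.
# Using the fastest time reported in the text (5.07 min) as tau:
t63_2 = 5.07
print(round(dissolved_fraction(t63_2, t63_2), 3))  # prints 0.632
```

So a smaller t 63.2 directly corresponds to a faster dissolution time constant.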
As technology in pharmaceutical engineering advances, the co-crystallization technique continues to be developed to improve solubility and permeability; one approach is the application of nanotechnology in co-crystallization. Nanotechnology is the most established technology for increasing the therapeutic index and overcoming the challenges of formulating compounds with poor solubility. In several studies, the nano co-crystal approach has been shown to increase the solubility and dissolution of drugs with poor solubility. This result is related to the size of the co-crystal being reduced to the nano-scale (less than 100 nm), increasing the surface area and the dissolution rate. Increased dissolution is very useful for enhancing bioavailability. Moreover, nano co-crystal engineering can reduce toxic solvents and surfactants and enables formulations for various routes of administration where size is a critical factor (injection, ophthalmics, and topicals) . The nano co-crystal approach has limitations, including that it can only be used for BCS class II drugs, it requires high-cost instruments, and formation and stability depend on the drug molecule; only certain compound groups meet the requirements. A large nano-co-crystal surface with high free energy or differences in surface charge can cause aggregation. However, an increase in solubility that exceeds the saturation point will cause recrystallization into larger particles, called Ostwald ripening. This can be overcome by selecting a stabilizer in the nano-co-crystal formulation to protect the particle surface, reducing the free energy of the system and the interfacial tension of the nano-co-crystal . The nano–diclofenac–proline co-crystal (NDPC) is formed by combining two techniques, i.e., top-down with the NG method and bottom-up with microwave-assisted rapid evaporation.
The production of NDPC with NG produced pure nano-co-crystals with a particle size of around 857.9 nm and a polydispersity index (PI) of 0.353 after 6 h. However, there are limitations to this method related to co-crystal instability. The best-sized NDPC, at 598.2 nm and a PI of 0.278, was successfully obtained by the fast evaporation method with ethanol as the solvent and 776 W microwave energy for 8–12 min. Microwave energy provides molecular rotation to form intermolecular bonding interactions in the co-crystal. To stabilize NDPC, sodium lauryl sulfate (SLS) was added as a stabilizer, giving a zeta potential value of −660 mV, indicating that the particles are stable without forming agglomeration. In addition, SLS provides a negative charge on the zeta potential to disperse the nano co-crystal solution . A scale-up process was carried out to produce pure crystals on the scale of 10 g using the cooling crystallization method without seeding (the addition of crystal seeds), equipped with a temperature sensor. This method used a heating and reflux technique to dissolve the co-crystal in ethyl acetate, slowly lowering the temperature with an optimized stirring cycle. Based on PXRD and DSC analysis, the scaled-up indomethacin–saccharin (INC–SAC) co-crystal product has the same purity and yield as the small-scale solvent evaporation method using ethyl acetate . Apart from these methods, a combination method (SAS and cooling co-crystallization) could scale up the INC–SAC co-crystal. Cooling time is a factor for optimal production conditions, as cooling will accelerate the precipitation of the co-crystal. The post-nucleation cooling time resulted in a greater amount of INC–SAC co-crystal with smaller particle sizes .

3.3. Enhancement of the Physicochemical Properties of Co-crystals

Based on studies conducted by Skorupska et al.
to increase solubility and prevent degradation, modified drug delivery was studied by inserting the naproxen–picolinamide (NPX–PA) co-crystal into mesoporous silica particles (MSP) using a thermal solvent-free method, by heating a mixture of the co-crystal and MSP at 100 °C for 2 h. The NPX–PA co-crystal complex in MSP, as shown in , was prepared to protect the API from environmental effects, carry the drug across the cell membrane, accelerate the drug’s action, increase treatment efficiency, and deliver the drug to specific target organs. Two MSPs with different pore sizes were used; NPX–PA was successfully inserted into SBA-15 (100 Å), while MCM-41 (37 Å) acted as a separating medium. NPX stuck to the outer wall of the MSP since the pore size of MCM-41 is smaller than NPX, so SBA-15 is more suitable for the NPX–PA co-crystal . Sohrab et al. conducted a study to observe the effect of a water-soluble polymer on the aceclofenac–nicotinamide (ACF–NIC) co-crystal (1:1) obtained from the solvent evaporation and NG methods. PVP K30, hydroxypropyl methylcellulose (HPMC), sodium starch glycolate, and carboxymethylcellulose sodium (CMC-S) were mixed with the co-crystals, and the mixture was then tableted by the wet granulation method. The addition of water-soluble polymers lowered the melting point of the co-crystals, decreased the lattice energy, and thus increased the dissolution rate of ACF. Based on dissolution testing with the USP type I method in phosphate buffer pH 7.5 medium, the ACF–NIC co-crystals without the addition of 3% water-soluble polymer showed 78.9% and 78.5% drug release, while the co-crystal formulations MUNG01 (3% PVPK-30), MUNG02 (3% HPMCE5), MUSE03 (3% SSG) and MUSE04 (3% CMC-S) showed maximum releases of 99.1%, 97.51%, 99%, and 98.25% . Next, ACF was also developed into a topical formulation, which was expected to have a high penetration rate through each layer of the skin to effectively and safely treat pain locally. Sharma et al.
developed a nanoliposome formulation of ACF with the amino acid lysine (LYS) to increase the penetration of the co-crystal into the skin. ACF–LYS was encapsulated using liposomes formed from lipids and cholesterol in a 70:30 ratio, then dissolved in chloroform and evaporated. ACF–LYS was suspended in the liposome and then inserted into a vehicle of carbopol 940 hydrogel to increase viscosity and form a translucent gel. The co-crystal-loaded liposome gel (COC–LG) formulation, with an average size of 120.4 ± 1.03 nm, showed greater drug penetration into the deeper skin layers. The co-crystal had a spherical shape based on imaging studies with confocal laser scanning microscopy and transmission electron microscopy. The COC–LG formulation had a lower viscosity than the gel on the market, which indicated good contact time and dispersion, resulting in maximum therapeutic effect. Based on ex vivo testing using the back skin of Wistar rats, the formula penetrated better and remained in the skin 2.31 times longer than the marketed gel. The penetration increase was associated with the interaction of the biocompatible component (phospholipon) with the skin. The COC–LG formulation increased the drug concentration in a short time, did not interfere with skin integrity, did not cause inflammation of the dermal tissue (as did other gels), and was more effective as an analgesic and anti-inflammatory for the treatment of arthritis and osteoarthritis . Co-crystals can also consist of APIs, metals, and organic materials to accelerate NSAID action and prolong the half-life, thereby increasing the drug’s duration of action. Hartlieb et al. formed an IBU–metal–organic co-crystal called a metal–organic framework (MOF). The MOF is based on γ-cyclodextrin (γ-CD) with metal cations such as potassium (K + ) or sodium (Na + ). The CD–MOF was used to create an organic porous framework filled with IBU.
The co-crystals produced by the diffusion of ethanol vapor into a solution of CD and ibuprofen potassium salt formed a cyclodextrin metal–organic framework (CD–MOF-1) co-crystal with an IBU loading of 23%. One problem with IBU is that it exists as an active S-enantiomer and a less active R-enantiomer. Although IBU was absorbed by CD–MOF-1 including the less active enantiomer, the MOF succeeded in separating the enantiomers of IBU. The CD–MOF-1 co-crystal had better stability in the atmosphere than the IBU salt and was not hygroscopic, as seen from the PXRD pattern, which did not show co-crystal degradation at ambient humidity. Based on bioavailability testing with female rats, the pharmacokinetic data showed that the CD–MOF-1 co-crystal had a Cmax, area under the curve (AUC), and half-life that were two times higher than those of the IBU salt. This result indicated that IBU absorption was fast, with an onset of 10–20 min .

3.4. Drug–Drug Co-Crystals of NSAID

Co-crystallization of two APIs has also been performed to improve the physicochemical properties of drug combinations. This strategy benefits NSAID, whose therapeutic use often involves combinations to treat mild to moderate pain or for continued treatment of gout when single therapy cannot be used. Pathak et al. performed co-crystallization of the PCM, INC, and MFA drugs with various methods such as solvent evaporation, grinding, the addition of antisolvents, and ultrasound-assisted techniques. Co-crystals were formed by solvent evaporation, which worked best compared to the other techniques. The PCM–INC and PCM–MFA co-crystals created supramolecular hetero-synthons with strong hydrogen bonds between COOH–N and COOH–O . The formation of NSAID drug–drug co-crystals can improve drug stability, e.g., the piroxicam (PXC) co-crystal with an aniline–nicotinic acid derivative NSAID, namely clonixin (CNX).
Based on stability testing, there was no change in color and no transformation into a hydrate at a storage temperature of 25 °C and humidity up to 95% for 4 weeks. Co-crystallization of PXC–CNX aimed to inhibit the phase transition from the anhydrous to the hydrated form, which changes the physicochemical and pharmacokinetic properties. CNX was chosen because it has a synergistic therapeutic effect with PXC, and its structure has carboxyl groups suitable for a co-former. Moreover, it can form both neutral and zwitterionic molecules. In the formation of the PXC–CNX co-crystal, three parameters need to be considered, i.e., the hydrogen bond donor ability, the acceptor ability, and the polarity of the solvent used. Moderately polar solvents can be used to produce the PXC–CNX co-crystal. Moreover, thermal analysis with DSC showed that the solvent molecule played an essential role in stabilizing the crystal structure. The calculation of the interaction energy with DFT showed that the homo-molecular interactions have higher energy than the hetero-molecular ones. Homo-molecular bonds act as a driving force for the decomposition of PXC–CNX compared to a solvent-free co-crystal. Based on screening with the slurry method, only the ethyl acetate (EA) solvent gave a different result. PXC–CNX–EA had a different C–H···C distance to the EA molecule, longer than with the other solvent molecules. In addition, EA is a non-toxic compound, so it is safe for food and drug formulations. The crystal structure of PXC–CNX was obtained after the solvent was evaporated, and the two molecules were linked via O–H···O hydrogen bonds between the COOH group on CNX and the OH group on the deprotonated PXC. The PXC zwitterion formed hydrogen bonds with the CNX zwitterion via N–H···O, comprising a protonated pyridine donor and a carbonyl amide acceptor .
Combined NSAID can have better anti-inflammatory activity than single drugs; for example, the diclofenac acid (DFA)–ethyl diclofenac (ED) co-crystal is more effective than diclofenac acid alone because only a small amount of ED is bound to plasma proteins. However, the esterification process that yields ED from DFA can produce unwanted by-products. Therefore, a co-crystal between ED and DFA was prepared and studied crystallographically. Thermal analysis with DSC gave a co-crystal melting point of around 103–104.3 °C, higher than that of ED (67.7 °C) and lower than that of DFA (173.1 °C). The FTIR spectrum showed no change. Thin-layer chromatography showed the same pattern after two accelerated stability tests, heating with microwave energy and storage at high humidity for seven days; thus, the ED–DFA co-crystal was chemically stable. Moreover, anti-inflammatory activity was studied in five groups of mice, showing that the ED–DFA co-crystal increased anti-inflammatory activity more than the single components. A co-crystal of meloxicam (MLX) and aspirin (ASP) was characterized by SCXRD, PXRD, and FTIR, which showed spectra different from the starting materials. Based on the pKa rule, the meloxicam–aspirin (MLX–ASP) system, with a ΔpKa value of 0.68, lies in the undefined salt–co-crystal region; a neutral phase was indicated by the C–N–C angle of 110.2°. In a comparative solubility study in pH 7.4 phosphate buffer at 37 °C, the solubility of MLX in the MLX–ASP co-crystal increased from 0.001 mg/mL to 0.22 mg/mL. Pharmacokinetic studies in male Sprague-Dawley rats at an oral dose of 1 mg/kg showed increased absorption of the MLX–ASP co-crystal, with a bioavailability of 69%, while that of pure MLX was only 16%. The MLX–ASP co-crystal also reached tmax four times faster than single MLX, attaining an MLX concentration of 0.51 μg/mL within 10 min.
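Salt-versus-co-crystal assignments like the ΔpKa = 0.68 result for MLX–ASP usually start from the empirical ΔpKa rule. The following is a minimal sketch of that heuristic; note that the exact thresholds vary between authors and regulatory guidances, and the example pKa values below are illustrative, chosen only to reproduce the reported ΔpKa, not taken from the cited study.

```python
# Empirical delta-pKa rule for salt vs. co-crystal classification.
# Thresholds vary in the literature; a commonly quoted heuristic is:
#   delta_pKa < 0  -> co-crystal expected (no proton transfer)
#   delta_pKa > 3  -> salt expected (proton transfer)
#   0..3           -> undefined region (salt-co-crystal continuum)

def classify(pka_base, pka_acid):
    """Return (delta_pKa, verdict) for a base/acid pair."""
    d = pka_base - pka_acid  # pKa(conjugate acid of base) - pKa(acid)
    if d < 0:
        return d, "co-crystal expected"
    if d > 3:
        return d, "salt expected"
    return d, "undefined (salt-co-crystal continuum)"

# Illustrative pKa values chosen to give the reported delta pKa of 0.68:
d, verdict = classify(pka_base=4.18, pka_acid=3.50)
print(f"delta pKa = {d:.2f}: {verdict}")
```

With a ΔpKa of 0.68 the pair falls in the undefined region, which is why the MLX–ASP assignment additionally relied on structural evidence such as the C–N–C angle.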
This rapid onset is an advantage of the MLX–ASP co-crystal, which is indicated as a reliever of mild to moderate acute pain. In this study, the ASP content of the MLX–ASP co-crystal was only 7.7 mg with MLX at 10 mg/kg, so it did not cause significant side effects. A multi-API co-crystal was also constructed between celecoxib (CEL) and tramadol HCl (TML·HCl), named CTC. Initially, the co-crystal was formed by a grinding method using isopropyl alcohol. This new phase is an ionic co-crystal, in which the chloride ion interacts with adjacent drug molecules through –N+H···Cl− and –OH···Cl hydrogen bonds. The maximum concentration of CEL in CTC was higher than that of single CEL, extending drug action by slowing drug release. Additionally, the combination increased efficacy, preventing pain through four mechanisms of action, and allowed lower therapeutic doses: the CTC dose was 100 mg (44 mg TML·HCl and 56 mg CEL), whereas the usual doses of CEL and TML·HCl are around 200–400 mg/day. Because of its very high efficacy, the co-crystal (TML·HCl–CEL) entered a phase I clinical trial in 2017 by Esteve (E-58425) and Mundipharma Research (MR308). In phase I, single pharmacokinetic (PK) doses of CTC and its reference products were compared, singly and in combination. Subjects were randomly assigned an initial dose of 200 mg of CTC, equivalent to 88 mg of tramadol and 112 mg of CEL. The AUC of CTC was comparable to the references, but its Cmax was lower and its tmax longer than those of tramadol. This result is consistent with previous research in which the Cmax reduction was proportional to the slower dissolution rate. A phase II clinical trial in patients followed, testing six treatment arms (CTC 50, 100, 150, and 200 mg; tramadol 100 mg; and placebo). Efficacy was initially assessed on the basis of differences in pain intensity across the population.
Based on the phase II results, the potential benefit of CTC outweighed the risk. Furthermore, CTC at the 100, 150, and 200 mg doses was more effective than tramadol 100 mg and placebo for treating moderate to severe acute pain. CTC development is currently awaiting the results of a phase III trial; this is the first NSAID co-crystal to enter clinical trials, while the MLX–ASP co-crystal is still in the in vivo testing phase. In gout therapy, NSAID are the first-line drugs to reduce pain, but uric acid levels in the body also need to be minimized, either by increasing excretion with uricosurics or by inhibiting uric acid formation with xanthine oxidase (XO) inhibitors. To achieve optimal therapy, Modani et al. used co-crystallization to combine PXC with febuxostat (FBX). FBX is a novel non-purine XO inhibitor used to treat hyperuricemia in gout patients. Based on several studies, FBX is effective at inhibiting lung inflammation in animals caused by oxidative stress, has been shown to accelerate restoration of the pulmonary endothelial barrier, and is potent in treating mild to moderate COVID-19 infection by overcoming the early-phase pneumonia caused by the coronavirus. A single crystal of the piroxicam–febuxostat (PXC–FBX) co-crystal with a 1:1 stoichiometric ratio was obtained by crystallization from acetonitrile. Co-crystal formation was confirmed by a decrease in the melting point, the unchanged O–H stretching of the carboxylic acids (indicating no proton transfer), the specific crystal habit, and differences in the diffractogram. A supramolecular synthon connects the carboxylic acid and azole groups through N–H···O hydrogen bonds, and the 2D packing is stabilized by N–H···O, C–H···N, O–H···O, and C–H···O interactions. The system is particularly interesting because both APIs come from the same BCS class.
Solubility testing across pH 1.2–7.4 gave a 2.5-fold higher solubility for FBX at pH 1.2, which continued to increase with pH except for a decrease at pH 4.5. At pH 6.8 and 7.4, the solubility of PXC increased significantly without affecting FBX; in effect, at pH 1.2 PXC acted as a co-former for FBX, whereas at pH 6.8 and 7.4 FBX acted as a co-former for PXC. In vitro, the FBX–PXC co-crystal improved the dissolution of PXC by up to 2.8-fold compared to pure PXC without affecting the dissolution of FBX. In addition, the FBX–PXC co-crystal had a stable crystal structure and altered mechanical properties, such as an increased flow rate and the formation of capping and lamination during compression, while showing good compressibility. NSAID, especially the anthranilic acid group, can also form multi-API co-crystals with antibacterial drugs. This combination aims to provide a unique combination drug for pain relief therapy and prevention of postoperative infections; hence, it minimizes or even replaces the use of opioids, reduces adverse effects, and improves the physicochemical properties of each API. For example, niflumic acid (NFA), an anthranilic acid class NSAID that selectively inhibits COX-2 and is widely used in patients with rheumatoid arthritis, forms a 1:1 co-crystal with caprolactam (CPR). The co-formers used to form NFA co-crystals are CPR and 2-hydroxypyridine (2HP). Crystal structure analysis showed that the phases that form lie between salt and co-crystal, bound through N–H···O and O–H···O hydrogen bonds that form dimers, together with C–H···F and C–H···O interactions. The NFA–CPR co-crystal has a low melting point of 83 °C, whereas the NFA–2HP co-crystal melts at 135 °C. Bhattacharya et al. performed a screening of hydrate salts and multi-API co-crystals of anthranilic acid derivatives with an antibacterial.
Two drug–drug co-crystals were obtained: flufenamic acid (FFA) with sulfamethazine (SFZ), and niflumic acid (NFA) with SFZ. In the crystal structures, hydrogen bonds form between the carboxylic acid groups of NFA/FFA and the sulfonamide (NH) groups and pyrimidine ring N atoms of SFZ to give synthon III. Each dimer unit is then connected by four pairs of identical N–H···O hydrogen bonds to form synthon IV. This structure is stabilized by C–H···π and π···π interactions. Machado et al. studied NSAID co-crystals formed from propionic acid derivatives with levetiracetam (LEV), expected to treat epilepsy accompanied by inflammation. LEV is an etiracetam enantiomer and an oral anti-epilepsy drug classified as BCS class I (good solubility and permeability), so it is expected to improve the physicochemical properties of the NSAID. Dissolution tests of the eutectic mixtures and co-crystals of arylpropionic acids (IBU, naproxen (NPX), and flurbiprofen) with LEV showed increased dissolution rates compared to the pure NSAID. LEV + (S)-IBU crystals were stable in accelerated stability tests over six months. However, the eutectic mixtures of the propionic acids and LEV melted at temperatures below those of the pure NSAID. This suggests that crystalline forms of propionic acid derivatives with LEV can also improve the pharmacokinetic parameters of propionic acid-derived NSAID. Surov et al. studied the formation of co-crystals of diclofenac (DFA) and diflunisal (DIF) with theophylline (THP), i.e., the DFA–THP and DIF–THP co-crystals; synthesis was carried out by grinding with the addition of mixed solvent droplets (acetonitrile, methanol, and water). The synthon formed is a hetero-synthon connected by O–H···N bonds involving the carboxylic acid of the API and the unsaturated N atom of the imidazole ring of theophylline.
The DFA–THP co-crystal has a lower lattice energy than DIF–THP because its packing consists only of dispersion energy. The DIF–THP co-crystal had a lower melting point than pure DIF, making the co-crystal less stable. The small co-crystal enthalpy indicated that the hydrogen bond energies were comparable to those of the parent compounds, and the packing was strengthened by weak van der Waals interactions. Based on the intrinsic dissolution results, co-crystallization increased the solubility of DFA by up to 1.3 times, whereas for DIF, the solubility was comparable to that of the pure compound. In this study, DIF–THP dissolution illustrated the classic "spring and parachute" concept. Apart from increasing solubility, co-crystallization also increased stability at different humidities. Aitipamula et al. studied oxaprozin (OXP) co-crystal formation with the co-formers 4,4′-bipyridine (4,4′-BP) and 1,2-BPE, and salt formation with piperazine, 2-amino-3-picoline, and anti-asthma drugs such as salbutamol (SAL). OXP belongs to the propionic acid group and is used to relieve rheumatoid arthritis; it is also thought to inhibit urate reabsorption by selectively inhibiting glucuronidation. There have been very few publications on the co-crystallization of OXP. OXP salt formation showed a decrease in intrinsic solubility and dissolution rate. Therefore, the OXP co-crystal can be utilized in an extended-release SAL tablet formulation to overcome the short half-life problem and lower the dosing frequency of SAL. The co-crystals form at a 1:0.5 stoichiometric ratio with hydrogen-bonding interactions through C–H···O and C–H···π contacts. The OXP–4,4′-BP co-crystal has a monoclinic crystal form with one OXP molecule and half a molecule of 4,4′-BP in the asymmetric unit, while the OXP–1,2-BPE co-crystal has a triclinic form with one OXP molecule and half a molecule of 1,2-BPE.
The co-crystals and the salts were relatively stable and not hygroscopic. Drug–drug NSAID co-crystals can influence clinical practice, especially in pain management, gout therapy, and osteoarthritis. This is related to the synergistic analgesic effect with less potential for side effects, and to improvements in the physicochemical properties and drug-release profile, which are expected to reduce dosages, patient complaints related to drug use, and the economic burden of treatment. In addition, based on the literature on NSAID drug–drug co-crystals, combining drugs of the same BCS class can improve stability and mechanical properties, such as flow rate and compressibility. NSAID co-crystals exhibit various supramolecular synthon motifs. For example, diflunisal (DIF) co-crystals show three hydrogen-bonding patterns in the Cambridge Structural Database (CSD): (1) a ring hetero-synthon with aromatic N, (2) a COOH–COOH homo-synthon, and (3) a four-ring homo-synthon between the OH and C=O of the acids. Interestingly, DIF and other salicylic acid (SA) derivatives have an ortho-hydroxy group whose steric resonance and hydrogen bonding are called the "ortho effect". Only eight DIF co-crystals have been created, with the following features: (a) a COOH homo-synthon between acid co-formers and o-hydroxy carboxylic acids when one of them has an electron-withdrawing group, (b) a COOH homo-synthon between acid co-formers and o-hydroxy carboxylic acids when the o-hydroxy acid has a competing group, such as OH or NH2, that can form hydrogen bonds with the co-former, and (c) a combination of motifs (a) and (b) with a COOH interaction distance of more than 3 Å; the type (c) motif is rare. Based on this feature exploration, donor ability follows the order COOH > NH > OH, with the most interactions found for DIF with COOH.
Indomethacin (INC) also forms a co-crystal with the saccharin (SAC) co-former through N–H···O hydrogen bonds between acid dimers in INC and imide dimers in SAC. The naproxen (NPX)–picolinamide (PA) co-crystal is formed from carboxylic acid–carboxamide dimers between NPX and PA, as shown in . PA exhibits an ortho effect, with a pKa value of 1.17. The PA carboxamide interacts with NPX via an acid–amide synthon; however, the SCXRD results did not match the asymmetric unit. The proton position determined by H nuclear magnetic resonance (NMR) shows that the H atom involved in hydrogen bonding at the synthon gives a distinct peak, with a delay of 20 s compared to the other H atom, determined by comparing the NMR data with predicted shift values. An RMSD of 0.09 Å was obtained by optimizing the atomic positions from the 0.2 Å SCXRD model, making the hydrogen bonds more symmetrical and reducing the difference between the two O–H···O hydrogen bonds. In c, two bonding models are proposed as alternatives to improve the SCXRD results: in model 1, only the H atom in the OH bond is moved, whereas in model 2, the amide proton in PA also moves along the N–H hydrogen bond. The H atom positions were validated by density functional theory (DFT) geometry optimization based on a significant decline in potential energy, indicating that naproxen–picolinamide (NPX–PA) is a co-crystal and not a salt. Lornoxicam (LNX) belongs to the oxicam class of NSAID. LNX has more selective anti-inflammatory activity without affecting the digestive tract. It can be used for osteoarthritis therapy and postoperative pain relief of the head and neck, and it also reduced postoperative third-molar pain in the first phase of perioperative pain management. LNX has better tolerability and safety than tramadol and is comparable to tramadol in postoperative analgesic therapy.
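The 0.09 Å figure quoted for the NPX–PA refinement is the usual root-mean-square deviation (RMSD) between matched atomic positions in two structural models. A minimal sketch of that calculation, using made-up coordinates rather than the actual NPX–PA structure:

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two matched coordinate sets (Å)."""
    assert len(coords_a) == len(coords_b), "coordinate sets must be matched"
    sq = sum((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
             for (xa, ya, za), (xb, yb, zb) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Hypothetical positions: an SCXRD model vs. a DFT-optimized model (Å)
scxrd = [(0.00, 0.00, 0.00), (1.09, 0.00, 0.00), (1.43, 1.02, 0.00)]
dft   = [(0.02, 0.01, 0.00), (1.05, 0.03, 0.02), (1.40, 1.00, 0.05)]
print(f"RMSD = {rmsd(scxrd, dft):.3f} Å")
```

A small RMSD after geometry optimization, as reported above, means the experimental and computed models agree closely atom by atom; this sketch assumes the two coordinate sets are already in the same frame (no superposition step).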
Oral administration of LNX takes 2.5 h to reach the maximum concentration (Cmax), limiting LNX therapy in providing an optimal analgesic effect. The co-crystallization technique has succeeded in changing the physicochemical properties of LNX using the liquid-assisted grinding (LAG) method. With this method, salt co-crystals of LNX were produced with benzoic acid and 2,4-dihydroxybenzoic acid co-formers, whereas neutral co-crystals were formed with the co-formers catechol, resorcinol, hydroquinone, and SAC at a 1:1 stoichiometric ratio. Based on the synthon co-crystal approach, the crystal system is orthorhombic with space group P212121 and one LNX molecule in the asymmetric unit. The decrease in the melting point relative to the parent drug indicates the formation of a multi-component system. Strong N+–H···O=C and N–H···O=C hydrogen bonds link the LNX molecules, reducing the conformational change of the amide bond relative to the ring. As a result, a zigzag band forms along the c axis through N+–H···O=S, C–H···O, and C–H···Cl interactions; C–H···S interactions then form a 2D sheet, which creates a 3D layered structure through C–H···O hydrogen bonding. Based on in vitro dissolution testing at the optimum temperature for LNX, the salt co-crystal has excellent solubility compared to the neutral co-crystal and LNX, i.e., up to 1.52-fold higher than LNX. LNX co-crystals were also formed by the neat grinding method with generally recognized as safe (GRAS) co-formers and drugs such as malonic acid (MAL), succinic acid, tartaric acid, anthranilic acid (phenamate), cinnamic acid, p-aminobenzoic acid, ferulic acid, urea (URE), sodium saccharin (SS), citric acid, and OXA. LNX–SS increases the solubility of the parent drug significantly, which is related to its low lattice energy, reducing interference from the solvent.
Co-crystallization decreases the partition coefficient, showing a transformation from a hydrophobic to a hydrophilic compound. The LNX–SS co-crystal has good stability at extreme conditions based on stability testing for 30 days at 40 °C and 75% humidity. Co-crystals of diclofenac (DFA) with pyridine-based co-formers, featuring the acid–pyridine synthon, have been reported as nine new solid forms prepared by evaporative co-crystallization. DFA formed co-crystals with 1,2-bis(4-pyridyl)ethane (BPE), 1,3-di(4-pyridyl)propane, and 4,4′-bipyridine at a 2:1 stoichiometric ratio; all three co-formers have pyridine groups on both sides. The packing of the DFA co-crystals is highlighted in a,b. At the same time, DFA formed salt co-crystals with two amino-substituted co-formers, namely 2-aminopyridine (2-apy) and 3-aminopyridine. In the DFA–2-apy salt co-crystal, there is weak C–H bonding, and a tetramer aggregate in the packing connects the synthons, as shown in c. The formation of NSAID co-crystals with amino acids as co-formers, called zwitterionic co-crystals, has been widely published. For example, indomethacin–proline (INC–PRO) 2H2O, a co-crystal hydrate, exhibits the supramolecular synthons N–H3+···O− and O–H···O. The INC–PRO co-crystal consists of internal hydrophilic groups and hydrophobic groups on the surface, increasing the solubility and permeability of INC. Besides INC, a diclofenac–proline (DFA–PRO) co-crystal was obtained as a stabilized phase linked by intramolecular N–H···O and C–H···C hydrogen bonds. The difference in the C–O bond lengths of the DFA and PRO carboxylate groups, > 0.07 Å, indicates that DFA–PRO is a zwitterionic co-crystal. Furthermore, Nugrahani et al. developed the DFA–PRO co-crystal into a salt hydrate co-crystal to increase the multi-component solubility and stability over the DFA–PRO co-crystal and the alkaline diclofenac salt.
Besides sodium diclofenac–proline tetrahydrate (SDPT), a co-crystal of sodium diclofenac–proline monohydrate (SDPM) was confirmed by structural determination with SCXRD at −180 °C. The new phase consisted of diclofenac, sodium, proline, and water (1:1:1:1). A stability study of the two co-crystals was carried out under extreme drying and humidity and evaluated by differential thermal analysis (DTA)/TG/DSC. Based on the diffractogram results, SDPT was stable at high humidity (94 ± 2% RH/25 ± 2 °C) for up to 15 days and did not separate into the starting materials. Meanwhile, SDPM quickly changed to SDPT under these conditions; moreover, the transformation could be reversed under dry conditions, so SDPT is a more stable co-crystal than SDPM. These results show that water plays a critical role in SDPT, mediating the interactions between the components of the tetrahydrate salt co-crystal. The high solubility of SDPM arises because there is a high-affinity region of Na+ and water molecules, so the lattice quickly dissolves and breaks down. The dissolution test was carried out at pH 1.2 and 6.8, since, based on previous studies, the sodium diclofenac (SD) profile increases at moderate to alkaline pH. In this test, SDPM and SDPT dissolved faster than SD, with the superior increase shown by SDPM. This result indicates that the monohydrate crystal lattice is smaller and has a looser, broader space than the tetrahydrate form, so it dissolves faster than SDPT at pH 6.8. Furthermore, the DFA–PRO co-crystal was developed from diclofenac potassium, resulting in a salt co-crystal hydrate consisting of potassium, DFA, L-proline, and water (1:1:1:4). Bond formation between NSAID and co-formers can be predicted computationally with DFT at the B3LYP/aug-cc-pVDZ level, depending on the constituent molecules forming a supramolecular synthon.
Co-crystal screening with various co-formers and methods has been established; one approach is calculating the van Krevelen and Hoftyzer solubility parameters, using the partial solubility parameters of the two compounds to find the Δδ factor with the equation:

Δδ = [(δd2 − δd1)^2 + (δp2 − δp1)^2 + (δh2 − δh1)^2]^0.5 (1)

The partial solubility parameters are dispersion (δd), polarity (δp), and hydrogen bonding (δh). Good miscibility can be achieved if Δδ ≤ 5 MPa^0.5. The difference in total solubility parameter between the drug and the carrier, Δδt, is another means of predicting miscibility:

Δδt = |δt2 − δt1| (2)

Here, subscripts 1 and 2 denote the carrier and the drug; materials are miscible when Δδt ≤ 7 MPa^0.5, while systems with Δδt ≥ 7 MPa^0.5 are immiscible. This solubility parameter method was used in developing aceclofenac (ACF) co-crystals with several co-formers, namely gallic acid (GLA), CA, MLA, NIC, D-tartaric acid (TCA), URE, and vanillic acid; all the co-formers could form co-crystals except TCA. ACF, 2-[2-[2-[(2,6-dichlorophenyl)amino]phenyl]acetyl]oxyacetic acid (C16H13Cl2NO4), is an analog of DFA in the form of a glycolic acid ester. It is used as a first-line drug for the treatment of rheumatoid arthritis and osteoarthritis. The gastrointestinal side effects of ACF are relatively lower than those of other non-selective NSAID and comparable to CEL. ACF does not interact directly with COX enzymes; rather, as a prodrug, ACF produces the active metabolite DFA, which inhibits COX activity by forming hydrogen bonds with the TYR355 and SER530 residues of the COX enzymes. The ACF–NIC and ACF–GLA co-crystals obtained by the solvent evaporation technique at 1:1 stoichiometry were characterized by SEM analysis: pure ACF crystals were large and regular, while the co-crystals had different, more irregular shapes.
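Equations (1) and (2) are straightforward to evaluate once the partial parameters δd, δp, and δh of both components are known. A minimal sketch of the screening calculation, with invented parameter values rather than the actual ACF/co-former data:

```python
# Van Krevelen / Hoftyzer solubility-parameter screening (Eqs. (1)-(2)).
# All delta values are in MPa^0.5; the numbers below are invented
# for illustration, not taken from the ACF study.

def delta_delta(p1, p2):
    """Eq. (1): combined partial-parameter distance between components."""
    (d1, pol1, h1), (d2, pol2, h2) = p1, p2
    return ((d2 - d1) ** 2 + (pol2 - pol1) ** 2 + (h2 - h1) ** 2) ** 0.5

def delta_t(p):
    """Total solubility parameter from the partial parameters."""
    d, pol, h = p
    return (d ** 2 + pol ** 2 + h ** 2) ** 0.5

drug     = (18.0, 6.0, 9.0)    # (delta_d, delta_p, delta_h), hypothetical
coformer = (17.0, 8.0, 11.0)

dd = delta_delta(drug, coformer)
dt = abs(delta_t(coformer) - delta_t(drug))   # Eq. (2)
print(f"Eq.(1): ddelta   = {dd:.2f} MPa^0.5 -> miscible: {dd <= 5}")
print(f"Eq.(2): ddelta_t = {dt:.2f} MPa^0.5 -> miscible: {dt <= 7}")
```

A drug/co-former pair passing both cut-offs (Δδ ≤ 5 MPa^0.5 and Δδt ≤ 7 MPa^0.5) would be shortlisted for experimental screening, which is how the text describes the ACF co-former selection.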
Based on the FTIR results, five hydrogen-bonding motifs could arrange the ACF–NIC co-crystal: (1) formation between the amide group of NIC and the acid group of ACF, (2) a chloride–amide hetero-synthon, (3) ACF linked to NIC by an acid–pyridine hetero-synthon (synthon I) or an acid–amide hetero-synthon (synthon V), (4) amide–amide dimers of NIC with ACF groups bound via an acid–amide hetero-synthon, and (5) ACF forming synthon I together with an amide–chloride hetero-synthon. In developing co-crystals, the method is crucial to producing the expected physicochemical properties, as demonstrated by the indomethacin–proline (INC–PRO) system prepared by both LAG and solvent evaporation. The LAG method yielded the more polar product, thereby improving the solubility and intrinsic dissolution rate (IDR) under each tested pH condition; high solubility indicates increased bioavailability and may improve the pharmacokinetic profile of INC. Jafari et al. categorized co-crystal production routes in two ways: solution-based and solid-state-based. Evaporative co-crystallization, cooling crystallization, reaction co-crystallization, isothermal slurry conversion, and the supercritical antisolvent (SAS) technique are solution-based methods. Conversely, the solid-state process combines the solid materials directly, with pressure applied manually (mortar and pestle) or mechanically (automatic ball mill). The most common solid-state methods are neat (dry) grinding (NG), LAG, and hot-melt extrusion. The direct solid-state process significantly reduces solvent usage, so it is preferable from a green chemistry perspective. However, different techniques may not produce the same form and physicochemical properties, as evidenced by several publications. For example, Evora et al. prepared co-crystals by three methods: annealing a mortar-ground mixture at 80 °C, annealing at room temperature after neat ball-mill grinding, and ethanol-assisted (10 μL) ball milling.
The neat mill grinding method produced only a few crystals, while annealing the mixture at 80 °C formed new solid phases with melting points around 113–146 °C. In this study, eight co-crystals were created by the ball milling method. In contrast, the diflunisal (DIF) co-crystal with pyrazine, which has a diamond-like crystalline form, was formed by solution crystallization; DIF can also be crystallized with a greener solvent using OXA as the co-former. The solution method was then compared to LAG with ethanol and to NG. FTIR observation at specific times showed significant changes in the spectra. With longer grinding, the specific peak transformation of the co-crystal became more apparent as the peaks of the parent compounds decreased and disappeared. The regions at 1500–2500 cm−1 and 2500–4000 cm−1 are the specific areas showing the evolution of co-crystal formation, with a gradual shift of the carbonyl group to a lower wavenumber, i.e., from 1693 cm−1 to 1685 cm−1 after 30 min of grinding. This phenomenon is related to the decreased vibrational energy of the carbonyl caused by the formation of hydrogen bonds between the heterocyclic N and the PRO group. Besides the carbonyl, the OH peak also shifted from 3324 cm−1 to 3270 cm−1, accompanied by a new band at 3170 cm−1 attributed to the carboxylic OH group of DFA forming hydrogen bonds with the PRO carbonyl group. Accordingly, this shift was accompanied by a change in the PRO carbonyl bands from 1652 and 1617 cm−1 to 1623 and 1616 cm−1. A new peak appeared at 1968 cm−1 due to O···H–N hydrogen bonds, which emerged after 2 min of grinding and became a sharp band after 60 min. The specific peaks of the co-crystal appeared consistently at 3270 and 3170 cm−1.
The dynamics of DFA–PRO co-crystal formation by the NG method showed the same FTIR pattern as the LAG method; however, initial co-crystal formation took longer with NG (within 10 min) than with LAG (within 2 min). This result is consistent with previous research showing that adding solvent increases molecular diffusion, thereby increasing the interaction between the API and co-former. Sevukarajan et al. successfully synthesized an ACF–NIC (1:1) co-crystal by the NG method that showed better solubility because it produced smaller crystals than the solvent evaporation (SE) method. Besides giving different solubilities, different co-crystallization methods can also yield different crystalline forms. Berry et al. obtained two co-crystal phases of ibuprofen–nicotinamide (IBU–NIC) by melting and slow evaporation methods, namely rectus-sinister ibuprofen–nicotinamide (RS-IBU–NIC) and sinister ibuprofen–nicotinamide (S-IBU–NIC). Guerain et al. also studied IBU–NIC co-crystal formation with different co-crystallization methods: (1) milling, (2a) crystallization by melting the mixture at 100 °C and then cooling to room temperature, (2b) crystallization by melting the mixture at the glass transition temperature, and (3) slow evaporation of the solvent. After characterization by PXRD, each co-crystallization method was found to produce different S-IBU–NIC co-crystals. Methods (1), (2b), and (3) resulted in an S-IBU–NIC similar to that found by Berry et al., whereas co-crystallization by process (2a) resulted in a new S-IBU–NIC co-crystal. In stability studies, the new S-IBU–NIC co-crystal transformed into the previously known S-IBU–NIC co-crystal at 65 °C. In this study, crystallization by heating close to the melting temperature produced polymorphs. Mefenamic acid (MFA) can form co-crystals with several co-formers, such as NIC, URE, and pyridoxine.
Of all the co-formers reported, NIC was selected for a study of co-crystal formation using the melt crystallization technique, because contaminants can form when MFA is co-crystallized by a grinding process. The technique was carried out by melting MFA with NIC (1:2) in a porcelain cup over a paraffin oil bath maintained at 200 °C, then incubating in a water-containing vessel over a water bath maintained at 90 °C, and finally drying at room temperature overnight. Based on characterization by PXRD, FTIR, DSC–thermogravimetric analysis (TGA), and thin-layer chromatography (TLC), the heating process did not cause co-crystal decomposition. Moreover, from the solubility test results, the solubility of MFA–NIC was higher than that of the parent compound. The supercritical anti-solvent (SAS) method has also been used to form NSAID co-crystals. The principle of this method is to dissolve the API and co-former to saturation in a suitable solvent; the addition of CO2 then reduces the solubility of the API and co-former, inducing precipitation of the NSAID co-crystal. SAS methods can compose naproxen (NPX)–NIC co-crystals at a 2:1 molar ratio with a solvent mixture (acetone and CO2) under high pressure (10 MPa) at 298.15–310.65 K. Neurohr et al. also investigated the SAS technique for NPX co-crystallization alongside conventional techniques, as shown in . PXRD and FTIR analysis concluded that the SAS technique produced NPX–NIC co-crystals with the same characteristics as conventional co-crystallization. Wichianphong et al. formed MFA–NIC co-crystals using the gas anti-solvent (GAS) method under conditions optimized with a Box–Behnken experimental design.
Parameters that can affect the process were investigated, namely temperature (T), the molar ratio of co-former to drug (C), and the percentage of drug saturation (S) in solution, against the response t63.2 (the time required to dissolve 63.2% of the drug). The resulting co-crystals were also compared with products from conventional methods and with physical mixtures. Based on the differences in melting point, PXRD pattern, and FTIR spectrum, the formation of a new phase, the MFA–NIC co-crystal, was confirmed. The fastest dissolution time (5.07 min) was obtained at 450 °C, a co-former-to-drug ratio of 5:1, and 70% drug saturation. The t63.2 values obtained from the experiments and from the model equation agreed and met the requirement of R2 ≥ 80%, with R2 values of 96.25% and 89.51%; hence, the model provides a good correlation. ANOVA showed that S and C significantly affected t63.2 (p < 0.05 and F = 14.27): at low T, t63.2 could reach its minimum value if S and C were high. However, too high an S value can prevent the formation of MFA–NIC co-crystals while giving a faster dissolution time. In addition to DSC, PXRD, and FTIR, the MFA–NIC co-crystals from the GAS method were characterized by SEM, which showed that their form differed from the parent compounds and from the co-crystal produced by the conventional method. In the dissolution test, the co-crystals from both methods increased the dissolution rate. The advantage of this method is a uniformly shaped co-crystal of high purity, owing to filtration to a particular size and washing with CO2 to remove residual solvent from the MFA–NIC co-crystal. On the other hand, this anti-solvent technique depends on the composition of the fluid used and can cause heterogeneity.
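The t63.2 response used in this optimization comes from the Weibull dissolution model, m(t) = 1 − exp(−(t/Td)^b), in which the scale parameter Td is exactly the time at which 63.2% (1 − 1/e) of the drug has dissolved, regardless of the shape parameter b. A minimal sketch, using the reported 5.07 min value as Td (the shape parameter below is arbitrary, for illustration):

```python
import math

def weibull_dissolved(t, td, b=1.0):
    """Fraction of drug dissolved at time t under the Weibull model."""
    return 1.0 - math.exp(-((t / td) ** b))

td = 5.07  # min, the t63.2 reported for the optimized GAS co-crystal
for t in (td, 2 * td):
    frac = weibull_dissolved(t, td)
    print(f"t = {t:.2f} min -> {100 * frac:.1f}% dissolved")
# At t = Td the fraction dissolved is 1 - 1/e ~ 63.2%, for any shape b.
```

This is why t63.2 is a convenient single-number summary for comparing dissolution runs: fitting the profile to the Weibull model reduces each run to its Td (and optionally b) for the ANOVA described above.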
As pharmaceutical engineering technology advances, co-crystallization techniques continue to be developed to improve solubility and permeability; one such direction is the application of nanotechnology to co-crystallization. Nanotechnology is a well-established approach for increasing the therapeutic index and overcoming the formulation challenges of poorly soluble compounds. Several studies have shown that the nano-co-crystal approach increases the solubility and dissolution of poorly soluble drugs. This result is related to the co-crystal size being reduced to the nanoscale (less than 100 nm), which increases the surface area and hence the dissolution rate. Increased dissolution is very useful for enhancing bioavailability. Moreover, nano-co-crystal engineering can reduce the need for toxic solvents and surfactants and enables formulations for routes of administration where size is a critical factor (injections, ophthalmics, and topicals). The nano-co-crystal approach has limitations: it is suitable only for BCS class II drugs, requires costly instrumentation, and its formation and stability depend on the drug molecule, so only certain compound classes meet the requirements. The large nano-co-crystal surface, with its high free energy, or differences in surface charge can cause aggregation. Furthermore, an increase in solubility beyond the saturation point will cause recrystallization into larger particles, known as Ostwald ripening. This can be overcome by selecting a stabilizer in the nano-co-crystal formulation to protect the particle surface, reducing the free energy of the system and the interfacial tension of the nano-co-crystal. The nano diclofenac–proline co-crystal (NDPC) is formed by combining two techniques, i.e., top-down with the NG method and bottom-up with microwave-assisted rapid evaporation.
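The surface-area argument above is the Noyes–Whitney relation, dM/dt = D·A·(Cs − C)/h: for a fixed mass of monodisperse spheres, total area scales as 1/radius, so shrinking particles from the micro- to the nanoscale multiplies the initial dissolution rate accordingly. A minimal Python sketch with hypothetical physical constants (D, h, Cs, and the radii are illustrative assumptions, not measured values):

```python
import math

# Noyes-Whitney: dM/dt = D * A * (Cs - C) / h.
# For a fixed mass of monodisperse spheres, total area A = 3m / (rho * r),
# so area (and the initial dissolution rate) scales as 1 / radius.

def dissolution_rate(D, A, Cs, C, h):
    """Mass dissolved per unit time (Noyes-Whitney)."""
    return D * A * (Cs - C) / h

def total_surface_area(mass, density, radius):
    """Total area of monodisperse spheres of the given radius."""
    n = mass / (density * (4.0 / 3.0) * math.pi * radius**3)
    return n * 4.0 * math.pi * radius**2

# Hypothetical values: 1 mg of powder, a typical small-molecule density,
# diffusion layer and diffusivity chosen only for illustration.
mass, density = 1e-6, 1300.0            # kg, kg/m^3
D, h, Cs, C = 5e-10, 30e-6, 0.05, 0.0   # m^2/s, m, kg/m^3, kg/m^3

rate_micro = dissolution_rate(D, total_surface_area(mass, density, 5e-6), Cs, C, h)
rate_nano = dissolution_rate(D, total_surface_area(mass, density, 50e-9), Cs, C, h)
print(round(rate_nano / rate_micro))  # 100: a 100x smaller radius, 100x faster rate
```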
The production of NDPC by NG produced pure nano-co-crystals with a particle size of around 857.9 nm and a polydispersity index (PI) of 0.353 after 6 h. However, this method has limitations related to co-crystal instability. The best-sized NDPC, at 598.2 nm and PI 0.278, was obtained by the rapid evaporation method with ethanol as the solvent and 776 W microwave energy for 8–12 min. Microwave energy drives molecular rotation, promoting the intermolecular bonding interactions that form the co-crystal. To stabilize NDPC, sodium lauryl sulfate (SLS) was added as a stabilizer, giving a zeta potential of −660 mV, indicating that the particles are stable and do not agglomerate. In addition, SLS imparts a negative surface charge that helps disperse the nano-co-crystal suspension. A scale-up process was carried out to produce pure crystals on the 10 g scale using a cooling crystallization method without seeding (adding crystal seeds), equipped with a temperature sensor. This method used heating and reflux to dissolve the co-crystal in ethyl acetate, then slowly lowered the temperature with an optimized stirring cycle. Based on PXRD and DSC analysis, the scaled-up indomethacin–saccharine (INC–SAC) co-crystal product had the same purity and yield as the small-scale solvent evaporation method using ethyl acetate. Apart from these methods, a combined method (SAS and cooling co-crystallization) could scale up the INC–SAC co-crystal. Cooling time is a factor in optimizing production conditions, as cooling accelerates the precipitation of the co-crystal; a longer post-nucleation cooling time resulted in a greater amount of INC–SAC co-crystal with smaller particle sizes. Based on studies conducted by Skorupska et al.
to increase solubility and prevent degradation, a modified drug delivery system was studied by inserting the naproxen–picolinamide (NPX–PA) co-crystal into mesoporous silica particles (MSP) using a thermal solvent-free method, heating a mixture of co-crystal and MSP at 100 °C for 2 h. The NPX–PA co-crystal–MSP complex shown in was prepared to protect the API from environmental effects, carry the drug across the cell membrane, accelerate the drug's action, increase treatment efficiency, and deliver the drug to specific target organs. Two MSPs with different pore sizes were used; NPX–PA was successfully inserted into SBA-15 (100 Å), while MCM-41 (37 Å) acted only as a separating medium. Because the pore size of MCM-41 is smaller than NPX, the drug stuck to the outer wall of the MSP, so SBA-15 is more suitable for the NPX–PA co-crystal. Sohrab et al. conducted a study to observe the effect of water-soluble polymers on the aceclofenac–nicotinamide (ACF–NIC) co-crystal (1:1) obtained by the solvent evaporation and NG methods. PVP K30, hydroxypropyl methylcellulose (HPMC), sodium starch glycolate, and carboxymethylcellulose sodium (CMC-S) were mixed with the co-crystals, and the mixtures were then tableted by the wet granulation method. The addition of water-soluble polymers lowered the melting point of the co-crystals and decreased the lattice energy, thus increasing the dissolution rate of ACF. Based on dissolution testing with the USP type I method in phosphate buffer pH 7.5, the ACF–NIC co-crystals without the 3% water-soluble polymer showed 78.9% and 78.5% drug release, while the co-crystal formulations MUNG01 (3% PVP K30), MUNG02 (3% HPMC E5), MUSE03 (3% SSG), and MUSE04 (3% CMC-S) showed maximum releases of 99.1%, 97.51%, 99%, and 98.25%. Next, ACF was also developed into a topical formulation, which was expected to have a high penetration rate through each layer of the skin to treat pain locally, effectively and safely. Sharma et al.
developed a nanoliposome formulation of the ACF co-crystal with the amino acid lysine (LYS) to increase penetration into the skin. ACF–LYS was encapsulated in liposomes formed from lipid and cholesterol in a 70:30 ratio, dissolved in chloroform, and evaporated. The ACF–LYS liposome suspension was then incorporated into a carbopol 940 hydrogel vehicle to increase viscosity and form a translucent gel. The co-crystal-loaded liposome gel (COC–LG) formulation, with an average size of 120.4 ± 1.03 nm, showed greater drug penetration into the deeper skin layers. The co-crystal had a spherical shape based on imaging studies with confocal laser scanning microscopy and transmission electron microscopy. The COC–LG formulation had a lower viscosity than the marketed gel, which indicated good contact time and dispersion, resulting in maximum therapeutic effect. Based on ex vivo testing using the back skin of Wistar rats, the formula penetrated better and remained in the skin 2.31 times longer than the marketed gel. The increased penetration was associated with the interaction of the biocompatible component (phospholipon) with the skin. The COC–LG formulation increased the drug concentration in a short time, did not interfere with skin integrity, did not cause inflammation of the dermal tissue (as other gels did), and was more effective as an analgesic and anti-inflammatory for the treatment of arthritis and osteoarthritis. Co-crystals can also be composed of APIs, metals, and organic materials to accelerate NSAID action and prolong the half-life, thereby increasing the drug's duration of action. Hartlieb et al. formed an IBU–metal–organic co-crystal called a metal–organic framework (MOF). The MOF here is based on γ-cyclodextrin (γ-CD) combined with metal cations such as potassium (K+) or sodium (Na+). The CD–MOF was used to create an organic porous framework loaded with IBU.
The co-crystals produced by the diffusion of ethanol vapor into a solution of CD and ibuprofen potassium salt formed a cyclodextrin metal–organic framework (CD–MOF-1) co-crystal with an IBU loading of 23%. One issue with IBU is that it exists as an active S-enantiomer and a less active R-enantiomer; although CD–MOF-1 absorbed the less active enantiomer, the MOF succeeded in separating the enantiomers of IBU. The CD–MOF-1 co-crystal had better atmospheric stability than the IBU salt and was not hygroscopic, as seen from the PXRD pattern, which showed no co-crystal degradation at ambient humidity. Based on bioavailability testing in female rats, the pharmacokinetic data showed that the CD–MOF-1 co-crystal had a Cmax, area under the curve (AUC), and half-life twice those of the IBU salt. This result indicated that IBU absorption was fast, with an onset of 10–20 min. Co-crystallization of two APIs has also been arranged to improve the physicochemical properties of a drug combination. This strategy benefits NSAID drugs, whose therapeutic use is often combined to treat mild to moderate pain or to continue gout treatment when single therapy is insufficient. Pathak et al. performed co-crystallization of PCM, INC, and MFA with various methods, such as solvent evaporation, grinding, antisolvent addition, and ultrasound-assisted techniques. The co-crystals formed by solvent evaporation were the best compared to the other techniques. The PCM–INC and PCM–MFA co-crystals formed supramolecular hetero-synthons with strong hydrogen bonds between COOH–N and COOH–O. The formation of NSAID drug–drug co-crystals can also improve drug stability, as in the piroxicam (PXC) co-crystal with an aniline–nicotinic acid derivative NSAID, namely clonixin (CNX).
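Pharmacokinetic comparisons like the CD–MOF-1 result above (Cmax, AUC, half-life) are derived from the plasma concentration–time curve; AUC is conventionally estimated with the linear trapezoidal rule. A minimal Python sketch using hypothetical sampling data (not values from the cited study):

```python
def auc_trapezoid(times, concs):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    return sum(
        (t2 - t1) * (c1 + c2) / 2.0
        for (t1, c1), (t2, c2) in zip(zip(times, concs), zip(times[1:], concs[1:]))
    )

# Hypothetical sampling times (h) and plasma concentrations (ug/mL)
times = [0, 0.25, 0.5, 1, 2, 4, 8]
concs = [0.0, 0.8, 1.2, 1.0, 0.6, 0.3, 0.1]

cmax = max(concs)                 # highest observed concentration
tmax = times[concs.index(cmax)]   # time at which Cmax occurs
print(cmax, tmax, round(auc_trapezoid(times, concs), 3))  # 1.2 0.5 3.4
```

A doubled Cmax and AUC, as reported for CD–MOF-1 versus the IBU salt, would simply double the first and last of these computed values for the same sampling schedule.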
Based on stability testing, there was no change in color and no transformation into a hydrate at a storage temperature of 25 °C and humidity up to 95% for 4 weeks. Co-crystallization of PXC–CNX aimed to inhibit the phase transition from the anhydrous to the hydrated form, which changes the physicochemical and pharmacokinetic properties. CNX was chosen because it has a synergistic therapeutic effect with PXC, its structure carries carboxyl groups suitable for a co-former, and it can form both neutral and zwitterionic molecules. In the formation of the PXC–CNX co-crystal, three parameters need to be considered: the hydrogen bond donor and acceptor abilities and the polarity of the solvent used. Moderately polar solvents can be used to produce the PXC–CNX co-crystal. Thermal analysis with DSC showed that the solvent molecule plays an essential role in stabilizing the crystal structure. Calculation of the interaction energies with DFT showed that the homo-molecular interactions have higher energy than the hetero-molecular ones; homo-molecular bonds act as a driving force for the decomposition of solvated PXC–CNX compared to the solvent-free co-crystal. Based on screening with the slurry method, only the ethyl acetate (EA) solvate differed: in PXC–CNX-EA, the C–H···C distance to the EA molecule was longer than for other solvent molecules. In addition, EA is a non-toxic compound, so it is safe for food and drug formulations. The crystal structure of PXC–CNX was obtained after the solvent was evaporated; the two molecules are linked via O–H···O hydrogen bonds between the COOH group of CNX and the OH group of the deprotonated PXC. The PXC zwitterion forms hydrogen bonds with the CNX zwitterion via N–H···O, comprising a protonated pyridine donor and a carbonyl amide acceptor.
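Whether an acid–base pair such as PXC–CNX crystallizes with proton transfer (salt or zwitterion) or as a neutral co-crystal is commonly screened with the ΔpKa rule: ΔpKa = pKa(base) − pKa(acid), with values below about 1 favoring a co-crystal and above about 3 favoring a salt. A minimal Python sketch of this heuristic (the thresholds are the commonly cited ones, and the pKa inputs are illustrative, not measured values for these APIs):

```python
def classify_delta_pka(pka_base, pka_acid):
    """Heuristic delta-pKa rule for salt vs. co-crystal formation."""
    delta = pka_base - pka_acid
    if delta < 1:
        return delta, "co-crystal (no proton transfer expected)"
    if delta > 3:
        return delta, "salt (proton transfer expected)"
    return delta, "ambiguous (salt-co-crystal continuum)"

# Illustrative values: a weakly basic site paired with a carboxylic acid
delta, outcome = classify_delta_pka(4.18, 3.50)
print(round(delta, 2), outcome)  # 0.68 co-crystal (no proton transfer expected)
```

The rule is only a screen; crystallographic markers such as bond angles and the presence or absence of acid O–H stretching, as used in the studies above, remain the decisive evidence.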
Combined NSAIDs can have better anti-inflammatory activity than single drugs, e.g., the diclofenac acid (DFA)–ethyl diclofenac (ED) co-crystal, which is more effective than diclofenac acid because only a small amount of ED is bound in plasma. However, the esterification process between the ester and DFA can produce unwanted products; therefore, a co-crystal between ED and DFA was made, and crystallographic studies were carried out. Thermal analysis with DSC showed that the melting point of the co-crystal, around 103–104.3 °C, was higher than that of ED (67.7 °C) and lower than that of DFA (173.1 °C). The FTIR spectrum did not change, and thin-layer chromatography showed the same pattern in two accelerated stability tests, heating with microwave energy and storage at high humidity for seven days. Thus, the ED–DFA co-crystal was chemically stable. Moreover, anti-inflammatory activity was studied using five groups of mice, which showed that the ED–DFA co-crystal increased anti-inflammatory activity more than the single components. A co-crystal of meloxicam (MLX) and aspirin (ASP) was characterized by SCXRD, PXRD, and FTIR, showing spectra different from the starting materials. Based on the pKa rule, with a ΔpKa value of 0.68, salt formation in the meloxicam–aspirin (MLX–ASP) solid is not expected, and the neutral phase is indicated by a CNC angle of 110.2°. In a comparative solubility study in phosphate buffer pH 7.4 at 37 °C, the solubility of MLX in the MLX–ASP co-crystal increased from 0.001 mg/mL to 0.22 mg/mL. Pharmacokinetic studies in male Sprague-Dawley rats at an oral dose of 1 mg/kg showed increased absorption of the MLX–ASP co-crystal, with a bioavailability of 69%, whereas that of MLX alone was only 16%. The MLX–ASP co-crystal also reached tmax four times faster than single MLX, achieving a concentration of 0.51 µg/mL MLX within 10 min.
This rapid onset is an advantage of the MLX–ASP co-crystal, which is indicated for mild to moderate acute pain relief. In this study, the ASP in the MLX–ASP co-crystal was only 7.7 mg and the MLX 10 mg/kg, so it did not cause significant side effects. A multi-API co-crystal was also constructed from celecoxib (CEL) and tramadol HCl (TML·HCl), named CTC. Initially, the co-crystal was formed by a grinding method using isopropyl alcohol. This new phase is an ionic co-crystal in which the chloride ion interacts with adjacent drug molecules through –N+H···Cl− and –OH···Cl− hydrogen bonds. The maximum concentration of CEL in CTC was higher than that of single CEL, extending drug action by slowing drug release. Additionally, this combination increased efficacy, preventing pain via four mechanisms of action at lower therapeutic doses: the CTC dose was 100 mg (44 mg TML·HCl and 56 mg CEL), whereas the usual doses of CEL and TML·HCl are around 200–400 mg/day. Because of its very high efficacy, the co-crystal (TML·HCl–CEL) entered a phase I clinical trial in 2017 by Esteve (E-58425) and Mundipharma Research (MR308). In phase I, single-dose pharmacokinetics (PK) of CTC were compared with its reference products, given singly and in combination. Subjects were randomly assigned an initial dose of 200 mg of CTC, equivalent to 88 mg of tramadol and 112 mg of CEL. The AUC was comparable, but the Cmax was lower than that of tramadol, and the tmax was longer. This result is consistent with previous research in which the Cmax reduction was proportional to the slower dissolution rate. The phase II clinical trial then continued in patients, with six treatment arms tested (CTC 50, 100, 150, and 200 mg; tramadol 100 mg; and placebo). The primary efficacy endpoint was the difference in pain intensity across the population.
Based on the results of phase II, the potential benefit of CTC outweighed the risk; CTC at doses of 100, 150, and 200 mg was more effective than tramadol 100 mg and placebo for treating moderate to severe acute pain. CTC development currently awaits the results of a phase III trial; this is the first NSAID co-crystal to enter clinical trials, while the MLX–ASP co-crystal is still in the in vivo testing phase. For gout therapy, NSAIDs are the first-line drugs to reduce pain, but uric acid levels in the body must also be minimized by increasing excretion or inhibiting uric acid formation using uricosurics and xanthine oxidase (XO) inhibitors. To achieve optimal therapy, Modani et al. used co-crystallization to combine PXC with febuxostat (FBX). FBX is a novel non-purine xanthine oxidase (XO) inhibitor used to treat hyperuricemia in gout patients. Based on several studies, FBX is effective at inhibiting lung inflammation in animals caused by oxidative stress, has been shown to accelerate restoration of the pulmonary endothelial barrier, and is potent in treating mild to moderate COVID-19 infection by overcoming the early-phase pneumonia caused by the coronavirus. A single crystal of the piroxicam–febuxostat (PXC–FBX) co-crystal with a stoichiometric ratio of 1:1 resulted from a crystallization technique using acetonitrile as the solvent. The formation of the co-crystal was confirmed by a decrease in the melting point, the absence of OH stretching from the carboxylic acids (indicating no proton transfer), the specific crystal shape, and differences in the diffractogram. A supramolecular synthon connects the carboxylate and azole groups with NH···O hydrogen bonds, and the 2D packing is stabilized by NH···O, CH···N, OH···O, and CH···O hydrogen-bond interactions. It becomes interesting when both APIs come from the same BCS class.
Solubility testing at pH 1.2–7.4 showed 2.5 times higher solubility for FBX at pH 1.2, which continued to increase as the pH increased, except for a decrease at pH 4.5. At pH 6.8 and 7.4, the solubility of PXC significantly increased without affecting FBX; thus, at pH 1.2 PXC acted as a co-former for FBX, while at pH 6.8 and 7.4 FBX acted as a co-former for PXC. In vitro, the FBX–PXC co-crystal improved the dissolution time of PXC by up to 2.8-fold compared to pure PXC without affecting the dissolution of FBX. In addition, the FBX–PXC co-crystal had a stable crystal structure and altered mechanical properties, such as an increased flow rate and the formation of capping and lamination during compression, while showing good compressibility. NSAIDs, especially the anthranilic acid group, can also form multi-API co-crystals with antibacterial drugs. This combination aims to provide a unique combination drug or pain relief therapy and prevent postoperative infections, thereby minimizing or even replacing the use of opioids, reducing adverse effects, and improving the physicochemical properties of each API. For example, niflumic acid (NFA), an anthranilic acid class NSAID that selectively inhibits COX-2 and is widely used in patients with rheumatoid arthritis, forms a 1:1 co-crystal with caprolactam (CPR). The co-formers used to form NFA co-crystals are CPR and 2-hydroxypyridine (2HP). Analysis of the crystal structures showed salt and co-crystal forms held together by NH···O and OH···O hydrogen bonds, which form dimers, along with CH···F and CH···O contacts. The NFA–CPR co-crystal has a low melting point of 83 °C, whereas the NFA–2HP co-crystal melts at 135 °C. Bhattacharya et al. performed a screening of hydrate salts and multi-API co-crystals of anthranilic acid NSAIDs with an antibacterial.
Two drug–drug co-crystals were obtained, namely flufenamic acid (FFA) with sulfamethazine (SFZ) and niflumic acid (NFA) with SFZ. In the crystal structure, hydrogen bonds form between the carboxylic acid groups of NFA/FFA and the sulfonamide (NH) groups and the N atoms of the pyrimidine ring of SFZ to give synthon III. Each dimer unit is then connected by four pairs of identical NH···O hydrogen bonds to form synthon IV. The structure is stabilized by CH···π and π···π interactions. Machado et al. conducted a study on NSAID co-crystals formed from propionic acid derivatives with levetiracetam (LEV), intended to treat epilepsy accompanied by inflammation. LEV is an etiracetam enantiomer and acts as an oral anti-epileptic drug classified as BCS class I (good stability and permeability), so it was expected to improve the physicochemical properties of the NSAID. Dissolution tests of the eutectic mixtures and co-crystals of the arylpropionic acids (IBU, naproxen (NPX), and flurbiprofen) with LEV showed an increased dissolution rate compared to the pure NSAIDs. The LEV + (S)-IBU crystals were stable in accelerated stability tests over six months, although the eutectic mixtures of the propionic acids and LEV melted at temperatures below those of the pure NSAIDs. This suggests that crystalline forms of propionic acid derivatives with LEV can also improve the pharmacokinetic parameters of propionic acid-derived NSAIDs. Surov et al. studied the formation of co-crystals of diclofenac (DFA) and diflunisal (DIF) with theophylline (THP), giving a DFA–THP co-crystal and a DIF–THP co-crystal; the synthesis was carried out by grinding with the addition of mixed solvent droplets (acetonitrile, methanol, and water). The synthon formed is a hetero-synthon connected by O···HN bonds involving the carboxylic acid of the API and an unsaturated N atom of the imidazole ring of theophylline.
The DFA–THP co-crystal has a lower lattice energy than DIF–THP because its packing consists only of dispersion energy. The DIF–THP co-crystal had a lower melting point than pure DIF, making the co-crystal less stable. The small co-crystal formation enthalpy indicated that the hydrogen-bond energies were comparable to those of the parent compounds, and the packing was strengthened by weak van der Waals interactions. Based on the intrinsic dissolution results, co-crystallization increased the solubility of DFA by up to 1.3 times, whereas for DIF the solubility was comparable to that of pure DIF. In this study, DIF–THP dissolution illustrated the classic "spring and parachute" concept. Apart from increasing solubility, co-crystallization also increased stability at different humidities. Aitipamula et al. studied oxaprozin (OXP) co-crystal formation with the co-formers 4,4-bipyridine (4,4-BP) and 1,2-BPE, and salts with piperazine, 2-amino-3-picoline, and anti-asthma drugs such as salbutamol (SAL). OXP belongs to the propionic acid group and is used to relieve rheumatoid arthritis; it is thought to inhibit urate reabsorption by selectively inhibiting glucuronidation. There have been very few publications on the co-crystallization of OXP. OXP salt formation showed a decrease in intrinsic solubility and dissolution rate. Therefore, an OXP co-crystal can be utilized in an extended-release SAL tablet formulation to overcome the short half-life problem and lower the frequency of SAL use. The co-crystals form at a stoichiometric ratio of 1:0.5 with hydrogen-bonding interactions through CH···O and CH···π contacts. The OXP–4,4-BP co-crystal has a monoclinic crystal form with one OXP molecule and half a molecule of 4,4-BP in the asymmetric unit, whereas the OXP–1,2-BPE co-crystal gives a triclinic crystal form with one OXP molecule and half a molecule of 1,2-BPE.
The co-crystal and the salt were relatively stable and not hygroscopic. Drug–drug NSAID co-crystals can influence clinical practice, especially in pain management, gout therapy, and osteoarthritis. This impact is related to the synergistic analgesic effect without the potential for additional side effects, improving the physicochemical properties and clinical release profile so that dosages, patient complaints related to drug use, and the economic burden of treatment can be reduced. In addition, as the literature on NSAID drug–drug co-crystals shows, combining drugs of the same BCS class can improve stability and mechanical properties, such as flow rate and compressibility. The development of NSAID co-crystals is a complicated and lengthy process involving prediction, screening, synthesis, characterization, pre-formulation, and studies of pharmacokinetic profiles, including absorption, distribution, metabolism, and excretion, followed by formulation, process development, preparation, declaration of an Investigational New Drug, and clinical trials. The number of stages that must be passed poses significant challenges in the development of co-crystals. 4.1. Co-Former Selection Choosing a co-former compatible with the API is one of the challenges of co-crystal formation. Until now, co-former selection has largely been done by screening co-crystals empirically: the crystals with the best physicochemical and pharmacological properties are selected, while the co-formers themselves are chosen by trial and error. An alternative method that is often used is the supramolecular synthon approach, where candidate co-formers are prioritized based on data analysis from the CSD. In determining co-formers, several parameters should be considered: the strength of the hydrogen bonds and Hansen's solubility parameters are used to predict the theoretical solubility of drugs and co-formers.
These are based on calculating partial solubility parameters with the Van Krevelen–Hoftyzer, Bagley, and Greenhalgh approaches. Cheney et al. conducted a study on the selection of co-formers for meloxicam (MLX) co-crystal formation to increase solubility and improve pharmacokinetics. In this study, co-former selection was carried out using the supramolecular synthon approach, analyzing the CSD to assess the reliability of hetero-synthon or homo-synthon supramolecular formation between the azole group of MLX and carboxylic acids, primary amides, or alcohols. In constructing the supramolecular synthons, only hydrogen-bonding interactions were considered. Of 450 CSD hits containing an azole and a carboxylic acid, only 102 entries formed hetero-synthon supramolecules with carboxylic acids. These results indicate that hetero-synthon supramolecules predominate over homo-synthons, a result shown consistently for azole–carboxylic acid and azole–alcohol pairs; the exception was primary amides, which form more homo-synthon supramolecules. From these results, aspirin (ASP), an aromatic carboxylic acid, was selected as the co-former for co-crystal formation. However, based on the FDA approval documents, oral administration of MLX with ASP at a 1000 mg/day dose is not recommended because it can increase the AUC and Cmax of MLX, although no side effects of this combination have been reported to date. Furthermore, in selecting the co-former, safety is the most crucial factor: the co-former established in the co-crystal formation must be safe or non-toxic in the amount required at the therapeutic drug dose. Most co-crystal development uses co-formers registered as chemical additives deemed safe for human consumption, known as GRAS. This list includes various chemicals, including aldehydes, alcohols, carboxylic acids, amides, and sweeteners.
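The partial-solubility-parameter screen above can be summarized numerically: the total Hansen parameter is δt = (δd² + δp² + δh²)^1/2, and the Greenhalgh criterion treats a difference |Δδt| below about 7 MPa^0.5 as indicating a miscible, and hence promising, drug–co-former pair. A minimal Python sketch with hypothetical partial parameters (not values computed for MLX or ASP):

```python
import math

def hansen_total(delta_d, delta_p, delta_h):
    """Total Hansen solubility parameter (MPa^0.5) from partial parameters."""
    return math.sqrt(delta_d**2 + delta_p**2 + delta_h**2)

def likely_miscible(drug, coformer, threshold=7.0):
    """Greenhalgh criterion: miscible if |delta_t difference| < 7 MPa^0.5."""
    return abs(hansen_total(*drug) - hansen_total(*coformer)) < threshold

# Hypothetical (delta_d, delta_p, delta_h) tuples in MPa^0.5
drug = (18.5, 8.0, 9.0)
coformer = (17.0, 10.0, 8.0)
print(likely_miscible(drug, coformer))  # True
```

Like the ΔpKa rule, this is a pre-screen only; miscibility by solubility parameters does not guarantee that a co-crystal, rather than a eutectic or solid solution, will form.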
Therefore, the structural diversity and physicochemical properties of the substances on the GRAS list provide an additional means of selecting co-formers. 4.2. Solubility Solubility enhancement is the main motivation for NSAID co-crystal development. However, some co-crystals show no solubility improvement, e.g., niflumic acid co-crystals with sulfamethazine (SFZ). Likewise, ethenzamide (ET) co-crystallization with a nutraceutical hydroxybenzoic acid derivative, sinapic acid (SNP), an antioxidant, antimicrobial, anti-inflammatory, and anti-cancer agent, led to lower solubility than single ET due to the poor solubility of SNP. This finding indicates that the solubility of the co-former greatly influences co-crystal solubility. The solubility of some NSAID co-crystals depends on environmental conditions, such as pH and temperature. ET was found to be soluble at an acidic pH of 1.2 due to chloride salt formation from protonated ET, which forms strong hydrogen bonds and increases the number of dissolved molecules, whereas the co-crystal of this NSAID with 3,5-dihydroxybenzoic acid (DHBA) showed higher solubility at pH 7.4 than at pH 1.2, demonstrating that pH affects solubility. In addition, the conformation a drug adopts during packing of the co-crystal lattice with the co-former can create molecular planarity, which leads to an increase in solubility. The eutectic constant, defined as the ratio of the total co-former concentration to the total drug concentration at equilibrium, can be used to predict co-crystal solubility at different pH values. High-energy thermodynamic forms of drugs, such as ionic salts and drug co-crystals with a highly soluble co-former, provide the driving force to achieve drug supersaturation, called the "spring". Unfortunately, this phenomenon carries a high risk of rapid precipitation, which can leave the solubility worse than before.
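The eutectic constant mentioned above gives a quantitative solubility estimate: for a 1:1 co-crystal, Keu = [co-former]eu/[drug]eu at the eutectic point, and the co-crystal-to-drug solubility ratio is Scc/Sdrug = √Keu, so Keu > 1 signals a solubility advantage. A minimal Python sketch with hypothetical eutectic concentrations (not data from the cited studies):

```python
import math

# For a 1:1 co-crystal, Keu = [coformer]eu / [drug]eu at the eutectic point,
# and the co-crystal-to-drug solubility ratio is Scc / Sdrug = sqrt(Keu).

def cocrystal_solubility_ratio(keu):
    """Scc / Sdrug for a 1:1 co-crystal given the eutectic constant."""
    return math.sqrt(keu)

# Hypothetical eutectic concentrations (mM)
coformer_eu, drug_eu = 12.0, 3.0
keu = coformer_eu / drug_eu
print(keu, cocrystal_solubility_ratio(keu))  # 4.0 2.0
```

Because Keu is measured at a given pH, repeating this calculation across pH values maps out the pH dependence of the co-crystal's solubility advantage.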
Therefore, a combination of excipients that can inhibit or slow down precipitation, forming the "parachute", is required. This approach has been applied by Guzman et al. to increase the oral absorption of a celecoxib (CEL) salt co-crystal in solid preparations. In this study, they screened several surfactant excipients for a parachute effect using two parameters, precipitation time and critical micelle concentration (CMC); the best excipient gave the longest precipitation time at the lowest CMC. Moreover, the excipient must also be able to maintain a supersaturated concentration of CEL in solution. An in vivo study in beagles was carried out to identify the formulation with the best pharmacokinetics compared to marketed CEL (Celebrex). Pluronic F127 showed the best parachute effect, inhibiting precipitation at 37 °C at a CMC of 0.07 mg/mL and slowing the dissolution-mediated conversion of the CEL salt co-crystal into acidic CEL crystals. Pluronic F127 can prevent direct wetting of CEL solids and forms cohesive clots related to its flower-like micelle structure. Based on the in vivo and in vitro results, a CEL co-crystal formulation consisting of vitamin E tocopherol polyethylene glycol succinate, hydroxypropyl cellulose, and Pluronic F127 provided 100% bioavailability with faster absorption than Celebrex. The linear increase in AUC with dose indicated that the formulation had a faster onset at a lower dose. This formulation has also been shown to be stable for three years. Overall, the optimal pairing of a CEL co-crystal with appropriate excipients can increase the drug's solubility. Spring and parachute effects also occur in INC co-crystals with SAC as the co-former: the co-crystal induces rapid dissolution followed by sudden drug precipitation within 60 min.
This phenomenon is certainly a concern for NSAID co-crystals. Using appropriate excipients to inhibit co-crystal precipitation is one way of dealing with it. In addition, supramolecular construction significantly affects co-crystal solubility: the absence of a synthon, with few hydrogen bonds, can cause a decrease in solubility. A study of the co-crystals of MLX with glutaric acid (GLU), MLX with l-malic acid, and MLX with fumaric acid showed decreases in solubility, especially for the MLX–GLU co-crystal, which was 14% lower than pure MLX. In this study, the high solubility of the co-former did not translate into increased solubility of the co-crystal. The absence of NH···O=S interactions between MLX molecules can increase solubility, but this interaction did not correlate with changes in equilibrium Cmax. The poor correlation between the melting point analysis and Cmax shows that the supramolecular arrangement of the co-crystal lattice was inconsistent; intermolecular interactions between MLX molecules linked in a crystal lattice play only a limited role in the thermodynamics of crystal dissolution. 4.3. Permeability A drug's ability to penetrate biological membranes is crucial to its absorption and distribution, and hence to achieving its therapeutic effect. This factor is closely related to bioavailability, so decreased permeability is a problem in the formation of NSAID co-crystals for both oral and transdermal/topical preparations. The development of NSAID co-crystals for topical preparations requires very high permeability and suitable dosage formulations; permeability is strongly influenced by the viscosity of the pharmaceutical preparation, as well as its pH and lipophilic/hydrophilic properties.
For example, an MLX–salicylic acid (SA) co-crystal administered transdermally in a gel showed a significant decrease in permeability, up to 5.2-fold lower than the co-crystal suspension. Supersaturation produced by a reduction in pH can prevent ionization of acidic drug compounds, thereby increasing the free drug in the medium and raising the permeation rate. Permeability is related to lipophilic and hydrophilic properties, which depend on the density of hydrogen bonds formed at the carboxylate and hydroxy groups in the co-crystal. An increase in hydrogen bonding will increase solubility; this was observed as a rise in the permeation rate of the ET–3,5-DHBA co-crystal. In other cases, γ-indomethacin–2-hydroxy-4-methylpyridine and INC–SAC co-crystals increased permeability two-fold compared with ordinary INC. The two co-crystals also decreased the transepithelial electrical resistance, a measure of cell membrane integrity, by 8.5-fold. Loss of the cell membrane monolayer (NCM460) as a barrier to drug absorption can increase the drug's solubility and bioavailability and shorten the time to reach the therapeutic dose of INC. Apart from INC, other NSAID co-crystals have been reported to increase permeability, such as the ethenzamide–saccharin (ET–SAC) co-crystal.

4.4. Intrinsic Dissolution Rate (IDR)

The IDR in an aqueous medium is a crucial parameter in determining the solubility of a co-crystal. IDR is the dissolution rate of a pure substance under constant temperature, pH, and surface area conditions. This parameter correlates more closely with in vivo dynamic dissolution than the solubility test does, and it is a tool for evaluating drug solubility in the BCS. The IDR is the cumulative amount dissolved per unit surface area of the drug preparation plotted against time (unit: mg/cm²/min).
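Operationally, the IDR is the slope of that cumulative-release plot over its linear region; a minimal sketch with hypothetical dissolution data:

```python
# The IDR is the slope of cumulative amount dissolved per unit surface
# area (mg/cm^2) versus time (min). The data points below are
# hypothetical, for illustration only.

def intrinsic_dissolution_rate(times_min, amounts_mg_per_cm2):
    """Least-squares slope of amount/area vs. time, in mg/cm^2/min."""
    n = len(times_min)
    mean_t = sum(times_min) / n
    mean_a = sum(amounts_mg_per_cm2) / n
    num = sum((t - mean_t) * (a - mean_a)
              for t, a in zip(times_min, amounts_mg_per_cm2))
    den = sum((t - mean_t) ** 2 for t in times_min)
    return num / den

times = [0, 5, 10, 15, 20]            # min
amounts = [0.0, 2.5, 5.1, 7.4, 10.0]  # mg/cm^2 (hypothetical)
print(f"IDR ~ {intrinsic_dissolution_rate(times, amounts):.3f} mg/cm^2/min")
```

In practice the regression is restricted to the early, linear portion of the release profile, before surface depletion or saturation bends the curve.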
For example, co-crystallization of IBU with NIC increased the dissolution rate up to 2.5 times that of IBU alone. The tramadol–celecoxib co-crystal (CTC) likewise increased the dissolution rate to three times that of CEL, which can accelerate absorption and increase bioavailability; the maximum concentration of CEL in CTC increased compared with CEL alone. Different co-crystal preparation methods produce different particle size and shape distributions, which affect the IDR. In previous research, co-crystallization between PCM and caffeine (CAF) produced co-crystals A–E, whose IDR values were 1.72, 1.88, 2.42, 2.19, and 2.84 times higher, respectively, than that of PCM (5.06 mg/cm²/min). The IDR can also be comparable to or even lower than that of the single NSAID, as for the IBU–NIC and flurbiprofen (FLU)–NIC co-crystals, whose IDRs were around eight times lower than IBU and five times lower than FLU, respectively.

4.5. pH Microenvironment

The pH microenvironment plays a vital role in the solubility and dissolution of a co-crystal. This is shown by the relationship between the initial pH (pHint), representing the bulk pH, and the equilibrium pH, representing the microenvironment pH. For a co-crystal, pHint reaches equilibrium at the eutectic point, where the solution is highly saturated with both co-crystal and drug. In contrast to single drugs, where pHint = bulk pH, for co-crystals pHint < bulk pH. This shows that co-crystal behavior is highly dependent on pHint; acidic co-formers in particular lower the interface pH. Machado et al. demonstrated that the formation of MLX–MLA and MLX–SA co-crystals could reduce the microenvironment pH to 1.6 and 4.5, respectively. This decrease arises from the ionization behavior of the co-crystal, which changes the solubility and dissolution of the MLX–SA co-crystal.
In this case, the reduction in microenvironment pH by the MLX–MLA co-crystal can reduce the pH dependence of the solubility of the MLX–SA co-crystal. The effect of pH on solubility was also observed for the PXC–SAT co-crystal. PXC gave a sigmoid curve: its solubility did not change up to pH 5 and then increased rapidly because of ionization at alkaline pH, caused by the weakly acidic nature of PXC. In another publication, LNX solubility, in line with its pKa, increased at pH 4.5, whereas the LNX co-crystal showed increased solubility at pH 7.4, indicating pH-dependent ionization. Hence, it is essential to know the pKa values of the drug and co-former in a co-crystal in order to predict the effect of the microenvironment pH.

4.6. Stability

Co-crystal stability is a parameter that determines the potential for further development of pharmaceutical products. NSAID co-crystal development depends significantly on chemical and physical stability, so stability is undoubtedly a major challenge in co-crystallization. In practice, stability testing covers several aspects, namely moisture stress, thermal stress, and chemical stability.

4.6.1. Moisture Stability

Stability tests at 40 °C and humidities of 75% RH, 82% RH, and 96% RH for two weeks showed no change in the mefenamic acid–N-methyl-d-glucamine (MFA–MG) co-crystal, indicating that this co-crystal is stable at high humidity. Such tests show whether water uptake causes molecular damage, which determines co-crystal shelf life and storage conditions. Moisture can cause co-crystal distortion, conversion to a hydrate form, or dissociation. Different preparation methods confer differences in stability, as in the moisture stability study of the PCM co-crystal with the co-former OXA formed by grinding and the PCM–MLA co-crystal formed by solvent evaporation: the PCM–MLA co-crystal was more stable than PCM–OXA because, under wet conditions, OXA tends to convert to its dihydrate form.
This study also concluded that co-crystal stability is related to solvent polarity, with PCM–OXA being more stable in aprotic solvents. Another study showed that the diclofenac–proline co-crystal (DFA–PRO) can also dissociate at humidity levels above 80–90%. This was shown by the low intensity of the co-crystal diffraction peaks and a high-intensity PRO signal after 24 h of storage at 80% RH and 12 h at 90% RH, whereas the co-crystal was stable at 75% RH. These data indicate that DFA co-crystal stability is influenced by humidity; a co-crystal that is unstable at high humidity must be modified and retested before use in other products.

4.6.2. Chemical Stability

Accelerated stability testing is carried out at 40 °C and 75% humidity. Chemical stability is related to degradation of co-crystal components due to incompatibility between the API and the co-former. The PXC–sodium acetate (SAT) co-crystal in tablet form showed no changes in color, odor, hardness, brittleness, drug content, disintegration time, or percentage dissolution after storage at 40 °C/75% RH for three months.

4.6.3. Thermal Stability

This stability test must be carried out at high temperature and pressure to assess co-crystal stability against temperature increases. In several studies, thermal stability was assessed by differential scanning calorimetry (DSC). For example, Oswald et al. prepared PCM co-crystals with several co-formers, i.e., 1,4-dioxane, N-methylmorpholine, morpholine, N,N-dimethylpiperazine, piperazine, and 4,4-bipyridine. Of the co-crystals formed, only PCM–4,4-bipyridine did not decompose at high temperature, owing to the high boiling point of 4,4-bipyridine (578 K). Thus, the boiling point of each co-former significantly influences the co-crystal's thermal stability.
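That boiling-point heuristic can be phrased as a simple screening filter. In the sketch below the co-former names and all boiling points except the 578 K figure quoted above are hypothetical, and the 50 K safety margin is an arbitrary illustrative choice:

```python
# First-pass thermal screen: keep only co-formers whose boiling point
# comfortably exceeds the highest processing temperature, following the
# observation that a high-boiling co-former resists decomposition.
# Names, temperatures, and the margin are illustrative assumptions.

def thermally_robust(coformers, process_temp_k, margin_k=50.0):
    """Co-formers with boiling point >= process temperature + margin (K)."""
    return sorted(name for name, bp_k in coformers.items()
                  if bp_k >= process_temp_k + margin_k)

candidates = {"coformer_A": 374.0, "coformer_B": 578.0, "coformer_C": 402.0}
print(thermally_robust(candidates, process_temp_k=423.0))
```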
Unexpectedly, some co-crystals have a lower melting point than the parent components. This was the case for ketoprofen (KET)–MAL co-crystals in 1:1, 1:2, and 2:1 ratios: DSC analysis showed endothermic melting peaks for the three ratios at 86.6, 79.5, and 86.2 °C, respectively, while the KET melting point was 96.1 °C and MAL had two endothermic peaks at 104.2 and 135.6 °C. Instability caused by co-crystallization was also shown by the CEL–NIC system, whose co-crystals dissociate at 25 °C into a more stable single form. A lower endothermic peak was also observed for the ET–SAC co-crystal, with Tonset = 123.68 °C. In addition to DSC and PXRD analysis, Guerain et al. used low-frequency Raman spectroscopy to investigate findings that could not be explained by PXRD, especially for molecular systems built from low-electron-count atoms. The test, carried out under thermal conditions similar to those of the DSC analysis, produced a low-frequency spectrum (<50 cm−1) indicating that the S-IBU–NIC co-crystal has limited physical stability over a relatively narrow temperature range. The spectra show a glass transition followed by a recrystallization process, characterized by a reduction of the curve, and reveal a transformation at 50 °C that was not detected by DSC; this low-energy phase transition is difficult for DSC to observe at its relatively slow scan rate (0.5 °C/min). The recrystallization of S-IBU–NIC occurred in two successive phase transitions: form A underwent a polymorphic transformation into a unique and stable form B. This indicates the difficulty of achieving a steady state by the recrystallization method, so it is crucial to consider the preparation method when aiming for stable crystals.

4.7.
Mechanical Properties of NSAID API

External forces matter in drug development because the drug undergoes grinding, filling, molding, and compaction of the powder, which can cause physical deformation. Co-crystallization can therefore be used as an alternative crystal packing to improve the mechanical properties of APIs. Mechanical properties can be assessed by nanoindentation, which measures the elastic modulus (E) and hardness (H) of the material under load. E and H are measures of the material's resistance to elastic and plastic deformation; high E and H values indicate a material that resists deformation and is brittle. However, several studies show that co-crystallization does not always improve mechanical properties: the ibuprofen–lysine (IL) co-crystal with polyvinylpyrrolidone K25 (PVP K25) and polyvinylpyrrolidone K30 (PVP K30) as co-formers had compressibility comparable to IL alone. Wicaksono et al. synthesized KET–MLA co-crystals through an interaction at the C=O ketone group between KET and MLA; based on hot-stage microscopy and SEM, the KET–MLA co-crystal formed needle-shaped particles larger than KET, with rough, multi-faceted surfaces. Additionally, Karki et al. applied co-crystallization to increase PCM tabletability with several co-formers, namely OXA, THP, naphthalene, and phenazine; on testing, the PCM co-crystal with the lowest shear stress value also showed low tensile strength. Chattoraj et al. showed that co-crystallization of PXC–SAC decreased plasticity and increased elasticity, significantly reducing the compaction properties of the API and its co-former, which was attributed to suboptimal packing of the co-crystal. A naproxen–nicotinamide (NPX–NIC) co-crystal was produced with low tensile strength (<2 MPa), causing lamination and clumping in tablets.
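Tensile strengths such as the <2 MPa figure above are typically derived from a diametral compression (crushing) test via the Fell–Newton relation σt = 2F/(πDt); a sketch with hypothetical force and tablet dimensions:

```python
import math

# Tablet tensile strength from a diametral compression test, using the
# Fell-Newton relation sigma_t = 2F / (pi * D * t). The crushing force
# and tablet dimensions below are hypothetical, for illustration only.

def tensile_strength_mpa(force_n, diameter_mm, thickness_mm):
    """Tensile strength (MPa = N/mm^2) of a flat-faced cylindrical compact."""
    return 2.0 * force_n / (math.pi * diameter_mm * thickness_mm)

# An 80 N crushing force on a 10 mm x 3 mm compact falls below the
# ~2 MPa level the text associates with lamination problems.
print(f"{tensile_strength_mpa(80.0, 10.0, 3.0):.2f} MPa")
```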
Conversely, several studies have shown that co-crystallization can improve the mechanical properties of NSAID co-crystals, such as PCM–CAF. Liquid-assisted grinding (LAG) proved the best method for modulating the physical, mechanical, and pharmacokinetic properties, yielding a finer PCM–CAF co-crystal powder with a high compressibility index (31.12%) and high tablet hardness. Furthermore, the ratio of API to co-former can affect the mechanical properties, as shown for 1:1 PCM–CAF co-crystals (A, B, and C) and 2:1 PCM–CAF co-crystals (D and E) prepared by solvent evaporation with different solvents. Heckel plots of the co-crystals showed increased compaction due to high plasticity, indicated by the mean yield pressure, the stress at which particles deform during compression. The plasticity values differed because of differences in the molecular packing features of each co-crystal: co-crystals A, B, and C had better plastic properties and higher tensile strength than co-crystals D and E. Improved mechanical properties were also reported for flufenamic acid (FFA)–NIC co-crystals, which showed better tablet properties than FFA.

4.8. Polymorphism

Polymorphs of drug substances have different physical and chemical properties, affecting the safety, quality, and effectiveness of drug products. Co-crystallization can exhibit polymorphism across various crystal structures. Conformational changes during co-crystallization to form efficient hydrogen bonds can lead to polymorphism, e.g., through rotation of the C–C–C–O bond arrangement. For example, there are three polymorphic forms of DIC co-crystallized with pyridine-based co-formers; the conformational flexibility of DIC in salts and co-crystals allows the molecules to adopt different solid-state conformations. Several studies have reported co-crystal polymorphs, such as an experiment conducted by Childs et al.
The co-crystallization of PXC with 4-hydroxybenzoic acid (1:1) produced two polymorphs: polymorph I contained the non-ionized PXC tautomer with the co-former, and polymorph II the zwitterionic tautomer. The co-crystals formed in this study included 2:1 piroxicam/succinic acid, 1:1 piroxicam/1-hydroxy-2-naphthoic acid, 1:1 piroxicam/caprylic acid, 1:1 piroxicam/malonic acid, 4:1 piroxicam/fumaric acid, and 1:1 piroxicam/benzoic acid. Polymorph formation is thus a considerable challenge, because transformations during pharmaceutical processing can change the drug's performance. Humidity and mechanical stress are the main factors leading to polymorph formation; for instance, NPX–picolinamide (PA) produced two polymorphs, α and β. The α polymorph was unstable and tended to undergo a phase transition to the β form at 95 °C. Identification of the polymorphs by very fast magic-angle spinning (MAS) NMR, an advance in MAS NMR technology, showed that the two polymorphs had different structural properties based on heteronuclear correlation (HETCOR) experiments. It was concluded that the α and β co-crystals had different hydrogen-bonding patterns: the α co-crystal had a type I synthon (NPX as the acceptor, the amide group of PA as the donor), while the β co-crystal had a type II synthon resulting from structural rearrangement during thermal processing below the melting point. The co-crystallization of ET with a resorcylic acid, 2,4-DHBA, produced three polymorphs with different conformations of the amide groups, which conferred different physicochemical properties. Polymorph I of ET–2,4-DHBA (1:1) was obtained from a solution without formic acid, whereas the addition of formic acid produced polymorph II; polymorph III was formed from a 2:1 mixture of ET and 2,4-DHBA with formic acid in the solution. Polymorphs I and II had good stability, a non-hygroscopic character, and higher solubility at pH 7.4.
In this study, the solubility increase was related to the decrease in melting point, reflecting a lower lattice energy. In addition, the co-crystal polymorphs had a smaller particle size than ET, so their solubility was better than that of ET.

4.9. Development of NSAID Formulations

There are three main stages in producing a co-crystal product: formulation, processing, and packaging to maintain co-crystal stability, and each step raises new challenges in different environments. Hence, preformulation must be performed first. An essential preformulation step is to ensure that the excipients are compatible with the co-crystal: incompatibility of an excipient with the co-crystal, and especially with its co-former, can lead to co-crystal destruction and hydrolysis, causing physical instability and chemical incompatibility. Crucially, a suitable excipient can increase solubility, stability, dissolution, and oral absorption. Remenar et al. studied PVP K-30 and sodium dodecyl sulfate (SDS) as excipients for CEL–NIC co-crystal formulas. Low surfactant concentrations converted the co-crystal into large aggregates of CEL-III, which reduced the dissolution rate; however, the addition of 1–10% solid SDS and PVP converted the crystalline form into amorphous and micron-sized crystalline forms (CEL-IV), increasing the bioavailability up to four times that of CEL in Celebrex. All co-crystals dissolved within 2 min in 1% SDS solution, showing that this formulation has the potential to overcome the "spring and parachute" problem. Lactose monohydrate (LA), potato starch (PS), and potassium bromide (PR) are known to prevent polymorphic changes of ethenzamide–gentisic acid form 2 (ETGA 2), which was less stable under the high pressure of tableting than ETGA 1, as indicated by the absence of peak shifts and splitting in 13C and 15N solid-state nuclear magnetic resonance (SSNMR) and FTIR.
ETGA 1 was produced by slow evaporation from acetonitrile at room temperature, while ETGA 2 was made using a 1:1 toluene–acetonitrile solvent mixture. Dissolution testing showed that almost all co-crystal formulas (ETGA 1 and ETGA 2) with the excipients (LA, PS, and PR), except ETGA with LA, dissolved faster and to a greater extent than ET. Panzade et al. optimized a piroxicam–sodium acetate (SAT) co-crystal formulation to obtain orodispersible tablets. A preliminary study using a 3² factorial design optimized the concentrations of the super-disintegrant (croscarmellose sodium) and the binder (PVP K-30) as the formulation factors for evaluation. Various pre-compression and post-compression parameters indicated that all formulated tablets had uniform weight, with acceptable weight variation and thickness. Hardness was 3.2–3.6 kg/cm² for all formulas, and friability was between 0.72% and 0.86%, indicating good mechanical resistance. The API content of the orodispersible tablets was 98.04–99.48%, within acceptable limits. F1 was the optimal formula, with a disintegration time of 29 ± 0.12 s, a wetting time of 21 ± 0.58 s, a maximum water absorption ratio of 97.65 ± 0.25%, and maximum drug release in the dissolution test of 93.69 ± 0.12%. F1 was also stable in accelerated stability testing. ANOVA supported these results, showing that the super-disintegrant and binder concentrations had a significant effect (p < 0.05) on the disintegration and dissolution times. Of the nine polymorphs of flufenamic acid (FFA), the most widely used in formulation are form 1 (FFA 1) and form 3 (FFA 3). Guo et al. constructed co-crystals from FFA 1 with the co-formers NIC and THP.
The polymers used were polyethylene glycol (PEG), PVP, and polyvinylpyrrolidone–vinyl acetate. Co-crystal formation was confirmed by shifts and new peaks in the PXRD diffraction pattern and FTIR spectrum and by the difference between the melting points of the co-crystals and the parent compound. The solubility of the FFA–NIC co-crystal in 1:4 ethanol:water was higher than that of FFA and of FFA–THP, and the presence of the polymers in the solvent did not change the solubility behavior. Analysis of the insoluble residue showed that the co-crystal had converted to FFA 3; DSC showed that FFA 3 melted at 123.1 °C and, following recrystallization, melted at 134.4 °C. The development process requires identifying the parameters and factors that can affect the formation of both the co-crystal and the pharmaceutical product. During co-crystal formation, knowledge of the physical and chemical properties of the starting materials is undoubtedly needed, as shown in the development of sodium naproxen–lactose-tetrahydrate (S-NPX–LT) co-crystals. On heating, the S-NPX–LT co-crystal lost water molecules at 60–120 °C, transforming into a co-amorphous system that could revert to the co-crystalline form at high humidity. This shows that the water molecules are critical to stabilizing the co-crystal packing and illustrates the challenges of process development for co-crystallization. Choosing a co-former compatible with the API is another challenge of co-crystal formation. To date, co-former selection has mostly been done by screening candidate co-crystals and keeping those with the best physicochemical and pharmacological properties, with the co-formers themselves chosen directly by trial and error.
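One quantitative alternative to pure trial and error is to rank candidate co-formers by the distance between their total solubility parameter and the drug's, with a difference below about 7 MPa^0.5 commonly read as likely miscibility. All δ values in this sketch are hypothetical, not literature data:

```python
import math

# Rank candidate co-formers by closeness of the total solubility
# parameter (from dispersive, polar, and hydrogen-bonding parts) to the
# drug's. All delta values are hypothetical, for illustration only.

def total_delta(dd, dp, dh):
    """Total solubility parameter: sqrt(dd^2 + dp^2 + dh^2), MPa^0.5."""
    return math.sqrt(dd ** 2 + dp ** 2 + dh ** 2)

def rank_coformers(drug, coformers):
    """Sort co-formers by |delta_t(coformer) - delta_t(drug)|, ascending."""
    drug_dt = total_delta(*drug)
    return sorted(((name, abs(total_delta(*parts) - drug_dt))
                   for name, parts in coformers.items()),
                  key=lambda kv: kv[1])

drug = (18.0, 6.0, 8.0)  # hypothetical (dd, dp, dh)
coformers = {"coformer_A": (17.5, 7.0, 9.0), "coformer_B": (15.0, 12.0, 20.0)}
for name, diff in rank_coformers(drug, coformers):
    print(f"{name}: |d_delta_t| = {diff:.2f} MPa^0.5")
```

Such a screen only shortlists candidates; safety and experimental verification of co-crystal formation still decide the final choice.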
An alternative method that is often used is the supramolecular synthon approach, in which co-formers are prioritized based on analysis of data from the Cambridge Structural Database (CSD). Several parameters should be considered in selecting co-formers. The strength of the hydrogen bonds and the Hansen solubility parameter are used to predict the theoretical solubility of drugs and co-formers, based on calculating partial solubility parameters with the van Krevelen–Hoftyzer, Bagley, and Greenhalgh approaches. Cheney et al. conducted a study on co-former selection for meloxicam (MLX) co-crystal formation to increase solubility and improve pharmacokinetics. The co-former selection used the supramolecular synthon approach, analyzing the CSD to assess the reliability of hetero-synthon or homo-synthon formation between the azole group of MLX and carboxylic acids, primary amides, or alcohols; in constructing the supramolecules, only hydrogen-bonding interactions were considered. Of 450 CSD hits containing an azole and a carboxylic acid, 102 entries formed hetero-synthon supramolecules with the carboxylic acid, indicating that hetero-synthons form predominantly over homo-synthons. This result held consistently for azole–carboxylic acid and azole–alcohol pairs; the exception was primary amides, which formed more homo-synthon supramolecules. From these results, aspirin (ASP), an aromatic carboxylic acid, was selected as the co-former. However, based on FDA approval documents, oral administration of MLX with ASP at a 1000 mg/day dose is not recommended because it can increase the AUC and Cmax of MLX, although no side effects of this combination have been reported to date. Furthermore, the safety of the co-former is the most crucial factor in its selection.
The co-former used in co-crystal formation must be safe and non-toxic in the amount required at the therapeutic drug dose. Most co-crystal development uses co-formers registered as chemical additives deemed safe for human consumption, known as the GRAS (generally recognized as safe) list. This list includes various chemicals, including aldehydes, alcohols, carboxylic acids, amides, and sweeteners; the structural diversity and physicochemical properties of the substances on the GRAS list therefore provide an additional means of selecting co-formers. Solubility enhancement is the main motivation for NSAID co-crystal development, yet some co-crystals show no solubility improvement, e.g., niflumic acid co-crystals with sulfamethazine (SFZ). Likewise, co-crystallization of ethenzamide (ET) with a nutraceutical hydroxybenzoic acid derivative, sinapic acid (SNP), an antioxidant, antimicrobial, anti-inflammatory, and anti-cancer agent, led to lower solubility than ET alone owing to the poor solubility of SNP; the solubility of the co-former thus greatly influences co-crystal solubility. The solubility of some NSAID co-crystals also depends on environmental conditions, such as pH and temperature. ET was found to be soluble at the acidic pH of 1.2 because protonated ET forms a chloride salt with strong hydrogen bonds, increasing the number of dissolved molecules, whereas co-crystallization of ET with 3,5-dihydroxybenzoic acid (3,5-DHBA) showed higher solubility at pH 7.4 than at pH 1.2, confirming that pH affects solubility. In addition, the conformation a drug adopts during packing of the co-crystal lattice with the co-former can result in molecular planarity, which leads to an increase in solubility. The eutectic constant, defined as the ratio of the total co-former concentration to the total drug concentration at equilibrium, can be used to predict solubility at different pH values.
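The eutectic-constant prediction can be sketched numerically for the common 1:1 case, under ideal-solution assumptions and with hypothetical eutectic concentrations:

```python
import math

# For a 1:1 co-crystal under ideal-solution assumptions, the eutectic
# constant Keu = [coformer]eu / [drug]eu, and the co-crystal solubility
# is Scc = sqrt([drug]eu * [coformer]eu); Keu > 1 then implies the
# co-crystal is more soluble than the drug under those conditions.
# The eutectic concentrations below are hypothetical.

def eutectic_constant(c_coformer_eu, c_drug_eu):
    """Keu: total co-former over total drug concentration at the eutectic."""
    return c_coformer_eu / c_drug_eu

def cocrystal_solubility_1to1(c_drug_eu, c_coformer_eu):
    """Ideal 1:1 co-crystal solubility estimate (same units as inputs)."""
    return math.sqrt(c_drug_eu * c_coformer_eu)

c_drug, c_coformer = 0.002, 0.018  # mol/L at the eutectic (hypothetical)
keu = eutectic_constant(c_coformer, c_drug)
scc = cocrystal_solubility_1to1(c_drug, c_coformer)
print(f"Keu = {keu:.1f}; Scc ~ {scc:.4f} mol/L ({math.sqrt(keu):.1f}x drug)")
```

Because Keu is measured at each pH of interest, repeating this calculation across pH values maps out the pH dependence of the co-crystal solubility advantage.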
Thermodynamic forms of high-energy drugs such as ionic salts and drug co-crystals with a highly soluble co-former provide the driving force to achieve drug supersaturation, called the “spring”. Unfortunately, this phenomenon carries a high risk of accelerating precipitation, causing worse solubility than before. Therefore, a combination of excipients that can inhibit or slow down the precipitation or formation of “parachute” crystals is required . This approach has been applied by Guzman et al. to increase the oral absorption of celecoxib (CEL) salt co-crystal in solid preparations . In this study, they conducted experiments with several excipients in the form of a surfactant mixture to obtain a parachute effect, namely the parameters of deposition time and critical micelle concentration (CMC). The deposition time was the longest, with the lowest CMC concentration. Moreover, the excipient must also be able to increase a saturated concentration of CEL in the solution. A study was carried out in vivo using beagles to determine the drug formulation with the best pharmacokinetics compared to marketed CEL, namely Celebrex. This study found that the Pluronic F127 excipient showed the best parachute effect with the inhibition of precipitation at 37 °C and a CMC concentration of 0.07 mg/mL and slowed down the dissolution or change of the CEL salt co-crystal into acidic CEL crystals. Pluronic F127 can prevent the direct wetting of CEL solids and form cohesive clots related to its flower-like form. Based on in vivo and in vitro testing results, CEL co-crystals formulation consists of vitamin E tocopherol polyethylene glycol succinate, hydroxypropyl cellulose, and Pluronic F127 and provided 100% bioavailability with faster absorption compared to Celebrex. A linear increase in the AUC with dose indicated that the formulation had a quicker onset and a lower dose. This formulation has also been tested for stability for three years. 
In addition, the optimal arrangement of CEL co-crystal with appropriate excipient help can increase the drug’s solubility . Spring and parachute effects also occur in INC co-crystals with SAC as the co-former. The co-crystal induces rapid dissolution followed by sudden drug precipitation within 60 min. This phenomenon is certainly a concern in NSAID co-crystals . Using appropriate excipients to inhibit co-crystal precipitation is one way of dealing with this phenomenon . In addition, the formation of a co-crystal significantly affects the solubility in the construction of a supramolecular matter where the absence of a synthon with few hydrogen bonds can cause a decrease in solubility. A study of the co-crystals of meloxicam (MLX) with glutaric acid (GLU), MLX with l -malic acid, and MLX with fumaric acid showed decrease in solubility, especially for MLX–GLU co-crystal, i.e., 14% lower than that of pure MLX. In this study, the high solubility of the co-former did not translate into increased solubility of the co-crystal. The absence of NH···O = S interactions between MLX can increase solubility, but this interaction was not correlated with equilibrium Cmax changes. The poor correlation between the melting point analysis and Cmax shows that the co-crystal lattice’s supramolecular arrangement was inconsistent. Intermolecular interactions between MLX molecules linked in a crystal lattice play only a limited role in crystal dissolution thermodynamics . A drug’s ability to penetrate biological membranes is crucial to increasing absorption and distribution to achieve its therapeutic effect. This factor is closely related to the bioavailability of a drug , so that decreased permeability is a problem in the formation of NSAID co-crystals for both oral and transdermal/topical preparations. The development of NSAID co-crystals for topical preparations requires very high permeability and suitable dosage formulations. 
The permeability will be strongly influenced by the viscosity of the pharmaceutical preparation, as well as its pH and lipophilic/hydrophilic properties. For example, MLX–salicylic acid (SA) co-crystal in a gel was administered transdermally and showed a significant decrease in permeability, i.e., up to 5.2-fold lower than the co-crystal suspension. The formation of supersaturation due to a reduction in pH can prevent the ionization of acidic medicinal compounds, thereby increasing free drugs in the medium and increasing the permeation rate . Permeability is related to lipophilic and hydrophilic properties, which depend on the density of hydrogen bonds formed on the carboxylate and hydroxy groups in the co-crystal. An increase in hydrogen bonding will increase solubility; this was observed as a rise in the ET and 3.5 DHBA co-crystal permeation rate . In the other cases, γ-indomethacin-2-hydroxy-4-methylpyridine and INC–SAC co-crystals increased permeability by two-fold compared to that of ordinary INC. The two co-crystals decreased the transepithelial electrical resistance value as a function of cell membrane integrity, i.e., 8.5-fold. Loss of the cell membrane monolayer (NCM460) as a barrier to drug absorption in the body can increase the drug’s solubility and bioavailability and speed up the time to reach the therapeutic dose of INC . Apart from INC, some co-crystal NSAID drugs have been reported to increase permeability, such as the ethenzamide–saccharine (ET-SAC) co-crystal . IDR in an aqueous medium is a crucial parameter in determining the solubility of a co-crystal. IDR is the dissolution rate of a pure substance under constant temperature, pH, and surface area conditions. This parameter provides a more significant correlation with in vivo dynamic dissolution than the solubility test. IDR is a tool for evaluating drug solubility in the BCS . 
IDR is described as the cumulative amount of dissolved per unit surface of the drug preparation plotted against time in units (unit: mg/cm 2 /min). For example, co-crystallization of IBU with NIC could increase the dissolution rate by up to 2.5 times that of the single IBU . The tramadol–celecoxib co-crystal (CTC) also increased dissolution rate three times more than CEL, which can accelerate absorption and increase bioavailability. The maximum concentration of CEL in CTC increased, compared to CEL alone . Different co-crystal preparation methods result in various particle sizes and shape distributions, affecting the IDR. Based on previous research, co-crystallization between PCM and caffeine (CAF) produces co-crystals A, B, C, D, E, which showed increased IDR values 1.72, 1.88, 2.42, 2.19, and 2.84 times higher than that of IDR from PCM (5.06 mg/cm/min) . The IDR can be comparable to or even less than the dissolution rate of a single NSAID drug, such as the IBU-NIC and flurbiprofen (FLU)–NIC co-crystals, which prolonged IDR around eight times lower than IBU and five times lower than FLU . The pH microenvironment plays a vital role in the solubility and dissolution of the co-crystal. This is shown by the initial pH (pHint) relationship representing the bulk pH, with equilibrium pH representing the microenvironment pH . In a co-crystal, pHint will reach equilibrium at the eutectic time, in which the solution is highly saturated with the co-crystal and drug. In contrast to drugs, where pHint = bulk pH, for co-crystals pHint <bulk pH. This statement shows that the co-crystal is highly dependent on pHint, especially for acidic co-formers, lowering the interface pH. Machado et al. demonstrated that the formation of MLX–MLA co-crystals and MLX–SA could reduce the microenvironment pH to 1.6 and 4.5, respectively . This decrease occurs due to the nature of co-crystal ionization, which causes changes in solubility and dissolution in the MLX–SA co-crystal. 
In this case, reducing the microenvironment pH of the MLX–MLA co-crystal can reduce the solubility dependence of the MLX–SA co-crystal on pH. The effect of pH on solubility also occurred in the PXC–SAT co-crystal. PXC showed a sigmoid solubility curve: its solubility did not change until pH 5 and then increased rapidly due to ionization at alkaline pH, a consequence of the weak-acid nature of PXC. In another publication, LNX solubility, in relation to its pKa, increased at pH 4.5, whereas the LNX co-crystal increased at pH 7.4, indicating pH-dependent ionization. It is therefore essential to know the pKa of the drug and co-former in a co-crystal to predict the effect of the microenvironment pH.

Co-crystal stability is a parameter that determines the potential for further development of pharmaceutical products. NSAID co-crystal development depends significantly on chemical and physical stability, so stability is one of the major challenges in co-crystallization. In practice, stability testing covers several aspects, namely moisture stress, thermal stress, and chemical stability.

4.6.1. Moisture Stability

Stability testing at 40 °C and humidities of 75% RH, 82% RH, and 96% RH for two weeks showed no change in the mefenamic acid–N-methyl-d-glucamine (MFA–MG) co-crystal, indicating that the co-crystal is stable at high humidity. Such tests provide information on water-induced molecular damage, which determines co-crystal shelf life and storage conditions. Moisture can cause co-crystal distortion, conversion to a hydrate form, or dissociation. Different preparation methods conferred differences in stability, as in the moisture-stability study of the PCM–OXA co-crystal formed by grinding and the PCM–MLA co-crystal formed by solvent evaporation: the PCM–MLA co-crystal was more stable than PCM–OXA because, under wet conditions, OXA tended to convert to its dihydrate form.
This study also concluded that co-crystal stability is related to solvent polarity, with PCM–OXA being more stable in aprotic solvents. Another study showed that the diclofenac–proline (DFA–PRO) co-crystal could dissociate at humidity levels above 80–90%. This was evident from the low intensity of the co-crystal diffraction peaks and a high-intensity PRO signal after 24 h of storage at 80% RH and 12 h at 90% RH; the co-crystal remained stable at 75% RH. These data suggest that DFA co-crystal stability is influenced by humidity. A co-crystal that is unstable at high humidity must be modified and retested in other products.

4.6.2. Chemical Stability

Accelerated stability testing is carried out at 40 °C and 75% humidity. Chemical stability relates to degradation of the co-crystal components due to incompatibility between the API and the co-former. The PXC–sodium acetate (SAT) co-crystal in tablet form showed no changes in color, odor, hardness, friability, drug content, disintegration time, or percentage dissolution after storage at 40 °C/75% RH for three months.

4.6.3. Thermal Stability

This stability test must be carried out at high temperature and pressure to assess co-crystal stability against temperature increases. In several studies, stability testing was performed with differential scanning calorimetry (DSC) to observe co-crystal thermal stability. For example, Oswald et al. prepared PCM co-crystals with several co-formers, i.e., 1,4-dioxane, N-methylmorpholine, morpholine, N,N-dimethylpiperazine, piperazine, and 4,4′-bipyridine. Of the co-crystals formed, only PCM–4,4′-bipyridine did not decompose at high temperature, owing to the high boiling point of 4,4′-bipyridine (578 K). Thus, the boiling point of each co-former significantly influences the co-crystal's thermal stability.
There are unexpected cases in which co-crystals have a lower melting point than the parent components. This occurred in ketoprofen (KET)–MAL co-crystals in the ratios 1:1, 1:2, and 2:1. In the DSC analysis of the KET–MAL co-crystals, the endothermic peaks showed melting points for the respective ratios of 86.6, 79.5, and 86.2 °C, while the KET melting point was 96.1 °C and MAL had two endothermic peaks at 104.2 and 135.6 °C. Instability due to co-crystallization was also shown by the CEL–NIC system, in which the co-crystals dissociate at 25 °C into more stable single forms. A lower endothermic temperature peak was also observed in the ET–SAC co-crystal, with Tonset = 123.68 °C.

In addition to DSC and PXRD analysis, Guerain et al. used low-frequency Raman spectroscopy to investigate findings that could not be explained by PXRD, especially for molecular systems composed of low-electron-count atoms. The test was carried out under thermal conditions similar to those used in DSC analysis and produced a low-frequency spectrum (<50 cm−1) reporting on the state of the S-IBU–NIC co-crystal, which showed limited physical stability over a relatively narrow temperature range. The spectra show a glass transition followed by a recrystallization process, characterized by a reduction in the curve, and reveal a transformation at 50 °C that was not detected in the DSC analysis. This is because such low-energy phase transitions are difficult for DSC to observe at its relatively slow scan rate (0.5 °C/min). The recrystallization of S-IBU–NIC occurred in two successive phase transitions: form A underwent a polymorphic transformation into a unique and stable form B. This indicates the difficulty of achieving a steady state by the recrystallization method, so it is crucial to consider the preparation method when aiming for stable crystals.
External forces are important in drug development because the drug undergoes grinding, filling, molding, and compaction of the powder, which can cause physical deformation. The co-crystallization technique can therefore be used as an alternative means of crystal packing and of improving the mechanical properties of APIs. Mechanical properties can be assessed by nanoindentation, which measures the elastic modulus (E) and hardness (H) of the material under load. E and H are measures of a material's resistance to elastic and plastic deformation; high E and H values indicate that the material resists deformation and is brittle. However, several studies show that co-crystallization does not always improve mechanical properties: the ibuprofen–lysine (IL) co-crystal with polyvinylpyrrolidone K25 (PVP K25) and polyvinylpyrrolidone K30 (PVP K30) as co-formers has compressibility properties merely comparable to IL. Wicaksono et al. synthesized KET–MLA co-crystals through an interaction involving the C=O ketone group of KET and MLA. Based on hot-stage microscopy and SEM, the KET–MLA co-crystal produced needle-shaped particles, larger than KET, with rough, multi-faceted surfaces. Additionally, Karki et al.
applied a co-crystallization technique to increase PCM tabletability with several co-formers, namely OXA, THP, naphthalene, and phenazine; the PCM co-crystal with the lowest shear stress also showed low tensile strength in testing. Chattoraj et al. showed that co-crystallization of PXC–SAC decreased plasticity and increased elasticity, which significantly reduced the compaction properties relative to the API and its co-former, an effect thought to be caused by suboptimal packing of the co-crystal. A naproxen–nicotinamide (NPX–NIC) co-crystal with low tensile strength (<2 MPa) caused lamination and clumping in tablets. Conversely, several studies have shown that co-crystallization can improve the mechanical properties of NSAID co-crystals such as PCM–CAF. Liquid-assisted grinding (LAG) proved the best method for modulating the physical, mechanical, and pharmacokinetic properties, yielding a finer PCM–CAF co-crystal powder with a high compressibility index (31.12%) and high tablet hardness. Furthermore, the API-to-co-former ratio can affect the mechanical properties, as shown for PCM–CAF 1:1 co-crystals (A, B, and C) and PCM–CAF 2:1 co-crystals (D and E) prepared by solvent evaporation with different solvents. The Heckel plots of the co-crystals showed increased compaction due to high plasticity, indicated by the mean yield pressure, the stress at which particle deformation occurs during compression. The plasticity differed between crystals because of differences in the molecular packing of each co-crystal. Co-crystals A, B, and C had better plastic properties and higher tensile strength than co-crystals D and E. Improved mechanical properties were also observed for flufenamic acid (FFA)–NIC co-crystals, which showed better tableting properties than FFA.
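As a side note on the compressibility index quoted above for PCM–CAF (31.12%), Carr's index is derived from bulk and tapped powder densities. A sketch with illustrative densities chosen only to land near that value (not measured data from the cited study):

```python
# Sketch: Carr's compressibility index from bulk and tapped density.
# The densities below are illustrative, not measured values from the cited study.

def carr_index(bulk_density, tapped_density):
    """Carr's index (%) = (tapped - bulk) / tapped * 100."""
    return (tapped_density - bulk_density) / tapped_density * 100

ci = carr_index(0.40, 0.58)  # g/mL for a hypothetical powder
print(f"Carr's index ~ {ci:.1f}%")
```

An index above roughly 25% is conventionally read as poor flow but high compressibility, consistent with the high tablet hardness reported for the LAG-processed PCM–CAF powder.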
Polymorphs of drug substances have different physical and chemical properties, affecting the safety, quality, and effectiveness of drug products. Co-crystals can also exhibit polymorphism, adopting various crystal structures. Conformational changes during co-crystallization to form efficient hydrogen bonds can lead to polymorphism, e.g., through rotation of the C–C–C–O bond arrangement. For example, there are three polymorphic forms of DIC co-crystals with pyridine-based co-formers; the conformational flexibility of DIC in salts and co-crystals allows the molecules to adopt different solid-state conformations. Several studies have reported co-crystal polymorphs, such as an experiment conducted by Childs et al. The co-crystallization of PXC/4-hydroxybenzoate (1:1) produced two polymorphs: polymorph I contained the un-ionized PXC tautomer with the co-former, and polymorph II the zwitterionic tautomer. The co-crystals formed in this study included 2:1 piroxicam/succinic acid, 1:1 piroxicam/1-hydroxy-2-naphthoic acid, 1:1 piroxicam/caprylic acid, 1:1 piroxicam/malonic acid, 4:1 piroxicam/fumaric acid, and 1:1 piroxicam/benzoic acid. The formation of polymorphs is a considerable challenge because transformation can occur during pharmaceutical processing and change the drug's performance. Humidity and mechanical stress are the main factors leading to polymorph formation; for example, NPX–picolinamide (PA) produced two polymorphs, α and β. The α polymorph was unstable and tended to undergo a phase transition to the β form at 95 °C. Identification of the polymorphs by very fast magic-angle-spinning (MAS) nuclear magnetic resonance, an advancement in NMR technology, showed that the two polymorphs had different structural properties, based on the results of heteronuclear correlation (HETCOR) experiments.
It was concluded that the α and β co-crystals had different hydrogen-bonding patterns: the α co-crystal had a type I synthon (NPX as the acceptor, the amide group of PA as the donor), while the β co-crystal had a type II synthon arising from structural rearrangement during thermal processing below the melting point. The co-crystallization of ET with a resorcylic acid, namely 2,4-DHBA, produced three polymorphs with different conformations of the amide groups, which conferred different physicochemical properties. Polymorph I, from ET and 2,4-DHBA (1:1), was obtained from a solution without formic acid, while the addition of formic acid produced polymorph II. Lastly, polymorph III was formed from a 2:1 mixture of ET and 2,4-DHBA with formic acid in the solution. Polymorphs I and II had good stability, a non-hygroscopic character, and higher solubility at pH 7.4. In this study, the increase in solubility was related to a decrease in melting point, reflecting lower lattice energy. In addition, the co-crystal polymorphs had a smaller particle size than ET, so their solubility was better than that of ET.

There are three main stages in producing a co-crystal product: formulation, processing, and packaging to maintain co-crystal stability. These steps create many new challenges in different environments, so it is first necessary to perform a preformulation study. An essential preformulation step is to ensure that the excipients are compatible with the co-crystal. Incompatibility of an excipient with the co-crystal, especially its co-former, can lead to co-crystal destruction and hydrolysis, causing physical instability and chemical incompatibility. Crucially, the addition of a suitable excipient can increase solubility, stability, dissolution, and oral absorption. Remenar et al. studied PVP-K30 and sodium dodecyl sulfate (SDS) as excipients for CEL–NIC co-crystal formulations.
The results showed that low surfactant concentrations allowed conversion to large aggregates of CEL form III, which reduced the dissolution rate. However, the addition of 1–10% solid SDS and PVP converted the crystalline form into amorphous and micron-sized crystalline forms (CEL-IV), which increased bioavailability up to four times over the CEL in Celebrex. All co-crystals dissolved within 2 min in 1% SDS solution, showing that this co-crystal formulation has the potential to overcome the "spring and parachute" effect. Lactose monohydrate (LA), potato starch (PS), and potassium bromide (PR) are known to prevent polymorphic change of the ethenzamide–gentisic acid co-crystal form 2 (ETGA 2), which is less stable under the high pressure of the tableting process than ETGA 1; this was indicated by the absence of peak shifts and splitting in 13C and 15N solid-state nuclear magnetic resonance (SSNMR) and FTIR spectra. ETGA 1 was produced by slow evaporation from acetonitrile at room temperature, while ETGA 2 was made using a 1:1 toluene–acetonitrile solvent mixture. The dissolution test showed that almost all co-crystal formulas (ETGA 1 and ETGA 2) with excipients (LA, PS, and PR), except for ETGA with LA, dissolved faster and to a greater extent than ET.

Panzade et al. optimized a piroxicam–sodium acetate (SAT) co-crystal formulation to obtain orodispersible tablets. A preliminary study was carried out with a 3² factorial design to optimize the concentrations of the super-disintegrant (croscarmellose sodium) and the binder (PVP K-30) as formulation factors for evaluation. Various pre-compression and post-compression parameters indicated that all formulated tablets had uniform weight with acceptable weight variation and thickness. The hardness of all formulas was 3.2–3.6 kg/cm², and the friability was between 0.72% and 0.86%, indicating that the tablets had good mechanical resistance. The API content of the orodispersible tablets was around 98.04–99.48%, within acceptable limits.
F1 was the optimal formula, with a disintegration time of 29 ± 0.12 s, a wetting time of 21 ± 0.58 s, a maximum water absorption ratio of 97.65 ± 0.25%, and a maximum dissolved concentration of 93.69 ± 0.12% in the tablet dissolution test. F1 was also stable in accelerated stability testing. ANOVA supported these results for disintegration and dissolution times as functions of the super-disintegrant and binder used; optimization of the super-disintegrant and binder concentrations had a significant effect (p < 0.05) on disintegration and dissolution times.

Of the nine polymorphs of flufenamic acid (FFA), the most widely used in formulation are flufenamic acid form 1 (FFA 1) and form 3 (FFA 3). Guo et al. constructed co-crystals from FFA 1 with the co-formers NIC and THP; the polymers used were polyethylene glycol (PEG), PVP, and polyvinylpyrrolidone–vinyl acetate. Co-crystal formation was confirmed by shifts and new peaks in the PXRD diffraction pattern and FTIR spectrum and by the difference in melting point between the co-crystals and the parent compound. The solubility of the FFA–NIC co-crystal in ethanol:water (1:4) was higher than that of FFA and FFA–THP, and the presence of the polymers in the solvent did not change the solubility properties. Residue testing of the insoluble material showed that the co-crystal changed into FFA 3; DSC results showed that FFA 3 melted at 123.1 °C and, following recrystallization into FFA, melted at 134.4 °C.

The development process requires identifying parameters and factors that can affect the formation of both the co-crystal and the pharmaceutical preparation. During co-crystal formation, knowledge of the physical and chemical properties of the starting materials is undoubtedly needed, as shown in the development process for sodium naproxen–lactose tetrahydrate (S-NPX–LT) co-crystals.
During heating, the S-NPX–LT co-crystal lost water molecules at 60–120 °C, so the co-crystal transformed into a co-amorphous system that could revert to the co-crystalline form at high humidity. This study showed that water molecules are critical to stabilizing the co-crystal packing, and it illustrates the challenges in process development for co-crystallization.

Co-crystallization is a solid-formation process in crystalline form that is very promising for NSAID development owing to its ability to improve physicochemical and mechanical properties. Safer and more effective APIs produced by co-crystallization could renew old drugs and encourage the evergreening of drug patents. Co-crystals have been successfully constructed from various NSAID drugs with GRAS co-formers, with different hydrogen-bond synthon motifs. In practice, solid-state-based techniques are the most effective methods for forming NSAID co-crystals. NSAID co-crystallization continues to advance through selection of co-formers based on the CSD database, optimization of scale-up procedures, and improvement of physicochemical properties by modification of co-crystal drug delivery. NSAID co-crystals have also been shown to accelerate the onset of action, extend the drug's duration of action, and decrease the tolerance doses. NSAID co-crystal development for medicinal products nevertheless faces many challenges, mainly in selecting safe co-formers and handling unpredictable conditions. The "spring and parachute" phenomenon, Ostwald ripening, decreases in solubility and dissolution rate due to environmental pH, co-crystal transformation caused by temperature, humidity, or chemical change, polymorphism, and incompatible excipients in the formulation may all arise in co-crystal development.
A detailed scientific understanding of ingredient properties during preformulation, of co-crystallization techniques, of the supramolecular interactions formed, and of their manifestations in the biopharmaceutical profile will provide opportunities for new NSAID product development.
Effect of feedback-integrated reflection, on deep learning of undergraduate medical students in a clinical setting
eb2aa819-555b-4ce4-8d36-82354b809a78
11731358
Gynaecology[mh]
Learning is a multifaceted process that extends beyond the mere acquisition of knowledge to include the ability to critically evaluate, reflect on, and apply that knowledge effectively. In this context, reflection and feedback emerge as two crucial metacognitive strategies for enhancing the learning process. Both have substantial evidence supporting their role in promoting deep or meaningful learning. Meaningful learning, as described by David Ausubel, takes place when learners connect new information to their existing cognitive structures. Unlike rote memorization, it emphasizes understanding concepts, making connections, and applying knowledge in diverse contexts. This is especially vital in medical education, as it fosters the development of clinical reasoning and problem-solving abilities, which are essential for professional competence and delivering effective patient care. Reflection fosters self-regulated learning by enabling learners to critically evaluate their performance, identify gaps, and make plans to improve. Feedback, in turn, provides external insights that complement reflection, helping learners recognize their strengths and weaknesses, adjust their learning strategies, and enhance clinical reasoning and decision-making skills. Together, these tools form a powerful combination for fostering meaningful learning, as they help students connect theoretical knowledge with practical application, enabling them to deepen their understanding. In medical education, feedback is traditionally provided post-assessment as part of formative assessment, focusing on performance evaluation. However, integrating feedback into the learning process synchronously, combined with reflective practices, can significantly enhance self-regulated learning. Most educational literature treats feedback and reflection as separate metacognitive processes, and only a few studies have examined their combined benefits. For instance, the U.K.
Foundation Programme integrates reflective practice with structured feedback, demonstrating improvements in clinical reasoning and the development of lifelong learning habits among trainees. Similarly, reflective portfolios in the U.S., when paired with personalized feedback, have been shown to enhance self-regulated learning and critical assessment skills in medical students.

Statement of the problem

Despite the recognized importance of reflection and feedback in medical education globally, their combined implementation remains limited. Most studies on these strategies rely on qualitative approaches, lacking robust quantitative evidence to evaluate their effectiveness. Moreover, the use of feedback combined with reflection as a metacognitive learning strategy in undergraduate medical education, particularly in clinical settings, is underexplored. This gap in evidence necessitates an investigation to determine the combined effectiveness of reflection and feedback in enhancing deep or meaningful learning. Such an evidence-based approach can provide critical insights for improving clinical education in resource-limited settings.

Conceptual framework

This study is grounded in Self-Regulated Learning Theory, which highlights learners' active monitoring, evaluation, and regulation of their learning for improved outcomes. Reflection enables critical analysis of experiences, while feedback addresses knowledge gaps and guides future learning strategies. Together, these iterative mechanisms promote structured engagement and enhanced learning, as illustrated in Fig. . Rooted in constructivism and pragmatism, the framework views reflection as a tool for deeper learning and feedback as a practical means to refine learning approaches.

Research question

Does the integration of feedback with reflection significantly enhance meaningful learning, as measured by higher-order MCQ scores, among undergraduate medical students compared to reflection alone?
Objective of the study

To evaluate the impact of feedback-integrated reflection versus reflection alone on higher-order MCQ scores among undergraduate medical students in a gynecology clinical setting. We hypothesize that the integration of feedback with reflection significantly improves higher-order MCQ scores among undergraduate medical students compared to reflection alone, fostering deeper learning and better clinical reasoning. The findings of this study are particularly relevant for curriculum designers and educators in medical and health professions education. These findings will contribute to the growing body of evidence supporting the integration of feedback into reflective practices to enhance learning outcomes in medical education.

Study design

This study employed an experimental design to determine the impact of combining feedback with reflection versus reflection alone on higher-order MCQ scores, representing learning outcomes, among undergraduate medical students in a clinical gynecology setting. Ethical approval for the study was obtained from the Institutional Review Committees of Riphah University and Rawalpindi Medical University (App #Riphah/IRC/23/3034; Ref #341/IREF/RMU/2023).

Study setting

The study was conducted at Rawalpindi Medical University during an 8-week clinical rotation in gynecology and obstetrics, which included ward, outpatient, and operation theater activities. The structured intervention occurred over six consecutive days as part of this rotation.

Sample/participants

Participants included fifth-year undergraduate medical students from Rawalpindi Medical University.
Students were recruited after providing informed consent. Participation in the study was voluntary, and it was explicitly stated that the study results would not influence students' academic grades. A total sample size of 68 students (34 per group) was determined using the G*Power sample size calculator (effect size = 0.7, α = 0.05, power = 0.80), based on similar previous studies. A simple randomization method was used to assign students to two groups:

Study group (feedback + reflection): 34 participants.

Control group (reflection only): 34 participants.

This was done manually by assigning each participant a number written on an identical slip of paper; the slips were folded, placed into an opaque container, and drawn one at a time, alternately assigning participants to the study and control groups. The process was overseen by an independent individual to minimize selection bias.

Procedure

All participants attended a refresher training session on reflective writing, which revisited Gibbs' Reflective Cycle. After randomization, baseline equivalence of knowledge between the two groups was confirmed through a pre-test. Both groups participated in identical teaching sessions over six days, facilitated by the same instructor. These sessions included ward-based learning and interactive small-group discussions focusing on six clinical cases, one per day (antepartum hemorrhage, postpartum hemorrhage, pregnancy-induced hypertension, gestational diabetes mellitus, anemia, and intrauterine growth restriction). Each case was approached holistically, encompassing history taking, physical examination, diagnostic investigations, differential diagnosis, and management strategies.
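The G*Power figure reported above (d = 0.7, α = 0.05, power = 0.80, 34 per group) can be approximated by hand with the standard normal-approximation formula n = 2·((z_{1−α/2} + z_{1−β})/d)². A sketch; G*Power itself solves the exact noncentral-t problem, which nudges the rounded result from 33 to 34 per group:

```python
# Sketch: per-group sample size for a two-sample t-test, normal approximation.
# Inputs mirror those reported in the study: d = 0.7, alpha = 0.05, power = 0.80.
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for power = 0.80
    return 2 * ((z_a + z_b) / effect_size) ** 2

n = n_per_group(0.7)
print(f"normal approximation: {n:.1f} -> {ceil(n)} per group")
```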
At the end of each session both groups submitted written reflections based on Gibbs Reflective Cycle, guided by pre-designed prompts to facilitate structured reflections. These prompts included: What have you learned from the clinical ward class? What are your thoughts and feelings about the topic? What were you thinking during the topic discussion? Any previous experience of the situation/topic? How did this session help you in making differential diagnoses? Were there any ambiguous or difficult-to-understand concepts? What is your plan to improve learning based on this experience? These prompts encouraged participants to evaluate their learning, identify challenges, and plan improvements. Intervention Verbal feedback was given by the same facilitator to the study group after each reflective activity. The facilitator read the students’ reflections and listed the key points highlighted in them, both positives for reinforcement and areas of concern. Based on these key points, verbal feedback was provided the following day before the start of the next activity; its duration was up to 1 h. Feedback was provided in a small group setting, focusing specifically on these identified points. Utilizing the “Ask-Tell-Ask” model, the facilitator first posed questions to the respective students to gauge their understanding and perspectives on their reflections (“Ask”). After listening to their responses, the facilitator offered targeted feedback and insights based on the reflections (“Tell”). For example, if a student reflected that they were unable to understand the pathophysiology behind fetal complications caused by hypertension, the teacher could give feedback by linking hypertension to placental blood circulation, which is compromised in hypertensive patients.
Finally, the facilitator asked follow-up questions to encourage further exploration and ensure students could articulate how they would apply the feedback to their future reflective practices (“Ask”). This allowed for a more immediate and relevant discussion of the reflections. This structured approach not only personalized the feedback but also made it feasible for facilitators to engage with multiple students effectively within a limited timeframe. Feedback sessions lasted five minutes per student, with approximately one hour allocated per group session, and were tailored to individual needs. The control group participated in the same teaching and reflection activities but did not receive feedback on their reflections. The summary of the methodology used is given in Fig. . Data collection Pre-Test and Post-Test Scores: Quantitative data were collected using scores from the validated MCQs administered to both groups before and after the intervention. A set of 30 validated multiple-choice questions (MCQs), covering the six key topics in gynecology taught over six days, was used for evaluation. Each topic contributed 5 MCQs, aligned with higher-order cognitive levels, i.e., application (C3), analysis (C4), and evaluation (C5). The MCQs were reviewed and validated by two experts in obstetrics and gynecology, one of whom had expertise in medical education. The pre-test served as a baseline measure for students’ knowledge before the study, while the post-test assessed learning outcomes following the intervention. Both tests were designed with comparable levels of difficulty to ensure consistency and reliability. Descriptive Feedback: Informal student perceptions of the feedback process were recorded but not formally analyzed in this study. Data analysis Data were analyzed using SPSS version 26. Descriptive statistics, including frequency, percentage, mean, and standard deviation, were calculated.
Paired-sample t-tests were conducted to compare pre-test and post-test scores within each group to measure knowledge improvement. An independent-sample t-test was used to compare the post-test scores between the study and control groups to evaluate the effect of feedback. A p-value < 0.05 was considered statistically significant. To evaluate the effectiveness of the intervention, normalized learning gain (NLG) was calculated for both the intervention (study) and control groups. The formula used for this calculation is: $$\text{NLG}\;(\%)=\frac{\text{post-test score}-\text{pre-test score}}{\text{maximum score}-\text{pre-test score}}\times 100$$ $$\text{Net learning gain}=\text{NLG}_{\text{study}}-\text{NLG}_{\text{control}}$$ The net learning gain was then calculated as the difference in normalized learning gains between the study and control groups to quantify the additional impact of the intervention. Control of bias Bias was minimized through randomization, consistent facilitation by a single instructor, and assurance that study grades would not impact final scores. The reflection and feedback process was also standardized, ensuring consistency and validity in the intervention.
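As a sketch of the NLG calculation (assuming the standard normalized-gain definition, i.e., the fraction of the attainable improvement that was actually achieved; the net gain is then simply the difference between the two groups' NLG values):

```python
def nlg(pre: float, post: float, maximum: float) -> float:
    """Normalized learning gain: percent of attainable improvement achieved."""
    return (post - pre) / (maximum - pre) * 100

# Worked example: moving from 10/30 to 20/30 captures half the possible gain.
print(nlg(10, 20, 30))  # 50.0
```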
This randomized controlled trial included 68 final-year medical students of either gender. Gender distribution between the control and study groups showed no statistically significant difference (M: F ratio; M = 22, F = 44, P = 0.380). Pre-test scores Baseline knowledge was assessed through pre-test scores before the intervention. The study group had a mean pre-test score of 11.68 ± 2.60 (38.93%), while the control group scored 11.29 ± 2.38 (37.15%). There was no statistically significant difference in baseline knowledge between the two groups ( P = 0.52, independent sample t-test). Comparison between pre and post-test scores of each group Within-group comparisons using paired sample t-tests showed significant improvements in post-test scores for both groups ( P = 0.0001) as shown in Table . Comparison between Post-test scores Post-test scores, conducted on the 7th day after the intervention, demonstrated a significant difference between the two groups. The study group scored a mean of 20.88 ± 2.98 (69.32%), compared to the control group, which scored a mean of 15.29 ± 2.66 (51.00%). This difference was statistically significant ( P = 0.0001, independent sample t-test), as shown in Table . Learning gain The difference in learning gain, as evidenced by mean scores of the study and control groups, is shown in Fig. .
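As a rough consistency check, the between-group comparison can be recomputed from the reported summary statistics. This is a sketch using Welch's formula (the paper does not state which t-test variant SPSS ran, so the exact statistic is an assumption):

```python
from math import sqrt

def welch_t(m1: float, s1: float, n1: int, m2: float, s2: float, n2: int) -> float:
    """Welch's t statistic from group means, standard deviations, and sizes."""
    return (m1 - m2) / sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

t = welch_t(20.88, 2.98, 34, 15.29, 2.66, 34)
print(round(t, 2))  # 8.16, far above the ~2.0 critical value, consistent with P < 0.001
```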
The percentage gain in learning was calculated for both groups to evaluate the effectiveness of the intervention. The control group, which engaged in reflection alone, demonstrated a percentage gain of 35.43% from pre-test to post-test scores. In comparison, the study group, which received feedback integrated with reflection, achieved a significantly higher percentage gain of 78.77%. Normalized Learning Gain The normalized learning gain (NLG) was calculated to compare the effectiveness of the intervention (feedback-integrated reflection) with that of the control (reflection alone). The study group demonstrated a mean normalized learning gain of 69.07%, compared to 29.18% in the control group. Net learning gain The net learning gain, calculated as the difference in normalized learning gains between the study and control groups, was found to be 39.89%.
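The reported percentage gains follow directly from the group mean scores; a minimal sketch using the relative-gain definition (post − pre) / pre × 100:

```python
def pct_gain(pre: float, post: float) -> float:
    """Relative percentage gain from pre-test to post-test mean score."""
    return (post - pre) / pre * 100

print(round(pct_gain(11.68, 20.88), 2))  # 78.77 (study group)
print(round(pct_gain(11.29, 15.29), 2))  # 35.43 (control group)
print(round(69.07 - 29.18, 2))           # 39.89 net learning gain
```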
This randomized controlled study assessed the effectiveness of integrating feedback with reflection compared to reflection alone in enhancing deep learning among undergraduate medical students based on their post-intervention MCQ scores. The findings demonstrate that feedback-integrated reflection significantly improved the post-test scores in the study group, suggesting improved meaningful or higher-order learning. Unlike previous studies that relied on qualitative measures or peer feedback, this study employed a quantitative approach to measure the impact of metacognitive support on student scores by employing individualized, instructor-led feedback on written reflections. The structured feedback process ensured consistency and provided actionable guidance, resulting in significant learning gains. This approach aligns with the principles of self-regulated learning, where feedback serves as a scaffold, guiding learners to set goals, monitor progress, and adjust strategies to achieve desired outcomes . Research indicates that feedback during the learning process promotes metacognitive engagement. Learners need feedback to focus on valid indicators of competency development and remain motivated to reflect on learning.
Additionally, they require awareness of the value of metacognitive activities and the autonomy to shape their own learning paths . The study’s results are consistent with earlier findings in the literature: one study demonstrated that practicing clinical cases with integrated feedback and reflection led to improved diagnostic accuracy among dermatology trainees . Similarly, reflective practice combined with feedback was found to be an effective approach for helping fourth-year medical students enhance their reflective skills and deepen their medical knowledge and skills in a Thai study . Another study by Larsen et al. showed that reflection followed by feedback enhanced students’ history-taking skills, medical knowledge, and reflective writing abilities . These findings reinforce the conclusion that feedback, when integrated with reflective practices, plays a pivotal role in improving medical students’ learning outcomes . By addressing gaps in performance that reflection alone may not identify, feedback-integrated reflection has proven to improve self-regulation, deepen understanding, and enhance clinical competency across various healthcare settings. Two studies conducted in Saudi Arabia and Taiwan revealed a notable and meaningful association between reflection performance and critical thinking disposition. Furthermore, these studies demonstrated that reflection performance could effectively predict the variability in critical thinking disposition. These results suggest that nursing students who actively engage in reflective practices are more likely to possess stronger critical thinking abilities . The findings are strongly supported by Self-Regulated Learning (SRL) Theory, which provides a theoretical foundation for understanding the mechanisms through which feedback enhances learning.
In our study, integrating feedback with reflection during the learning process provided a scaffold for self-regulated learning, enabling students to actively engage in their educational journey. Reflection, as a metacognitive tool, enables learners to critically evaluate their experiences, identify areas for improvement, and formulate strategies to address knowledge gaps. Feedback plays a complementary role by guiding learners to refine their reflective process, providing targeted insights into their strengths and weaknesses . This iterative cycle of reflection and feedback allows students to adjust their learning strategies, fostering self-regulation, deeper understanding, and critical thinking . In this study, students who received feedback integrated with reflection demonstrated enhanced metacognitive engagement, as reflected in their significantly higher post-test scores compared to those who engaged in reflection alone. This approach not only improved immediate learning outcomes but also equipped students with metacognitive learning skills essential for medical practice . This study has multiple strengths, one of which is its acknowledgment and mitigation of potential confounders. Variability in baseline knowledge levels was addressed through pre-testing and randomization, ensuring balanced groups. Instructor variability was minimized by having a single facilitator conduct all teaching and feedback sessions. Motivation and engagement levels, which could independently influence outcomes, were controlled by standardizing the learning environment and reassuring students that participation would not impact their final grades. Peer interactions, a potential confounding factor, were minimized by structuring sessions to focus on individual reflections and feedback. Additionally, variability in the quality of written reflections was reduced by providing participants with standardized training in reflective writing prior to the intervention. 
These measures ensured that the observed differences in learning outcomes could be reliably attributed to the intervention itself. Furthermore, the intervention’s adaptability across disciplines and resource-constrained settings highlights its scalability. The results not only confirm the effectiveness of the intervention but also provide actionable strategies for educators to enhance teaching and learning practices. In conclusion, this study demonstrates that feedback-integrated reflection is a powerful tool for enhancing meaningful learning among medical students compared to reflection alone. The intervention’s success in improving learning outcomes supports its inclusion as a core component of educational strategies in medical and health professions education. Limitations The study also has a few limitations. Its short duration may not reflect the long-term effects of feedback-integrated reflection on retention or clinical application, and the focus on a limited number of gynecology cases restricts generalizability. While feedback was standardized, its subjective nature could introduce variability. The study assessed only cognitive outcomes, excluding practical skills and professionalism, and lacked longitudinal follow-up to evaluate the sustainability of learning gains. Contributions and future implications By demonstrating the significant improvement in higher-order cognitive skills among students who received feedback-integrated reflection, the findings highlight the added value of feedback in fostering self-regulation, critical thinking, and clinical reasoning. Future research may explore the application of this approach across diverse clinical disciplines and include student perceptions to provide a more comprehensive understanding of its impact.
Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2
Comparison of the effectiveness of two different concentrations of ropivacaine for intrapleural analgesia in reducing stimulatory pain caused by chest tubes after uniportal video-assisted thoracoscopic surgery: a randomised controlled study
1866075f-e329-4eb0-913c-b7b31d74a99b
11899763
Thoracic Surgery[mh]
The rapid advancement of thoracoscopic surgery has led to an increased focus on minimally invasive techniques, with uniportal thoracoscopy emerging as the leading approach. This method significantly reduces patient discomfort at incision sites compared to traditional thoracotomy . However, both acute and chronic pain can persist following video-assisted thoracoscopic surgery (VATS), with reported incidence rates as high as 90% for acute pain and between 40% and 60% for chronic pain . A major contributor to postoperative pain is the presence of an indwelling chest drainage tube, which has been shown to impede the process of enhanced recovery after surgery . Postoperative indwelling chest tubes following uniportal video-assisted thoracoscopic surgery (UVATS) serve a dual purpose: draining pleural effusion and facilitating lung re-expansion, thereby reducing the risk of postoperative complications . Traditionally, after a lobectomy, the standard practice has been to use two chest tubes: one positioned superiorly to evacuate air and the other inferiorly to remove fluid. However, the discomfort caused by the contact of chest tubes with the pleura often leads to patients being reluctant to engage in coughing and deep breathing exercises, which can negatively impact postoperative oxygenation. This reluctance may result in prolonged hospital stays, increased pain levels, and a higher risk of postoperative complications, such as atelectasis and lung infections . Pain is recognized as the most prevalent direct complication associated with chest tube insertion, with an incidence rate of 4.1%, and a delayed complication incidence of 18%. Generally, smaller-caliber chest tubes (10–14 Fr) are associated with less pain compared to larger-caliber tubes (> 20 Fr) . While numerous studies have focused on pain reduction following chest tube removal [ – ], there is a paucity of data addressing pain management during the period of chest tube drainage. 
Modifying chest tube management protocols appears to be a promising strategy for alleviating pain associated with drainage tubes. One study demonstrated that the use of a 7-Fr central venous catheter instead of a conventional chest tube significantly reduced perioperative pain in patients undergoing VATS . Additionally, another investigation revealed that employing bi-pigtail catheter drainage (two 8-Fr pigtail catheters) resulted in significantly lower Numeric Pain Rating Scale scores for patients three days post-surgery when compared to traditional chest tube drainage using a single 28-Fr chest tube . The concept of intrapleural analgesia (IPA) was first proposed by Reiestad and Stromskag in 1986 . The analgesic efficacy of IPA has long been a subject of debate. In 1987, Rosenberg et al. demonstrated that a single administration of 0.50% bupivacaine in a volume of 15-20 ml, or a continuous infusion of 0.25% bupivacaine at a rate of 5–10 ml/h, could effectively manage post-thoracotomy pain. However, subsequent investigations have proposed that the administration of 40 ml of 0.25% bupivacaine or 20 ml of 0.5% bupivacaine may be inadequate for achieving sufficient analgesia . The principle of IPA entails the infiltration of a local anesthetic into the tissue plane situated between the visceral and parietal pleura, subsequently diffusing into the subpleural space and accompanied by a multisegmental intercostal nerve block. Hence, IPA is also termed interpleural analgesia . Upon entering the pleural space, the local anesthetic initially diffuses to the intercostal spaces situated both above and below the catheter insertion point, and then spreads inwardly towards the paravertebral space. Consequently, the analgesic coverage extends across multiple dermatomes . For thoracoscopic minimally invasive surgery, IPA is not an adventurous analgesic technique; on the contrary, it is even safer and easier to manage than thoracic epidural analgesia (TEA) . 
The administration of IPA through the chest drainage tube does not result in any serious complications, and additionally improves the patient’s ventilation function [ – ]. The safety and efficacy of intrapleural ropivacaine administration have been thoroughly investigated . However, there is limited information regarding the effective concentration of ropivacaine for alleviating chest tube pain after UVATS. The commonly used intrathoracic analgesic concentrations of ropivacaine in clinical anesthesia range from 0.2 to 0.75% [ , , , ]. Previous studies have shown that a single intrapleural administration of 0.75% ropivacaine (15 ml or 20 ml) and four consecutive doses of ropivacaine (15 ml administered every four hours) are effective in reducing cough-related pain after thoracoscopic procedures . Furthermore, patients who underwent ultrasound-guided continuous paravertebral catheterization with a continuous infusion of 0.2% ropivacaine reported greater satisfaction with their pain management compared to those receiving single-shot intercostal blocks following VATS . This study aims to evaluate the effectiveness of intermittent intrapleural infusion of varying concentrations of ropivacaine (0.25% and 0.50%) in alleviating pain and discomfort associated with chest tubes. We hypothesized that in patients undergoing UVATS, ropivacaine IPA would be more effective in alleviating chest tube-related pain than patient-controlled intravenous analgesia (PCIA) alone, and that the analgesic effects of 0.5% and 0.25% ropivacaine would be comparable.
This study was registered in the Chinese Clinical Trial Registration Center (Registration ID: ChiCTR2200065184, https://www.chictr.org.cn/showproj.html?proj=182214 ) and performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. All study subjects provided written signed informed consent. The CONSORT guidelines for reporting randomised control trials were followed (Fig. ). Recruitment and patient involvement We recruited participants who were scheduled to undergo UVATS at the First Affiliated Hospital of Chongqing Medical University from 2022/11/01 to 2023/05/31. The inclusion criteria include: (1) age ≥ 18 years; (2) Grade I-III according to the ASA; (3) no contraindications to general anesthesia; (4) BMI 18.5–24 kg/m 2 . The exclusion criteria include: (1) pregnant or breastfeeding patients; (2) history of chronic pain; (3) history of alcohol or opioid dependence; (4) cardiopulmonary insufficiency or heart failure; (5) central nervous system diseases; (6) hepatorenal insufficiency; (7) allergic to amide local anesthetics or opioid medications; (8) intraoperative conversion to open-heart surgery; (9) abnormal sensation of thoracic skin or the presence of infections and ulcers at the incision site of thoracic surgery; (10) participation in other clinical trials at the same time; 11. inability to score the VAS Scale; 12. the patients suffered serious complications or accidents before the end of the trial; 13. the patients requested to withdraw from the clinical trial; 14. a postoperative indwelling closed chest drain was not required; 15. postoperative transfer to the ICU was required. In this study, we recruited 93 patients, yet three were excluded from the final analysis due to not meeting the inclusion criteria. Trial procedures Ninety subjects were evenly divided into three groups ( n = 30). 
All three groups of patients received ultrasound-guided thoracic paravertebral block (TPVB) on the surgical side before surgery; the dose was 40 ml of 0.25% ropivacaine. The ropivacaine injection (10 ml:100 mg, Aspen, batch no. NBPS) was preservative-free and contained no additives or chemical stabilizers. Control group (Group N): routine use of PCIA after surgery; 0.25% group (Group L): routine use of PCIA after surgery combined with intrapleural infusion of 0.25% ropivacaine (total volume 200 ml); 0.5% group (Group M): routine use of PCIA after surgery combined with intrapleural infusion of 0.5% ropivacaine (total volume 200 ml). The randomization sequence was independently created by team member YG-H using Excel’s random number generator. YG-H was not involved in data collection or patient care. The codes were sealed in sequentially numbered opaque envelopes and were only opened by GM-W before analyzing the results. Data collection was conducted by WJ-T, who remained blinded to the group assignments. Preoperative preparations During the preoperative evaluation, the researchers fully educated the patients and their families about the use, advantages, and disadvantages of the pulse patient-controlled analgesic pump (Apon ® , Jiangsu, China). Patients were instructed to press the bolus button of the analgesic device whenever their self-assessed VAS score exceeded 3. VAS scores were recorded at rest and while coughing at 6 h, 12 h, 24 h, and 48 h postoperatively. Anesthesia protocol All subjects fasted from solids for 8 h and from liquids for 2 h before surgery. Before surgery, an ultrasound-guided paravertebral nerve block (0.25% ropivacaine, 40 ml) was performed on the surgical side. Anesthesia was induced with intravenous propofol (2 mg/kg), sufentanil (0.5 µg/kg), vecuronium bromide (0.1 mg/kg), and midazolam (0.05 mg/kg).
Anesthesia was maintained with propofol (4 mg/kg/h by infusion), sevoflurane (1%~2% inhaled), remifentanil (10 µg/kg/h by infusion), sufentanil (10 µg/h), and intermittent boluses of vecuronium bromide and sufentanil to maintain a bispectral index (BIS) of 40–60. During two-lung ventilation, tidal volume was maintained at 8 ml/kg, respiratory rate at 12 breaths/min, and the inspiratory/expiratory ratio at 1:2. During surgery, the patient’s heart rate, arterial blood pressure, and arterial blood CO 2 partial pressure were continuously monitored. During mechanical ventilation, parameters such as tidal volume, respiratory rate, and PEEP were dynamically adjusted based on blood gas analysis and end-tidal CO 2 (PETCO 2 ). For single-lung ventilation, tidal volume was set at 8–10 ml/kg with a respiratory rate of 15 breaths per minute. PaO 2 was maintained above 70 mmHg and PaCO 2 at 37–40 mmHg, while peak inspiratory pressure was limited to 30 cmH 2 O. If PaO 2 dropped or hypoxemia occurred, PEEP was applied to the ventilated lung, not exceeding 5 cmH 2 O. Two-lung ventilation was preferred whenever possible. The dosage of anesthetic medication was adjusted according to the BIS value, and vasoactive drugs and fluid replacement were used based on the patient’s circulatory condition. A double-lumen endotracheal tube or bronchial blocker was inserted according to the patient’s surgical side. At the end of surgery, all patients were routinely placed on a PCIA pump (sufentanil 50 µg, flurbiprofen axetil 100 mg, saline 69 ml), with a background infusion rate of 2 ml/h, a lockout time of 20 min, a bolus dose of 2 ml, and a maximum dose of 20 ml per hour. Two closed chest tubes of different diameters were routinely placed in the pleural cavity.
The 24-Fr (Cat No. #4242, QINGZE, Jiangsu, China) chest tube was placed at the incision, while the 10-Fr (Cat No. #YB-A-I-3.3/235, YUBANG, Jiangsu, China) chest tube was placed in the second intercostal space below the incision. The anesthesiologist immediately connected the prepared ropivacaine pulse pump to the side channel of the 10-Fr chest tube and administered the first dose of medication into the pleural cavity (Fig. ). A patient-controlled analgesia pump intermittently injected 0.25% or 0.50% ropivacaine into the pleural cavity. The initial loading dose was 15 ml, the background infusion rate was 1.0 ml/h (to avoid catheter blockage), the lockout time was 4 h, a single bolus was 15 ml, and the maximum dosage was 250 mg (total volume 200 ml). The medication in the patient-controlled analgesia pump was infused over the 48 h after surgery. Patients were asked to report rest and cough pain on a standardized VAS at 6 h, 12 h, 24 h, and 48 h after the start of administration. A rest VAS score ≤ 3 was considered effective analgesia; otherwise, analgesia was deemed ineffective, and tramadol injection (2 ml:100 mg) was administered intramuscularly as rescue analgesia. Outcome measures The primary outcome was effective analgesia for drainage discomfort, assessed by the rest and cough VAS scores for chest tube pain within 48 h after ropivacaine administration. Cough VAS was defined as the VAS score while coughing. Scores ranged from 0 to 10, with 0 denoting no pain, 1–3 mild pain, 4–6 moderate pain, and 7–10 intractable pain.
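For concreteness, the ropivacaine mass delivered by this regimen follows from simple arithmetic: a percent (w/v) concentration corresponds to 10 mg/ml per percentage point. A minimal sketch, with helper names of our own choosing:

```python
# Ropivacaine mass delivered by the intrapleural pulse-pump regimen described
# above. A percent (w/v) concentration corresponds to 10 mg/ml per percent.

def mg_per_ml(percent: float) -> float:
    """Convert a percent (w/v) concentration to mg/ml (1% = 10 mg/ml)."""
    return percent * 10.0

def bolus_mg(percent: float, bolus_ml: float = 15.0) -> float:
    """Ropivacaine mass (mg) delivered by a single 15-ml bolus."""
    return mg_per_ml(percent) * bolus_ml

print(bolus_mg(0.25))              # 37.5 mg per bolus at 0.25%
print(bolus_mg(0.50))              # 75.0 mg per bolus at 0.50%
print(mg_per_ml(0.50) * 1.0 * 48)  # 240.0 mg from the 1.0 ml/h background over 48 h at 0.5%
```

Each 15-ml loading dose or bolus thus delivers 37.5 mg of ropivacaine at 0.25% and 75 mg at 0.50%.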
Secondary outcomes included incision pain (rest and cough VAS scores); incidence of hypotension (a decrease in blood pressure of more than 20% from baseline, or an absolute value of < 90 mmHg); nausea and vomiting; bradycardia (heart rate < 60 bpm); and respiratory depression (oxygen saturation < 90%). Sample size estimation The sample size for this study was determined from the outcomes of a preliminary experiment, in which thirty eligible patients were randomly and equally assigned to three groups that received intrapleural analgesia at the end of UVATS with ropivacaine at concentrations of 0.125%, 0.25%, and 0.5%, respectively. The cough VAS scores for drainage tube pain at 6 h were analyzed, with the following results: 3.8 ± 1.03 (group N), 2.5 ± 0.84 (group L), 1.9 ± 0.57 (group M) (Supplemental File ). For a one-way ANOVA comparing three groups, the sample size needed in each group to achieve a power of 0.90 at a significance level of 0.05 was calculated using PASS software (NCSS, LLC), which determined that a total of 81 patients needed to be recruited. Allowing for a 10% dropout rate, a minimum of 30 patients was required in each group. Statistical analysis Normally distributed data are expressed as the mean ± SD. Data with a skewed distribution are expressed as the median (interquartile range, Q1–Q3). Discrete data are expressed as numbers and percentages. For continuous data that met the assumptions of normality and homogeneity of variance, one-way ANOVA with LSD post hoc tests was used for pairwise comparisons. For continuous data with a skewed distribution, the Kruskal-Wallis test with Bonferroni correction was applied for multiple comparisons. Categorical data were compared by the χ 2 test. A two-tailed P -value < 0.05 was considered statistically significant.
All data were analyzed using SPSS software version 26.0 (SPSS, Inc., USA).
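As a rough cross-check of the sample-size rationale, the effect size (Cohen's f) implied by the reported pilot summary statistics can be computed directly. This is a sketch only, assuming equal pilot group sizes; PASS may have used different inputs, so it need not reproduce the total of 81 exactly:

```python
import math

# Pilot cough VAS scores for drainage tube pain at 6 h, as reported: mean and SD.
means = [3.8, 2.5, 1.9]
sds = [1.03, 0.84, 0.57]

grand_mean = sum(means) / len(means)

# Between-group standard deviation of the group means (equal n assumed).
sigma_between = math.sqrt(sum((m - grand_mean) ** 2 for m in means) / len(means))

# Pooled within-group standard deviation.
sigma_within = math.sqrt(sum(s ** 2 for s in sds) / len(sds))

cohens_f = sigma_between / sigma_within
print(round(cohens_f, 2))  # 0.95 -- a very large effect by Cohen's conventions
```

If PASS was fed these summary statistics directly, the recruited total of 81 is conservative for an effect this large; the exact figure depends on the software's assumptions.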
Comparison of demographic information This study included a total of 90 valid samples, with 30 cases in each group. All patients received the first intrapleural administration from the ropivacaine pulse pump immediately after surgery (Fig. ). There were no statistically significant differences among the three groups in age ( p = 0.857), sex ( p = 0.730), BMI ( p = 0.817), operation time ( p = 0.277), disease type ( p = 0.600), ASA classification ( p = 0.378), or other characteristics, as shown in Table . Comparison of postoperative chest tube pain scores by time period To evaluate the efficacy of the two concentrations of ropivacaine on chest drainage tube pain, rest and cough VAS scores were recorded at 6, 12, 24, and 48 h postoperatively. For the rest VAS scores, no significant differences in drainage tube pain intensity were observed among the three patient cohorts at the 6-hour ( p = 0.062), 24-hour ( p = 0.234), and 48-hour ( p = 0.687) post-operative marks. Notably, at the 12-hour mark, Group M exhibited a significantly lower pain level than both Group N ( p = 0.028) and Group L ( p = 0.011): the mean VAS score for Group M at this point was 0.83, versus 2.17 for Group N and 1.80 for Group L. Regarding the cough VAS scores, at the 6-hour post-operative timepoint, both Group M ( p < 0.001) and Group L ( p = 0.009) reported significantly lower drainage tube pain levels than Group N, while no statistically significant difference was found between Group M and Group L ( p = 0.570). At the 12-hour, 24-hour, and 48-hour marks, both Group M ( p < 0.001) and Group L ( p < 0.001) continued to experience significantly lower drainage tube pain levels than Group N. Nonetheless, the analgesic efficacy of Group M and Group L remained comparable at these timepoints, with no significant differences observed ( p = 0.263 at 12 h, p = 0.775 at 24 h, and p = 0.425 at 48 h) (Table ; Fig. ).
Comparison of postoperative incision pain levels among the three groups of patients The analgesic effect of the two concentrations of ropivacaine on postoperative surgical incision pain was also evaluated; rest and cough VAS scores were recorded at 6, 12, 24, and 48 h after surgery. Regarding the rest VAS, at the 12-hour postoperative mark, Group M exhibited significantly lower surgical incision pain scores than Group N ( p < 0.001): the mean VAS score was 0.87 for Group M versus 2.17 for Group N. However, no statistically significant difference was observed between Group M and Group L ( p = 0.055), nor between Group L and Group N ( p = 0.729). At the 6-hour, 24-hour, and 48-hour postoperative time points, the differences in surgical incision pain levels among the three groups were not statistically significant ( p = 0.840 at 6 h, p = 0.621 at 24 h, and p = 0.950 at 48 h). Regarding the cough VAS, at all postoperative time points evaluated, Group M demonstrated significantly lower surgical incision pain than Group N ( p < 0.001 for all time points). Compared with Group N, Group L also showed a significant reduction in pain levels ( p = 0.024 at 6 h, p = 0.009 at 12 h, p = 0.006 at 24 h, and p = 0.014 at 48 h). Notably, there were no statistically significant differences in pain levels between Group M and Group L at any time point ( p = 0.438 at 6 h, p = 1.00 at 12 h, p = 1.00 at 24 h, and p = 0.667 at 48 h) (Table ; Fig. ). Secondary outcome of analgesia during 48 h after surgery Within 48 h after surgery, there were no significant differences in the incidence of adverse reactions such as respiratory depression, hypotension, nausea and vomiting, bradycardia, dizziness, and hypoxemia among the three groups, as shown in Table .
UVATS has emerged as a prominent minimally invasive surgical technique for lung procedures. Its increasing prevalence in lung surgeries can be attributed to several advantages, including minimal tissue trauma, expedited postoperative recovery, and a lower incidence of complications .
In contrast to traditional three-port thoracoscopy, single-port thoracoscopy is characterized by a single incision, typically utilizing a thick silicone tube for routine drainage of the thoracic cavity. However, this approach does not effectively alleviate postoperative pain or promote faster healing of the incision . The presence of the chest tube within the incision site complicates secure suturing of the chest wall musculature. After the tube is removed, a cavity that is difficult to heal remains within the chest wall muscle, and fluid accumulation in this space further hinders healing . Additionally, the insertion of the chest tube into the thoracic cavity may lead to complications such as intercostal nerve compression, pleural irritation, and diaphragmatic discomfort. Notably, the inner diameter of the chest tube has been identified as a significant independent risk factor for the severity of postoperative pain . The clinical presentation of local anesthetic systemic toxicity (LAST) encompasses prodromal symptoms such as oral paresthesia, metallic taste, disorientation, dizziness, and somnolence, followed by central nervous system manifestations such as epileptic seizures. The most severe consequence is cardiac arrest, and toxic symptoms may emerge as late as 6 h after administration of the local anesthetic . The maximum recommended single dose of ropivacaine is 3 mg/kg, with a total dose not exceeding 200 mg . To determine whether ropivacaine can effectively alleviate postoperative drain-related pain while ensuring adequate analgesia, it is imperative to identify the minimum concentration of ropivacaine that provides sufficient analgesia while minimizing the volume of local anesthetic used and the associated side effects. A notable finding of this study is the equivalent analgesic potency for chest tube pain demonstrated by the 0.5% and 0.25% concentrations of ropivacaine.
This result may reflect the fact that the analgesic effect depends primarily on the dose of the local anesthetic rather than its concentration, since its mechanism of action requires diffusion and infiltration to the terminal nerves in the pleural cavity. Furthermore, administering TPVB to all patients before surgery helps reduce central sensitization and pain intensity, thereby potentially minimizing any differences in analgesic efficacy among the ropivacaine concentrations used. The findings of this study also indicate that ropivacaine in conjunction with PCIA yields superior relief of surgical incision pain compared with PCIA alone. However, there was little difference between the two concentrations of ropivacaine in their analgesic effect on surgical incision pain at 6, 12, 24, and 48 h after surgery. Notably, at 12 h postoperatively, patients receiving 0.5% ropivacaine exhibited significantly lower rest VAS scores for chest tube pain than both the 0.25% ropivacaine group and the PCIA control group, and the rest VAS scores for surgical incision pain in the 0.5% ropivacaine group were also significantly lower than those in the PCIA control group. These findings are consistent with a prior study by Jian et al. , which reported that 0.50% ropivacaine provided superior analgesic effects compared with 0.33% ropivacaine in patients undergoing thoracoscopic lung wedge resection, particularly within the first 12 h post-surgery. Concerning the analgesic difference observed at the 12-hour postoperative mark, we speculate that it may be associated with the plasma metabolic rate and the accumulating dose of ropivacaine. Preliminary experiment results showed that 0.125% ropivacaine failed to achieve effective drainage tube analgesia, whereas 0.25% and 0.5% ropivacaine demonstrated relatively better analgesic effects.
Consequently, only 0.25% and 0.5% concentrations of ropivacaine were selected for our formal experiments. We hypothesize that the unsatisfactory analgesic effect of 0.125% ropivacaine may be related to the mode of administration. IPA involves the deposition of the local anesthetic within the pleural cavity, a process that necessitates adequate time and space for diffusion and infiltration to reach the nerve endings of the chest wall and intercostal nerves. Moreover, the accumulation of blood and exudative fluids within the pleural cavity can further dilute the local anesthetic. Concurrently, a proportion of the local anesthetic is lost through the chest drainage tube [ , , ]. Collectively, these factors contribute to the suboptimal blocking effect of 0.125% ropivacaine. In this study, we utilized the auxiliary tube of the multi-channel thoracic drainage system for drug administration without the need for additional tubing. Ropivacaine was delivered through the auxiliary tube using patient-controlled analgesia in the postoperative period. This approach not only reduced patient discomfort but also lowered the risk of infection. Our findings further indicated that there were no statistically significant differences in adverse effects between the experimental and control groups, suggesting that both concentrations of ropivacaine can be safely employed for IPA. Due to the susceptibility of patients to postoperative infectious complications in the lungs, we consider it unethical to instill physiological saline into the pleural cavity of patients for two consecutive days following UVATS, as this practice offers no clinical benefits to the patients and may even pose clinical risks . Therefore, this study was unblinded. This study has several limitations. The investigation did not evaluate the median effective analgesic concentration of ropivacaine; instead, it concentrated solely on the analgesic effects of two lower concentrations. 
As a result, the precise minimum effective concentration requires further investigation. In summary, both 0.25% ( p < 0.001) and 0.50% ( p < 0.001) concentrations of ropivacaine were effective in alleviating chest tube pain within 48 h after UVATS during continuous IPA treatment, with no significant difference in adverse drug reactions among the three groups ( p = 0.383). Considering the lower dosage and comparable efficacy, 0.25% ropivacaine may be deemed the superior choice.
New Targeted Therapies and Combinations of Treatments for Cervical, Endometrial, and Ovarian Cancers: A Year in Review
Recent preclinical and clinical research has led to impressive advances in genital cancer, from examining its cellular origins to gaining insight into the mechanisms of DNA damage repair that can be exploited for various therapies. Moreover, studies have shown clinical benefits for the inhibition of PARP (poly (ADP-ribose) polymerase) and for cell cycle modulation, and have identified molecular features related to the therapeutic response. In 2020, the COVID-19 pandemic dominated the medical world, shaping public health policies and scientific research efforts. Nevertheless, for women whose lives are affected by gynecological cancers (mainly cervical, uterine, and ovarian cancers), the burden of these neoplasms in terms of incidence and mortality remained. Thus, clinical care and research were forced to adapt in response to the pandemic, and encouragingly, research in gynecological cancers has remained active. Cervical cancer remains one of the most common cancer diagnoses in women, despite the spread of screening programs. Worldwide, it has a higher incidence and mortality rate than uterine and ovarian cancer; according to Globocan 2020 data, it ranks fourth in incidence among women (604,127 cases detected in 2020, cumulative risk of 1.8%), after breast, colorectal, and lung cancer . 2.1. Screening and Prevention In July 2021, the World Health Organization published the second edition of its guideline for screening and treatment of pre-cancerous cervical lesions for the prevention of cervical cancer . In addition to vaccination against human papillomavirus (HPV), the main etiological factor in the development of cervical cancer, the implementation of a global screening strategy could prevent more than 62 million deaths from cervical cancer over the next 100 years.
The WHO recommends HPV DNA detection in a “screening, triage and treatment” approach from the age of 30, with regular screening every 5 to 10 years, for the general population and from the age of 25, with regular screening every 3 to 5 years, for the HIV-positive population. A recently published study showed that the COVID-19 pandemic dramatically reduced (by 84%) cervical cancer screening in the United States during April 2020 compared with the previous 5-year averages for that month . The results of a recent genome-wide association study (GWAS) provide new evidence of genetic susceptibility to cervical cancer, especially variants of the PAX8, CLPTM1L, and HLA genes, suggesting that their mutations disrupt apoptotic and immune function pathways. Future studies that integrate the interaction between the host and the virus, genetics, and epigenetics could further elucidate the complex interactions that predispose women to cervical cancer . 2.2. Surgery Regarding surgery, minimally invasive radical hysterectomy was associated with lower rates of disease-free survival and OS than open abdominal radical hysterectomy in women with early-stage cervical cancer, according to the LACC study (The Laparoscopic Approach to Cervical Cancer) . After the publication of this study, a recent assessment showed that the use of minimally invasive surgery decreased by 73% in academic centers and by 19% in non-academic centers ( p = 0.004) . 2.3. Radiotherapy, Chemotherapy, and Brachytherapy For locally advanced cervical cancer, radiotherapy concomitant with chemotherapy (cisplatin) plus brachytherapy has been the standard of care since 1999. However, many patients relapse, and, in many cases, distant metastases occur. Over time, various studies have suggested that additional adjuvant chemotherapy afterward could bring further benefits. Despite the flaws of these studies—including short follow-up and treatment intolerance—they have changed medical practice in some centers.
The international phase III OUTBACK study tested the effect of four cycles of adjuvant chemotherapy (carboplatin and paclitaxel) after concomitant radio-chemotherapy in women with locally advanced disease (FIGO stages IB1 with positive lymph nodes, IB2, II, IIIB, or IVA), the primary goal being overall survival (OS) at 5 years . After a median follow-up of 5 years, OS at 5 years was 71% in the radio-chemotherapy group and 72% in the group with the addition of adjuvant chemotherapy (HR = 0.90; p = 0.8). Progression-free survival (PFS) was also similar between arms: 61% and 63%, respectively (HR = 0.86; p = 0.6). The study concluded that in women with locally advanced cervical cancer, adjuvant chemotherapy does not add any benefit to standard concomitant cisplatin-based radio-chemotherapy, as reported at the annual ASCO 2021 conference. In the phase III INTERLACE study, additional induction chemotherapy before radio-chemotherapy is being evaluated, which may induce a better response and increased tolerance in patients . Chemotherapy with MRI (magnetic resonance imaging)-based image-guided adaptive brachytherapy (IGABT) resulted in effective and durable local control at all stages of locally advanced cervical cancer with tolerable side effects. These results, published in 2021 (prospective EMBRACE-I cohort study), are a positive development in the treatment of locally advanced cervical cancer and could serve as a benchmark for clinical practice and all future studies. At a median follow-up of 51 months, overall 5-year disease control was 92%, differing by FIGO stage: it ranged from 89% in IIA2 and IVB to 98% in IB1 and 100% in IIIA .
The final analysis of the PARCER study showed that the incidence of late gastrointestinal toxicity of grade ≥2 at 3 years was 21.1% with image-guided intensity-modulated radiotherapy (IG-IMRT), compared to 42.4% with 3D conformal radiotherapy (3D-CRT). However, there were no differences in disease outcomes, as the 3-year pelvic relapse-free survival and disease-free survival in the IG-IMRT versus the 3D-CRT arm were 81.8% versus 84% and 76.9% versus 81.2% . Endostar, a recombinant human (rh)-endostatin (a fragment derived from type XVIII collagen) with anti-angiogenic properties, was analyzed in combination with platinum-based chemotherapy in the first-line treatment of recurrent/metastatic cervical cancer in a single-arm, prospective phase II study. With a median PFS of 12 months, an overall response rate of 50.0%, and a disease control rate of 71.4%, the combination of platinum-based chemotherapy and endostar showed a high level of effectiveness . 2.4. Immunotherapy Encouraging data have emerged for checkpoint inhibitors as a second-line treatment for recurrent disease. The PD-1 inhibitor cemiplimab-rwlc became the first immunotherapeutic agent to produce a statistically and clinically significant survival benefit in recurrent or metastatic cervical cancer that progressed after first-line platinum-based chemotherapy. In the global phase III randomized EMPOWER-Cervical 1/GOG-3016/ENGOT-cx9 study, second-line treatment with cemiplimab resulted in a 27% decrease in the risk of death compared with chemotherapy in the squamous cell carcinoma population. The median OS in this group was 11.1 months with cemiplimab compared with 8.8 months with chemotherapy (HR = 0.73; p = 0.00306) .
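The 27% figure quoted above is simple arithmetic on the reported hazard ratio; a minimal sketch (the helper name is ours, not taken from the trial report):

```python
def risk_reduction_pct(hazard_ratio: float) -> float:
    """Percent reduction in the instantaneous risk of an event
    implied by a hazard ratio versus the comparator arm."""
    return (1.0 - hazard_ratio) * 100.0

# EMPOWER-Cervical 1: HR = 0.73 for death with cemiplimab vs. chemotherapy,
# i.e. a 27% reduction in the risk of death.
print(round(risk_reduction_pct(0.73)))  # → 27
```

Note that this conversion applies to the instantaneous hazard, not to absolute survival probabilities at a given time point.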
The FDA (Food and Drug Administration) accepted a license application for accelerated approval (an application for permission to place a biological product on the market) for balstilimab, an anti-PD-1 antibody, for the treatment of patients with recurrent or metastatic cervical cancer with disease progression during or after chemotherapy. Balstilimab is a fully humanized immunoglobulin G4 (IgG4) monoclonal antibody designed to block the interaction of PD-1 with its ligands, PD-L1 and PD-L2 . Balstilimab is currently being investigated in clinical trials as monotherapy and in combination with the anti-CTLA-4 antibody zalifrelimab. The findings of a large (155 patients) single-arm phase II study evaluating the safety and antitumor activity of balstilimab in combination with zalifrelimab for up to 2 years in previously treated patients with recurrent/metastatic cervical cancer showed impressive response rates (including complete remissions—8.8%), duration of response (9.3 months—not reached), and OS (69% at 6 months and 52.7% at 12 months), with manageable tolerability. The clinical benefit was highest in patients with PD-L1-positive tumors, but activity was also present in PD-L1-negative tumors . Regardless of PD-L1 expression or concurrent bevacizumab usage, pembrolizumab plus chemotherapy improved PFS and OS in patients with persistent, recurrent, or metastatic cervical cancer, according to the first interim analysis of the randomized, double-blind, phase III KEYNOTE-826 study, presented at ESMO 2021. These findings show that pembrolizumab plus chemotherapy, with or without bevacizumab, may be a new standard of care for this population, with a tolerable safety profile .
In patients with recurrent/advanced cervical cancer, toripalimab (a humanized IgG4 antibody specific for the human PD-1 receptor), in combination with concurrent chemoradiotherapy, showed promising anti-tumor effectiveness in a retrospective study presented at ESMO 2021: of 25 patients included, 23 had objective responses (16 complete responses and 7 partial responses), with a 6-month duration-of-response rate of 92%. Moreover, toripalimab had a tolerable safety profile, suggesting that it might be a potential therapeutic option for this population . Tremelimumab (a fully human monoclonal antibody against CTLA-4) plus durvalumab (an anti-PD-L1 antibody) combined with metronomic oral vinorelbine in recurrent cervical cancer was investigated in the multi-cohort phase I/II MOVIE trial. Phase II of the study met its primary endpoint, the clinical benefit rate: the objective response rate was 41.4%, with five complete responses, seven partial responses, and four stable diseases ≥24 weeks. Further research is required on the combination of chemotherapy and immunotherapy in this group of patients . SHR-1701, a new bifunctional fusion protein composed of a monoclonal antibody against PD-L1 linked to the extracellular domain of TGF-β receptor II, was evaluated in a phase I study in patients with advanced cervical cancer who had progressed on one or two lines of platinum-based therapy (or were intolerant to it). Even though the median PFS was only 1.8 months, SHR-1701 holds promising antitumor activity and may prove to be a treatment option after further research . For many cancers, the combination of antiangiogenic therapy and immune checkpoint inhibitors has emerged as a viable treatment option. A phase II study was carried out to determine whether anlotinib (a new multi-target tyrosine kinase inhibitor) combined with sintilimab (a PD-1 antibody) can improve effectiveness and safety in patients with advanced cervical cancer.
In the cohort of 42 patients enrolled, the overall response rate was 61.5% and the disease control rate was 94.9%, with a median PFS of 9.4 months, providing a good perspective on this treatment . Camrelizumab (an anti-PD-1 antibody), apatinib/rivoceranib (a tyrosine kinase inhibitor blocking vascular endothelial growth factor receptor-2), and albumin-bound paclitaxel (nab-paclitaxel) were assessed in advanced cervical cancer, showing good effectiveness with manageable adverse reactions: the overall response rate was 71%, with five complete responses, and the median PFS was 15.0 months, while the median duration of response and median OS were not reached . 2.5. Antibody-Drug Conjugates and Vaccines In a recently published phase II study (innovaTV 204/GOG-3023/ENGOT-cx6), the authors found that tisotumab vedotin (a tissue factor-directed antibody-drug conjugate) produced lasting responses in previously treated patients with recurrent or metastatic cervical cancer . In the study, 101 patients with recurrent or metastatic cervical cancer (squamous cell, adenocarcinoma, or adenosquamous carcinoma) were enrolled between June 2018 and April 2019. The primary endpoint was the objective response rate, with a median follow-up of 10 months at the time of analysis. An objective response was observed in 24 patients (24%, 95% CI = 16–33%), including a complete response in 7 (7%). Another 49 patients (49%) had stable disease, resulting in a disease control rate of 72%. The median duration of response was 8.3 months, median PFS was 4.2 months, and median OS was 12.1 months .
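The binomial confidence interval reported for the objective response rate (24 of 101 patients) can be approximated directly from the raw counts. A sketch using the Wilson score interval; the trial itself likely used an exact (Clopper-Pearson) method, so this is an approximation:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 gives an approximate 95% interval)."""
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# innovaTV 204: objective response in 24 of 101 patients (ORR 24%)
lo, hi = wilson_ci(24, 101)
print(f"ORR 95% CI ≈ {lo:.1%}–{hi:.1%}")  # ≈ 16.5%–32.9%
```

This closely reproduces the reported 16–33% interval; the small discrepancy at the lower bound reflects the difference between the score and exact methods.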
The ENGOT-Cx8/GOG-3024/innovaTV 205 study, reported as interim results at ESMO 2021, showed that both first-line tisotumab vedotin + carboplatin (55% objective response rate, 6% complete responses, and 48% partial responses, median PFS of 6.9 months) and second/third-line tisotumab vedotin + pembrolizumab (35% objective response rate, 6% complete responses, and 29% partial responses, median PFS of 5.6 months) had promising antitumor activity with acceptable safety profiles in patients with recurrent or metastatic cervical cancer . The interim results of a Korean phase II study indicated the effectiveness of the combination of pembrolizumab with the GX-188E therapeutic DNA vaccine (tirvalimogene teraplasmid) in patients with advanced cervical cancer positive for HPV-16 or HPV-18. The combination of pembrolizumab and GX-188E (which induces HPV E6- and E7-specific T-cell activation) produced a response in 42% of the patients evaluated; 15% had a complete response and 27% had a partial response. Treatment-related adverse events were easily manageable . 2.6. Targeted Therapy BYL719 (alpelisib) was used in the treatment of PIK3CA-mutated advanced/recurrent cervical cancer in which at least two lines of therapy had failed, in a small study from the Istituto Nazionale dei Tumori (Milan, Italy). For the six patients included, the objective response rate was 33% but the disease control rate was 100%, with a mean duration of response of 6.6 months (two patients had a partial response and four patients had stable disease). More research is needed to determine alpelisib’s role in terms of efficacy and safety in PIK3CA-mutated advanced/recurrent cervical cancer .
Endometrial cancer ranks sixth in incidence in women worldwide, according to GLOBOCAN 2020 data, with 417,367 new cases in 2020 and a cumulative risk of 1.6% . 3.1. Surgery Although primary debulking surgery is often considered standard for the treatment of stage IV endometrial cancer, it is associated with significant morbidity and poor survival.
Neoadjuvant chemotherapy (NACT) was proposed as an alternative treatment strategy. In a cohort study of 4890 women with metastatic endometrial cancer, 952 women (19.5%) were treated with neoadjuvant chemotherapy. Survival for women treated with neoadjuvant chemotherapy was superior to that of women treated with primary debulking surgery for 3 to 8 months after the initiation of treatment, after which survival was superior for those treated with primary debulking surgery. These results suggest that women treated with NACT, particularly if they ultimately undergo surgery, may have superior survival in the short term. Based on these findings, NACT may be appropriate for select patients with advanced uterine serous carcinoma . 3.2. Immunotherapy ± Targeted Therapy On 21 July 2021, the FDA approved pembrolizumab in combination with lenvatinib (a multikinase inhibitor of vascular endothelial growth factor receptors 1–3, fibroblast growth factor receptors 1–4, RET, KIT, and platelet-derived growth factor receptor-α) for patients with advanced endometrial carcinoma that is not microsatellite instability-high (MSI-H) or DNA mismatch repair-deficient (dMMR), based on the results of the KEYNOTE-775/Study 309 trial. These patients must have progressive disease after any previous systemic therapy and must not be candidates for curative surgery or radiation therapy . For patients with advanced endometrial cancer other than MSI-H or dMMR, the median PFS was 6.6 months (95% CI = 5.6–7.4 months) in patients receiving pembrolizumab/lenvatinib and 3.8 months (95% CI = 3.6–5.0 months) in those receiving chemotherapy of the investigator’s choice (HR = 0.60, 95% CI = 0.50–0.72, p < 0.0001). The median OS was 17.4 months (95% CI = 14.2–19.9 months) and 12.0 months (95% CI = 10.8–13.3 months), respectively (HR = 0.68, 95% CI = 0.56–0.84, p = 0.0001). The objective response rate was 30% (95% CI = 26–36%) and 15% (95% CI = 12–19%), respectively ( p < 0.0001).
The median duration of response was 9.2 months and 5.7 months, respectively, in the two arms . In addition, the FDA accepted for review a new supplemental license application seeking the approval of pembrolizumab monotherapy for the treatment of patients with advanced MSI-H or dMMR endometrial carcinoma, after disease progression following any previous systemic therapy, who are not candidates for curative surgery or radiation therapy . The application is based on the overall response data of the KEYNOTE-158 study, presented at ESMO 2021. Pembrolizumab proved to have a durable overall response rate (48%), with 14% complete responses, improving survival in heavily pretreated patients with advanced MSI-H or dMMR endometrial cancer. This monotherapy also had manageable treatment-related adverse events . The PD-1 inhibitor dostarlimab was granted accelerated approval by the FDA for the treatment of adult patients with recurrent or advanced endometrial cancer that progressed during or after a previous platinum-based therapeutic regimen. The phase I GARNET study showed an objective response rate of 42.3% (95% CI = 30.6–54.6%), with a complete response in 12.7% of patients. At a median follow-up of 14.1 months, the median duration of response was not reached, with 93.3% of responses being maintained for at least 6 months . The combination of immunotherapy and targeted therapy was also assessed in endometrial cancer. Anlotinib (a novel oral tyrosine kinase inhibitor targeting c-kit, fibroblast growth factor receptor, platelet-derived growth factor receptors, and vascular endothelial growth factor receptor) plus sintilimab (an anti-PD-1 immunoglobulin G4 monoclonal antibody) was studied in a prospective, open-label, single-arm, phase II clinical trial in patients with recurrent advanced endometrial cancer. The overall response rate was 77.3%, with a disease control rate of 91.7% and a median PFS not reached.
The median time to first response was 1.5 months (0.7–12.8), suggesting a promising treatment alternative pending further research .
The combination of immunotherapy and targeted therapy was also assessed in endometrial cancer. Anlotinib (a novel oral tyrosine kinase inhibitor targeting c-kit, fibroblast growth factor receptor, platelet-derived growth factor receptors, and vascular endothelial growth factor receptor) plus sintilimab (an anti-PD-1 immunoglobulin G4 monoclonal antibody) was studied in a prospective, open-label, single-arm, phase II clinical trial in patients with recurrent advanced endometrial cancer. The overall response rate was 77.3%, with a disease control rate of 91.7%, and the median PFS was not reached. The median time to first response was 1.5 months (0.7–12.8), making this a promising treatment alternative pending further research.

There were 313,959 new recorded cases of ovarian cancer in women worldwide (eighth position), according to GLOBOCAN 2020 data, with a cumulative risk of 1.18%. In June 2021, ASCO published a resource-stratified guideline that provides evidence-based recommendations for the evaluation of women with ovarian masses, as well as guidance on the treatment of epithelial ovarian cancer in regions that lack adequate resources to provide high-level care. Assessment of symptomatic adult women includes evaluation of symptoms, family history, abdominopelvic ultrasound, and measurement of the serum tumor marker CA-125, where possible. Additional imaging is recommended if CT/MRI resources are available. Diagnosis, staging, and/or treatment primarily involve surgery, before which it is necessary to investigate the presence of metastases. Treatment requires histological confirmation; the surgical goal is to stage the disease and perform complete cytoreduction to no residual disease. In first-line therapy, platinum-based chemotherapy is recommended; in advanced stages, patients may receive neoadjuvant chemotherapy. After neoadjuvant chemotherapy, all patients should be assessed for interval debulking surgery.
Targeted therapy is not recommended in resource-limited settings. Specialized interventions are resource-dependent, for example, laparoscopy, fertility-preservation surgery, genetic testing, and targeted therapy. Multidisciplinary care for ovarian cancer and palliative care should be provided, regardless of the environment or resources.

4.1. Surgery

On 29 November 2021, the FDA authorized an adjuvant for the intraoperative detection of malignant lesions in adult patients with ovarian cancer. Pafolacianine sodium injection (OTL38) is a fluorescent drug that targets the folate receptor, which is overexpressed in ovarian cancer, with the aid of near-infrared fluorescence (NIRF) imaging. The main objective was to achieve R0 resection, which is known to be the strongest predictor of overall survival. Approval was supported by the results of a single-arm, multicenter, open-label trial (NCT03180307), in which NIRF imaging with pafolacianine sodium identified additional lesions that were not scheduled for excision and were not discovered by standard white light or palpation in 33% of patients (36 of 109).

4.2. Chemotherapy and Anti-Angiogenic Treatment

The first-line therapeutic standard for epithelial ovarian cancer remains the combination of paclitaxel and carboplatin, along with cytoreductive surgery. Maintenance with bevacizumab has been approved since 2016, and more recently, front-line maintenance treatment with PARP inhibitors has become the standard of care in ovarian cancer. The GOG-218 and ICON7/AGO-OVAR 11 studies showed that early and continuous addition of bevacizumab to the carboplatin/paclitaxel standard for 15 and 12 months, respectively, significantly improved PFS. In both studies, the maximum benefit was seen at the time of the highest cumulative exposure to bevacizumab, immediately after the last cycle.
Nonetheless, the optimal duration of bevacizumab has never been clearly established; therefore, the recent randomized phase III ENGOT/GCIG study examined whether prolonging bevacizumab treatment up to 30 months would improve its efficacy. Treatment with bevacizumab for a longer period did not improve either PFS or OS in patients with epithelial ovarian, fallopian tube, or primary peritoneal cancer. Therefore, a bevacizumab treatment duration of 15 months remains the standard of care. Adding bevacizumab to ixabepilone (azaepothilone B), a microtubule stabilizer, could be a promising treatment strategy for platinum-resistant or refractory ovarian cancer patients, who currently lack a wide range of treatment options, according to data presented at the virtual edition of the 2021 annual meeting of the Society of Gynecologic Oncology (SGO). The combination of bevacizumab plus ixabepilone significantly improved the objective response rate, PFS, and OS compared with ixabepilone alone. The results of the randomized phase II study showed that 33% of patients responded to bevacizumab plus ixabepilone compared to 8% of those receiving ixabepilone alone; median PFS doubled with the combination (5.5 vs. 2.2 months; HR = 0.33), while median OS improved (10.0 vs. 6.0 months; HR = 0.52).

4.3. Antibody-Drug Conjugates

In patients with recurrent ovarian cancer, the antibody-drug conjugate mirvetuximab soravtansine co-administered with bevacizumab has shown anti-tumor activity that leads to lasting responses in platinum-“agnostic” cases (resistant or sensitive) with strong folate receptor alpha (FR-α) expression. The combination led to a response rate of 64%, a median duration of response of 11.8 months, and a median PFS of 10.6 months in patients with high FR-α expression in the phase I FORWARD II study.
In August 2021, the FDA granted accelerated review to STRO-002, an anti-FR-α antibody-drug conjugate, for the treatment of patients with platinum-resistant epithelial ovarian, fallopian tube, or primary peritoneal cancer who have received one to three previous lines of systemic therapy, according to data from the phase I STRO-002-GM1 study.

4.4. Immunotherapy

The phase III IMagyn050/GOG 3015/ENGOT-OV39 study showed that the addition of atezolizumab to bevacizumab and chemotherapy did not significantly improve PFS in newly diagnosed stage III or IV ovarian cancer patients, either among all patients or among those with positive PD-L1 expression. Current evidence does not support the use of immune checkpoint inhibitors in newly diagnosed ovarian cancer. The phase III “JAVELIN Ovarian 100” study (NCT02718417) was stopped because it did not show any PFS benefit from adding avelumab concomitantly to chemotherapy (carboplatin/paclitaxel) and/or as maintenance treatment in previously untreated patients with advanced epithelial ovarian cancer. In addition, the phase III “JAVELIN Ovarian 200” study did not show any significant improvement in PFS or OS with avelumab alone or in combination with pegylated liposomal doxorubicin (PLD) vs. PLD alone in patients with platinum-resistant or -refractory ovarian cancer. These results do not support the use of avelumab in the front-line treatment setting. Immunotherapy was also studied in the neoadjuvant setting for unresectable stage IIIC/IV ovarian cancer in the phase Ib INEOV trial. Administration of neoadjuvant durvalumab +/− tremelimumab with carboplatin and paclitaxel, prior to interval debulking surgery, proved feasible and safe; however, further research is required on this topic.

4.5. Targeted Therapy

In a phase II study, the addition of the oral Wee1 kinase inhibitor adavosertib to gemcitabine improved PFS and OS in platinum-resistant or refractory patients with recurrent high-grade serous ovarian cancer. PFS was longer with adavosertib plus gemcitabine (4.6 months; 95% CI = 3.6–6.4) vs. 3.0 months (95% CI = 1.8–3.8) with placebo plus gemcitabine (HR = 0.55, 95% CI = 0.35–0.90, p = 0.015). The novel multi-target tyrosine kinase inhibitor anlotinib was assessed for safety and effectiveness as monotherapy in patients with recurrent or resistant ovarian cancer in a prospective, single-arm, single-center phase II clinical study. For the 31 patients included, the median PFS was 5.32 months, the median OS was not reached, and the overall response rate was 25.9%. Anlotinib was also studied in combination with pemetrexed in patients with platinum-resistant ovarian cancer in a single-arm, open-label, phase II study, showing a median PFS of 9.3 months (95% CI = 5.5–13.2), an objective response rate of 36.4% (95% CI = 17.2–59.3), and a disease control rate of 100.0% (95% CI = 73.5–100). The interim results of a study that added a plasmid encoding p62/SQSTM1 (a multi-domain protein that regulates inflammation, apoptosis, and autophagy) to standard gemcitabine chemotherapy suggest that this approach may be effective for patients with platinum-resistant ovarian cancer, resulting in a PFS of 5.7 months (compared to 2.4 months in the control group, p = 0.08). Relacorilant, a selective glucocorticoid receptor modulator, is being studied for its capacity to restore sensitivity to chemotherapy. In a three-arm, randomized, open-label, phase II trial in patients with recurrent platinum-resistant or platinum-refractory ovarian cancer, relacorilant was assessed in combination with nab-paclitaxel.
The study demonstrated that an intermittent regimen of relacorilant given the day before, the day of, and the day after nab-paclitaxel administration (on days 1, 8, and 15 of a 28-day cycle) led to improved PFS and duration of response. At a median follow-up of 11.07 months, the intermittent regimen significantly improved median PFS compared to nab-paclitaxel alone: 5.55 versus 3.76 months (95% CI = 0.44–0.98).

4.6. PARP Inhibitors

Following extremely positive initial results, the 5-year follow-up of the pivotal SOLO-1 study in women with newly diagnosed advanced ovarian cancer and a BRCA1/2 mutation showed that maintenance treatment with olaparib led to a statistically significant doubling of PFS, according to data presented at the SGO 2021 Annual Meeting. Median PFS for the overall population was maintained well beyond the end of treatment: 56.0 months with olaparib versus 13.8 months with placebo (HR = 0.33; 95% CI = 0.25–0.43). The 5-year PFS rates were 48% and 21%, respectively. The phase III SOLO2/ENGOT-Ov21 study showed a numerical but not statistically significant improvement in overall survival with olaparib maintenance therapy compared to placebo in patients with recurrent platinum-sensitive ovarian cancer and a BRCA1/2 mutation (51.7 months vs. 38.8 months). The randomized phase II OCTOVA trial compared olaparib, weekly paclitaxel, and the combination of olaparib plus cediranib in recurrent ovarian cancer, including after previous PARP inhibitor administration or anti-angiogenic treatment. The combination of olaparib plus cediranib had a longer PFS than olaparib monotherapy (HR = 0.70; 60% CI: 0.57, 0.86; p = 0.08). However, there was no difference in PFS between the olaparib and weekly paclitaxel cohorts (HR = 0.97, 60% CI: 0.79, 1.19; p = 0.55).
The addition of niraparib maintenance treatment after platinum-based first-line chemotherapy with bevacizumab showed a clinical benefit in patients with advanced ovarian cancer, according to data from the OVARIO study presented at SGO 2021. The analysis of the phase II OVARIO study showed that 62% of patients in the overall population remained progression-free at 18 months, including 76% of patients in the homologous recombination deficiency (HRD) subgroup and 47% of patients in the homologous recombination proficiency (HRP) subgroup. In patients with relapsed, BRCA-mutated advanced ovarian cancer, treatment with the PARP inhibitor rucaparib led to a significant improvement in PFS compared to standard chemotherapy, according to the results of the international phase III ARIEL4 study (7.4 months vs. 5.7 months; HR = 0.64, p = 0.001). A new PARP inhibitor may soon join the treatment of ovarian cancer, according to data presented at SGO 2021. The results of the phase III study (NCT03863860) of fuzuloparib (previously called fluzoparib) as maintenance therapy in patients with recurrent platinum-sensitive ovarian cancer showed a 7.4-month improvement in median PFS (12.9 vs. 5.5 months; p < 0.0001) and a 75.5% reduction in the risk of disease progression or death compared to placebo (HR = 0.25). The investigator-assessed objective response rate was 70.8% (95% CI = 61.5–79.0), the median investigator-assessed progression-free survival was 10.3 months (95% CI = 9.2–12.0), and the 12-month survival rate was 93.7% (95% CI = 87.2–96.9). In the multicentre, single-arm, phase II ANNIE trial, the safety and efficacy of niraparib combined with anlotinib were evaluated in patients with platinum-resistant recurrent ovarian epithelial, fallopian tube, or primary peritoneal cancer.
The overall response rate was 48.0% (95% CI = 27.0–69.0%; 12 patients with partial responses and 12 with stable disease), while the median PFS and median duration of response were not reached, indicating promising antitumor activity, although hand–foot skin reaction was a treatment-related adverse event (in 47.5% of patients).
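Intervals like the 95% CI = 27.0–69.0% quoted above can be approximately reproduced from the raw counts. Assuming the denominator was 25 evaluable patients (an inference consistent with 12/25 = 48.0%, not stated explicitly above), a Wilson score interval gives a similar range; the exact (Clopper–Pearson) method often reported by trials is slightly wider:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 gives an approximate 95% interval)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 12 responders out of an assumed 25 evaluable patients (hypothetical denominator)
lo, hi = wilson_ci(12, 25)
print(f"Wilson 95% CI for 12/25: {lo:.1%} to {hi:.1%}")
```

For 12/25 this gives roughly 30–67%, close to the quoted 27–69%; the residual difference is consistent with the trial using an exact rather than a score-based interval.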
Despite the COVID-19 pandemic, the results presented here show the many therapeutic advances made in 2021 in the field of gynecological cancers (cervical, endometrial, and ovarian). summarizes the FDA approvals in gynecological cancers in 2021. Translational research, focused on the results of preclinical studies, will further lead to the clinical integration of information obtained in the laboratory into phase II and III studies, establishing an important basis and key research priorities for the future. As the quest continues for better treatments targeting novel and exploitable genetic and biologic abnormalities in cervical, endometrial, and ovarian malignancies, oncologists must be prepared to confront the challenge of achieving clinically substantial improvements in gynecologic oncology patients’ outcomes.
Enhanced sensitivity and scalability with a Chip-Tip workflow enables deep single-cell proteomics
Multicellular organisms are composed of specialized tissues that consist of various cell types, each performing distinct functions. The unique properties of each cell type arise from the interaction of their genetic information with internal factors. Studying individual cells is critical since cells can exhibit diverse behaviors under identical external environments. Single-cell RNA sequencing has transformed cell biology by providing a granular view of gene expression patterns, spatial cell architecture, cellular heterogeneity and dynamic cellular responses. However, messenger RNAs (mRNAs) serve as an intermediate step in gene expression and do not directly reflect cellular activity. Studying proteins provides a more direct and comprehensive understanding of cellular functions, regulatory mechanisms and disease processes compared to studying mRNA changes alone. Protein analysis captures post-translational modifications (PTMs), protein diversity and functional aspects that are not reflected in mRNA analyses. Single-cell proteomics (SCP) is emerging as the next frontier in proteomics and has already enhanced our understanding of cellular differentiation and diseases by allowing for the direct measurement of single-cell proteomes and their PTMs. This capability is instrumental in delineating the functional phenotypes within cell populations, elucidating cellular and embryonic development, predicting disease trajectories and pinpointing specific surface markers and potential therapeutic targets unique to each cell type. However, SCP is nascent and faces challenges including limited sequence depth, throughput and reproducibility, constraining its broader utility. This study introduces key methodological advances, which considerably improve the sensitivity, coverage and dependability of protein identification from single cells. SCP predominantly uses two main quantitative techniques: label-free or multiplexed analysis.
Label-free quantification (LFQ) analysis presents the simplest SCP workflow: cells are lysed to extract proteins, which are then digested into peptides, separated via liquid chromatography (LC) and analyzed using mass spectrometry (MS), thus identifying and quantifying peptides from single cells. In a pioneering study, Li et al. developed a nanoliter-scale oil-air-droplet chip and achieved increased analytical sensitivity for SCP using label-free shotgun proteomics. Multiplexed analysis, on the other hand, uses isobaric labeling, allowing for the simultaneous analysis of multiple samples by chemically tagging peptides with unique stable isotope encoded mass tags. The recent innovation of nonisobaric multiplexed data-independent acquisition (plexDIA or mDIA) has combined the strengths of DIA with multiplexing, improving protein quantification rates and accuracy without the ratio compression problems associated with tandem mass tags. State-of-the-art SCP, identifying around 1,000–2,000 protein groups per cell and 1,500–2,500 proteins across cells, required improvements in MS sensitivity and sample preparation. Yet, the loss of peptides during sample preparation and analysis due to protein adsorption loss, chemical modifications and ion manipulation remains a challenge. To address these challenges, we developed a nearly lossless LFQ-based SCP method, Chip-Tip, which identifies over 5,000 proteins and 40,000 peptides in single HeLa cells. Our workflow involves single-cell dispensing and sample preparation using the cellenONE with a proteoCHIP EVO 96 and direct transfer to disposable Evotip trap columns, with subsequent analysis using the Evosep One LC with Whisper flow gradients coupled to narrow-window DIA (nDIA) on the Orbitrap Astral mass spectrometer (Fig. ). Our study includes a systematic evaluation of database search tools and an error-rate estimation using an entrapment approach, ensuring reliable data analysis.
Furthermore, our method has enabled the direct investigation of PTMs in single-cell proteomes, achieving deep coverage of phosphorylation and glycosylation without prior enrichment. The application of this technique to spheroid samples with a new dissociation buffer underscores its robustness, offering important insights into the proteomic intricacies of individual cells and their implications for biological processes and disease states. Finally, we studied undirected differentiation of human-induced pluripotent stem cells (hiPSCs) into multiple cell types through embryoid body (EB) induction. We identified up to 4,700 proteins in hiPSCs and 6,200 in cells from EBs and quantified low-abundant stem cell transcription factors in hiPSCs and different cell lineage markers in EBs.

Roadmap to deep and high-throughput LFQ-SCP using Chip-Tip

Key considerations for SCP sample preparation workflows include minimizing surface adsorption losses by keeping samples as concentrated as possible, reducing buffer evaporation and pipetting steps for reproducibility, and maximizing detection sensitivity. Consequently, we focused on developing a SCP workflow characterized by ultra-high sensitivity (Fig. and Supplementary Table ). Initially, cells were isolated and processed using a one-pot technique on the cellenONE X1 platform (Extended Data Fig. ). A key innovation is the proteoCHIP EVO 96, designed for single-cell sample preparation, which operates with minimized volumes at the nanoliter level, enabling simultaneous proteomics sample preparation of up to 96 cells in parallel (Extended Data Fig. ). This chip is precisely tailored for compatibility with the Evosep One LC system, enabling a streamlined sample transfer process free from additional pipetting steps.
For analysis by LC with tandem MS (LC–MS/MS), we combined the Whisper flow methods on the Evosep One LC with high-precision IonOpticks nanoUHPLC columns to maximize sensitivity and chromatographic performance (Extended Data Fig. ). Our recently introduced nDIA method, applied on the Orbitrap Astral mass spectrometer, greatly amplifies the sensitivity and efficiency of the SCP analysis. To optimize the nDIA approach for the Orbitrap Astral, which had not previously been used for label-free single-cell analysis, we initially assessed different nDIA methods, examining different quadrupole isolation windows and correspondingly scaled ion injection times for DIA–MS/MS scans. Through this comparative analysis, we determined the optimal nDIA method and found that using 4-Th DIA windows and a 6-ms maximum injection time (4Th6ms) resulted in the highest proteome coverage, leading to the identification of a median of 5,204 proteins in single HeLa cells and >6,000 proteins in one of the HeLa cell preparations (Fig. ). We observed that as the injection time increased in the 8Th12ms and 16Th24ms methods, there was a corresponding decline in the number of proteins identified, likely attributable to an increase in chemical noise signals within the analyzer. When we expanded our method to process larger cell quantities, we identified more than 7,000 proteins from a batch of only 20 cells. The achieved depth at the peptide level was particularly remarkable, with median identifications of 41,700 peptides for single-cell samples and 98,054 peptides for samples from 20 cells (Fig. ), resulting in median protein sequence coverage of 12.9% for single cells and 25% for 20-cell samples (Fig. ). This profound peptide coverage facilitated highly accurate protein quantification, with intensity-based absolute quantitation (iBAQ) values exhibiting an extensive dynamic range spanning several orders of magnitude (Fig. and Extended Data Fig. ).
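iBAQ, used above for absolute abundance estimation, divides a protein's summed peptide intensity by its number of theoretically observable tryptic peptides. The following is a minimal sketch of that calculation with a hypothetical protein sequence and intensities; real implementations also account for missed cleavages, the proline rule, and peptide mass limits:

```python
def tryptic_peptides(sequence: str, min_len: int = 7, max_len: int = 30):
    """Naive in-silico tryptic digest: cleave after K/R, keep peptides
    within a typical observable length range (no missed cleavages)."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        if aa in "KR":
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return [p for p in peptides if min_len <= len(p) <= max_len]

def ibaq(peptide_intensities, sequence):
    """iBAQ = summed peptide intensity / number of theoretically
    observable peptides for the protein."""
    n_obs = len(tryptic_peptides(sequence))
    return sum(peptide_intensities) / n_obs if n_obs else 0.0

# hypothetical sequence and measured intensities, for illustration only
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVK"
print(ibaq([2.0e6, 1.5e6, 0.5e6], seq))
```

Because iBAQ normalizes by the count of observable peptides rather than raw intensity alone, it makes abundance estimates comparable across proteins of very different sizes, which is what enables the near-linear scaling across cell numbers described above.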
The relative protein abundance estimates also demonstrated an almost linear relationship of iBAQ values across samples with varying cell numbers (Fig. ). Our comprehensive SCP profiles included proteins from all subcellular localizations , with a notable identification of over 200 proteins on plasma membranes, emphasizing the robustness of the method (Fig. ).

Exploring the carrier proteome effect in label-free SCP

In LFQ-based SCP, each cell is analyzed independently by LC–MS/MS with nDIA, free from the signal interferences characteristic of multiplexed SCP. However, the proteomic profiles identified from single cells can be largely influenced by the strategy employed for spectra-to-database matching with a peptide search engine. Two strategies are currently employed to enhance identification numbers using two popular DIA database search tools, Spectronaut and DIA-NN . In Spectronaut, this is the directDIA+ or spectral library-free approach, which includes searching alongside matched samples of higher quantities (such as 1-ng digests or those from 20 cells), whereas in DIA-NN it is the match-between-runs (MBR) feature, similarly paired with higher quantity samples. While these search strategies differ between tools, both can elevate identification numbers in single-cell samples by incorporating data from matched higher quantity samples. These methods are widely applied in SCP, yet their precise effects remain to be fully understood.
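When benchmarking runs and search tools against each other, quantitative agreement is typically summarized as a Pearson correlation of log-transformed protein intensities over the proteins quantified in both runs. A minimal, stdlib-only sketch (function and variable names are illustrative):

```python
import math

def log_pearson(quant_a, quant_b):
    """Pearson correlation of log10 intensities over proteins quantified
    in both runs (proteins missing from either run are skipped)."""
    shared = [p for p in quant_a if p in quant_b]
    xs = [math.log10(quant_a[p]) for p in shared]
    ys = [math.log10(quant_b[p]) for p in shared]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)
```

A constant fold change between two runs leaves the log-space correlation at 1, which is why this metric isolates relative quantification consistency from absolute intensity differences.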
Further investigation into the carrier proteome effect in Spectronaut revealed that as the number of single-cell files in a search increased, the number of identifications also rose incrementally. When carrier proteomes were included, there was a marked enhancement in identifications, increasing from around 4,000 to approximately 5,000 (Fig. and Extended Data Fig. ). Analysis of the abundance of proteins identified through different search strategies indicated that searches incorporating more cells could discern proteins of lower abundance, with the carrier proteome enabling the identification of the least abundant proteins (Fig. ). To compare the efficacy of these search strategies, we examined two separate runs from single-cell samples (Fig. ) and from 20-cell samples (Fig. ), contrasting their outcomes in DIA-NN and Spectronaut. The results showed a big overlap in identifications, suggesting consistency across different cells and search tools. However, the protein quantification correlation between the two cells was marginally higher in Spectronaut with R = 0.91 compared to DIA-NN with R = 0.89 (Fig. ). Conversely, the correlation between Spectronaut and DIA-NN for the same cell was not as high with R = 0.83 (Fig. ). The ultra-low abundance of single-cell proteomes raises questions about the confidence in peptide and protein identifications, particularly with the increased numbers achieved using the Orbitrap Astral mass spectrometer. To address these concerns, we conducted a false discovery rate (FDR) benchmark by using an entrapment database search strategy with both Spectronaut and DIA-NN to empirically estimate the error rate of identifications . We initiated this by conducting an entrapment experiment with a 1× larger shuffled human mimic protein database, analyzing single-cell samples in combination with 20-cell samples serving as a carrier proteome with default FDR parameter settings of 0.01 at protein level. 
The resulting empirical FDR at the protein level, corrected for the mimic database size, was estimated at approximately 3% in Spectronaut and 1% in DIA-NN (Extended Data Fig. ).

Faster LC methods and FAIMS further enhance SCP performance

Recognizing the throughput limitation in MS as a major bottleneck for label-free SCP, we developed a solution that increases the throughput from 40 samples per day (SPD) to 80SPD and even 120SPD on the Evosep One LC system with the Whisper Zoom method. This optimized LC method not only improves throughput but also enhances chromatographic performance and maintains a high level of sensitivity, identifying >4,500 proteins in individual HeLa cells with the 80SPD and 120SPD methods (Fig. ). It should also be noted that the 80SPD and 120SPD methods use a 5 cm analytical LC column, while the 40SPD method uses a 15 cm column, which contributes to the slight difference in performance. Furthermore, we tested for potential contamination and carryover throughout the workflow by injecting samples prepared without a single cell. These blank samples, containing only Master Mix buffer and prepared identically to real single-cell samples, resulted in very few protein identifications when analyzed alongside real single-cell samples (Fig. ), demonstrating the robustness and low contamination risk of our workflow. Next, we evaluated another setup using the Vanquish Neo LC and Orbitrap Astral equipped with a front-end high field asymmetric waveform ion mobility spectrometry (FAIMS) interface to minimize background interference. Single cells were isolated using the cellenONE and prepared in a low-bind 96-well plate for direct MS sample injection. Using a single compensation voltage of −48 V, the FAIMS Pro Duo interface further enhanced analytical performance, resulting in the identification of over 6,500 proteins in single HeLa cells (Fig. ).
A key consideration with this well-plate format preparation and direct injection into the LC–MS system is the absence of a sample cleanup step, which could potentially affect the cleanliness, robustness and longevity of the LC–MS systems. Despite this, blank samples from this workflow also demonstrated contamination-free performance (Fig. ).

In-depth PTM analysis without enrichment in SCP

The exceptional depth achieved at the peptide precursor level in our SCP datasets not only enhances protein identification and quantification but could also unveil a wealth of information pertaining to PTMs. Owing to the substoichiometric nature of PTMs, global analysis of any PTM by LC–MS/MS usually requires specific enrichment of the PTM-bearing peptides before MS analysis. However, with the high peptide coverage achieved in SCP, we speculated that identification of PTMs without specific enrichment could be possible. To test this hypothesis, we examined the prevalence of two key PTMs of high cellular importance, phosphorylation and glycosylation, within the single-cell proteome samples, bypassing any specific enrichment processes. Focusing on the enzymes catalyzing the transfer of phosphate groups from ATP to target proteins, we quantified 168 protein kinases within single cells, encompassing all principal kinase families (Fig. ). Notably, kinases such as CDK1 from the CMGC group and MAPK1 from the STE group exhibited the highest abundances, whereas tyrosine kinases were less abundant. Subsequently, we conducted a database search for serine, threonine and tyrosine phosphorylation as a variable modification in single-cell samples, using 20-cell samples as a carrier proteome. Although we did not apply any specific cell perturbation or sample treatment to preserve the phosphorylation sites, this search approach led to the confident identification of a median of 120 phospho-Ser, 28 phospho-Thr and 13 phospho-Tyr sites in single cells, with high site localization probabilities (Fig.
and Extended Data Fig. ). An average of 114 proteins were identified as phosphoproteins; these are mostly involved in nucleosome assembly and organization (Extended Data Fig. ). A sequence logo analysis highlighted prevalent phosphorylation motifs such as the proline-directed SP motif, corresponding to substrates of abundant kinases such as CDKs and aligning closely with our kinome data (Fig. ). An extracted ion chromatogram (XIC) screening of DIA–MS/MS spectra for the selective mass-deficient immonium ion of phospho-Tyr at m/z 216.042 showed intense signals across the entire LC elution profile (Fig. ). We also investigated protein glycosylation patterns and detected multiple glycosyltransferases across all glycosylation pathways (Fig. ). Using a strategy akin to the phospho-Tyr immonium ion screening, we performed oxonium ion screening , for common glycans, including the monosaccharides HexNAc (Fig. ), NeuAc (Fig. ) and the disaccharide Hex-HexNAc (Fig. ). All glycans appeared abundantly and were present in most MS2 spectra, even when using nDIA. The screening of both immonium and oxonium ions indicated a pervasive presence of PTMs such as protein phosphorylation and glycosylation in single cells. Nevertheless, the precise identification of the modified peptides remains challenging due to current limitations in database search algorithms.

SCP analysis of spheroid post-5-FU exposure

To demonstrate the applicability of our SCP method at 80SPD, we next aimed to capture the transformative effects of the chemotherapeutic drug 5-fluorouracil (5-FU) on colorectal cancer cells grown as spheroids. Spheroids subjected to 5-FU demonstrated increased disintegration over time after a 20-minute treatment with a recently developed disaggregation buffer, whereas control spheroids retained their compact structure, indicative of the impact of 5-FU on cell cohesion (Fig. ). After single-cell analysis with the 80SPD methods, we identified >2,500 proteins in total.
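The immonium and oxonium ion screening used in the PTM analysis above amounts to matching each DIA–MS/MS spectrum against a small set of diagnostic fragment masses within a ppm tolerance. A minimal sketch; the m/z values are standard monoisotopic references from the literature (the paper cites phospho-Tyr at m/z 216.042), and the 10-ppm tolerance is an illustrative choice:

```python
# Diagnostic fragment ions (monoisotopic m/z, literature reference values).
DIAGNOSTIC_IONS = {
    "immonium_pTyr": 216.0426,      # phosphotyrosine immonium ion
    "oxonium_HexNAc": 204.0867,     # HexNAc oxonium ion
    "oxonium_NeuAc": 292.1027,      # NeuAc oxonium ion
    "oxonium_Hex-HexNAc": 366.1395, # Hex-HexNAc oxonium ion
}

def screen_spectrum(mz_array, intensity_array, tol_ppm=10.0):
    """Return summed intensity per diagnostic ion found within a ppm
    tolerance in one MS2 spectrum; applied across all spectra of a run,
    this yields an XIC-style trace for each diagnostic ion."""
    hits = {}
    for name, ref_mz in DIAGNOSTIC_IONS.items():
        tol = ref_mz * tol_ppm / 1e6
        signal = sum(i for mz, i in zip(mz_array, intensity_array)
                     if abs(mz - ref_mz) <= tol)
        if signal > 0:
            hits[name] = signal
    return hits
```

Peaks outside every tolerance window are ignored, so only spectra containing a diagnostic ion contribute to the trace.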
Analysis of the 5-FU activation pathway revealed a slight upregulation of TYMP, crucial for converting 5-FU into its DNA-incorporating active form; such changes are important indicators of drug efficacy (Fig. ). The hierarchical clustering of gene ontology (GO) terms underscores the involvement of biological processes such as cyclase cytoplasmic activity and ribose purine synthesis, which are directly affected by the mechanism of 5-FU action on ribosomal RNA (rRNA) synthesis, with the ribosome and purine and/or pyrimidine synthesis being the main targets of 5-FU (Fig. ). The representation of these pathways aligns with the known impact of 5-FU on nucleotide synthesis pathways. An UpSet plot of the altered GO terms further dissects the effects of 5-FU, detailing which processes are most affected by the treatment and their regulatory trends (Fig. ). The alteration in proteins associated with these GO terms highlights specific proteins such as ADCY, which contributes to pyrophosphate formation, and keratin, integral to spheroid structural integrity and to changes in filament organization during the stages of apoptosis. These findings suggest a targeted disruption by 5-FU of spheroid stability and purine metabolism, a reflection of the ability of the drug to interfere with key cellular functions (Fig. ).

hiPSC differentiation

Since our workflow allows us to quantify more than 5,000 proteins in single HeLa cells, this analytical depth is sufficient to obtain information on specific cell markers. Thus, we next studied undirected differentiation of hiPSCs into multiple cell types through EB induction, mimicking the characteristics of early-stage embryos. A challenge for stem cell research by SCP is that certain embryonic transcription factors such as OCT4 and NANOG are translated at very low copy numbers yet are fundamental to the cellular identity of hiPSCs, so analytical depth is paramount.
This is also the case for cell-specific lineage markers, which start appearing upon hiPSC differentiation and are sometimes present in only a handful of cells in the population. We quantified up to 4,700 proteins in hiPSCs and 6,200 in large cells from EBs (Supplementary Table and Extended Data Fig. ). Principal component analysis (PCA) showed a clear separation between hiPSCs and EBs, and the distance between some EB cells was on par with the distance between EBs and hiPSCs, suggesting that EB cells can differ greatly from each other (Fig. ). We consistently quantified OCT4 in hiPSCs and in some of the EB cells, demonstrating that we can indeed detect sparsely expressed transcription factors even in single cells. We detected SOX2, another stem cell marker that we found to be highly expressed in hiPSCs, and also detected multiple lineage markers such as GATA4 (endoderm), HAND1 (mesoderm) and MAP2 (ectoderm). These markers were more highly expressed in EB cells, but with great variability, consistent with the fact that these cells differentiate into the three germ layers (Fig. ). We also observed an early increase in SBDS protein abundance in most EB cells upon differentiation (Extended Data Fig. ), a known feature of early pluripotent stem cell differentiation . Finally, OCT4 and SOX2 showed significantly higher abundance in hiPSCs compared to EB cells, as expected (Fig. ). To study global differences between EBs and hiPSCs and between different EB cells, we performed unsupervised hierarchical clustering, which highlighted six different cell clusters, one of which was composed only of hiPSCs, in line with the separation observed by PCA. Next, we performed statistical analysis using analysis of variance (ANOVA) between cell clusters and performed unsupervised hierarchical clustering of the proteins that showed significant differences ( P < 0.05) (Fig. ). Finally, we performed GO term enrichment on the protein clusters highlighted by this analysis.
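The per-protein ANOVA across cell clusters can be illustrated with the one-way F statistic; a stdlib-only sketch (in practice, a library routine such as scipy.stats.f_oneway supplies the p-values used for a P < 0.05 cutoff):

```python
def one_way_f(groups):
    """One-way ANOVA F statistic for one protein: groups is a list of
    lists of log-intensities, one list per cell cluster."""
    k = len(groups)                       # number of clusters
    n = sum(len(g) for g in groups)       # total number of cells
    grand = sum(x for g in groups for x in g) / n
    means = [sum(g) / len(g) for g in groups]
    # between-cluster and within-cluster sums of squares
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Identical group means give F = 0, while large between-cluster differences relative to within-cluster spread give large F values, which are then converted to p-values via the F distribution.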
Various cell functions were enriched, highlighting physiological differences between the cell clusters, such as differences in Wnt signaling, respiration, cell cycle, DNA replication, mRNA and rRNA processing, mitochondrial function, and cell adhesion and migration. Human pluripotent stem cells differ from most cells in several biological features, namely: metabolism, as they rely mostly on glycolysis even in the presence of oxygen, while differentiated cells usually employ oxidative phosphorylation , ; cell cycle, as their cell cycle is shortened and they divide faster than most other cell types ; chromatin state, as they maintain their chromatin in an open state ; and adhesion to the extracellular matrix, where they require specific proteins, such as laminin 521, to adhere, migrate and maintain their pluripotent state (ref. ). These features were also displayed in the GO enrichment, which highlighted lower levels of proteins involved in aerobic respiration and mitochondrial respiratory chain complex I assembly (cluster 7) and lower levels of adhesion proteins (cluster 12), but higher levels of proteins involved in cell cycle, DNA replication and chromatin remodeling in hiPSCs compared to EB cells (cluster 9). Therefore, our analysis can deliver valuable biological insight into stem cell differentiation, both in terms of general pathways and in terms of specific, low-abundance biomarkers.
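The GO term enrichment applied to protein clusters is commonly computed as a one-sided hypergeometric (Fisher) test against the quantified background proteome; a minimal sketch (argument names are illustrative, and real analyses additionally correct for multiple testing across GO terms):

```python
from math import comb

def hypergeom_enrichment_p(n_hits, cluster_size, n_annotated, background_size):
    """P(X >= n_hits) when drawing cluster_size proteins without replacement
    from a background of background_size proteins, of which n_annotated
    carry the GO term: the one-sided hypergeometric enrichment p-value."""
    total = comb(background_size, cluster_size)
    p = 0.0
    for k in range(n_hits, min(cluster_size, n_annotated) + 1):
        p += (comb(n_annotated, k)
              * comb(background_size - n_annotated, cluster_size - k)) / total
    return p
```

A small p-value indicates that the cluster contains more proteins annotated with the term than expected by chance, given the background.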
Our study marks a substantial advance in the field of SCP, with identifications increasing from approximately 2,000 proteins to 5,000–6,500 proteins per single cell. This underscores how rapidly proteomics technology is evolving, empowering researchers to delve deeper into the molecular intricacies of individual cells. The combination of near-lossless single-cell sample preparation using the cellenONE with the ultra-high sensitivity provided by the Evosep One Whisper flow gradient and nDIA on the Orbitrap Astral is a powerful setup enabling SCP with high coverage and robustness. The rigorous evaluation of software tools and the implementation of FDR strategies were pivotal in bolstering the confidence of our protein identifications. Through careful analysis, we discerned the subtleties and strengths of various computational approaches, ensuring the accuracy of our findings and reinforcing the validity of our data. Notably, we demonstrated the feasibility of PTM analysis without the prerequisite of specific enrichment protocols. This advancement paves the way for a more streamlined and efficient exploration of PTM landscapes, revealing the multifaceted regulatory mechanisms at play within the cell. The introduction of a spheroid-specific dissociation buffer also proved influential, enhancing the dissociation efficiency of spheroids and bolstering the robustness of our proteomic analysis. Despite these advancements, we recognize that throughput in MS remains a major bottleneck, currently capped at 120 samples per day (SPD). We propose that this limitation could be mitigated by integrating nDIA with multiplexed approaches, including tandem mass tags and multiplexed DIA. This combinatorial strategy holds the potential to increase throughput and analytical depth substantially.
Meanwhile, exciting progress has been made in non-MS techniques for identifying and potentially sequencing individual proteins , . These methods draw inspiration from nucleic acid sequencing technologies and include single-molecule peptide sequencing through Edman degradation or aminopeptidases in flow cells, as well as nanopore sequencing adapted for proteins. While our results span a dynamic range of several orders of magnitude, it is important to note that the entire dynamic range is not fully quantitative. Moreover, although our SCP workflow has shown high reproducibility and effective protein identification, other aspects, such as the processing volume and the material of the chip, could affect sensitivity and should be investigated further. Specifically, minimizing surface adsorption losses and buffer evaporation is crucial for ensuring the integrity and sensitivity of single-cell analyses. To demonstrate the universality and robustness of our Chip-Tip method, we tested it on an Orbitrap Fusion Lumos system using HEK293T (human embryonic kidney 293T) cells, achieving approximately 3,000 protein identifications when analyzing the samples with DIA using Orbitrap detection. Moreover, another study has shown that the Chip-Tip method also performs well on the timsTOF Ultra instrument from Bruker, identifying up to 4,000 proteins per single HEK293T cell (ref. ). These results underscore the versatility of our method across commercial instruments, further validating its performance. Our application of SCP analysis to EB induction from hiPSCs demonstrates the power of this technology for addressing biologically relevant questions. This workflow is poised to become a cornerstone of future SCP studies of the complex dynamics of cellular function and disease. We foresee the integration of single-cell genomic, proteomic and other omics measurements as a transformative approach.
We are on the verge of an era where multi-omics experiments will become the norm, and their synergistic application promises to provide a more comprehensive understanding of cellular states, particularly in disease contexts.

Cell lines

HeLa cells were cultured in DMEM (Gibco, Invitrogen), supplemented with 10% fetal bovine serum, 100 U ml −1 penicillin (Invitrogen) and 100 μg ml −1 streptomycin (Invitrogen), at 37 °C in a humidified incubator with 5% CO 2 . Cells were collected at roughly 80% confluence by washing three times with PBS (Gibco, Life Technologies). Cells were then resuspended in degassed PBS at 200 cells per µl for isolation within the cellenONE.

Spheroid formation

Before spheroid formation, HCT116 cells were seeded on P15 plates and grown to a confluence of 70–90%. Subsequently, cells were washed with PBS (Gibco, Life Technologies) and detached from the plate with trypsin. The cells were then counted using a Corning Cell Counter (Sigma-Aldrich). For multicellular spheroid generation, 7,000 cells were seeded on ultra-low attachment 96-well plates (Corning CoStar, Merck). The spheroids were cultured for 72 h at 37 °C in a humidified incubator with 5% CO 2 . Cell medium was refreshed after 48 h by aspirating half the old medium (taking care not to disturb the spheroid) and adding the same amount of fresh medium. Subsequently, the spheroids were treated for 24 h with 2 μM 5-FU (previously identified as the half-maximum inhibitory concentration (IC 50 ) for this cell line). After 24 h, the spheroids were transferred to an Eppendorf tube using a p1000 pipette and washed three times with ice-cold PBS. The spheroids were disaggregated by incubating in Dissociation Buffer for Spheroids (Cellenion SASU) with shaking for 15 min at room temperature, followed by 10 min of shaking at 4 °C. The Eppendorf tube was also gently shaken to aid mechanical disintegration.
Following this, 10 μl of cell solution was diluted in 490 μl of cold PBS before sample preparation in the cellenONE.

hiPSC culture and EB formation

hi12 (ref. ) iPSCs were cultured at 37 °C and 5% CO 2 in a humidified incubator (Thermo Fisher Scientific) on LN-521-coated dishes (BioLamina) in NutriStem medium (Sartorius). For EB induction, the cells were detached using TrypLE Express (Thermo Fisher Scientific), resuspended in NutriStem supplemented with 10 µM Y-27632 and cultured in suspension on low-adhesion flasks. After 1 week in suspension, the EBs were plated in gelatin-coated culture dishes in DMEM medium (Gibco) supplemented with 20% (vol/vol) fetal bovine serum (Gibco), 2 mM l -glutamine and 1% (wt/vol) nonessential amino acids (Gibco). After a further week, the plated EBs were dissociated into a single-cell suspension using TrypLE Express, along with hi12 cells, and sorted using the cellenONE for SCP analysis (described below).

Sample preparation with Chip-Tip

Sample lysis and digestion are performed within the proteoCHIP EVO 96 inside the cellenONE (Cellenion SASU). The chip can be manufactured from either polypropylene or Teflon. The process begins with the manual deposition of 2 µl of immiscible hexadecane oil into each well of the chip, which is then positioned on the target plate. The entire system is cooled to 8 °C to ensure that the hexadecane oil layer solidifies. Following this, 300 nl of a master mix consisting of 0.2% DDM (D4641-500MG, Sigma-Aldrich), 100 mM TEAB, 20 ng µl −1 trypsin and 10 ng µl −1 lys-C is dispensed into each well. The cellenONE module then sorts individual cells, selected on morphological criteria (diameter range of 22–30 µm and a maximum elongation factor of 1.6), into each well. The proteoCHIP EVO 96 is then subjected to a controlled incubation phase at 50 °C with 85% relative humidity for 1.5 h within the instrument’s environment.
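From the master-mix concentrations and the 300 nl dispensed per well, the absolute enzyme load per well follows directly. A small worked-arithmetic helper (the function name is ours, for illustration):

```python
def ng_per_well(conc_ng_per_ul, dispensed_nl):
    """Mass of a reagent delivered per well, given its master-mix
    concentration (ng/µl) and the dispensed volume (nl)."""
    return conc_ng_per_ul * dispensed_nl / 1_000  # convert nl -> µl

print(ng_per_well(20, 300))  # trypsin: 6.0 ng per well
print(ng_per_well(10, 300))  # lys-C:   3.0 ng per well
```

So each single cell is digested with roughly 6 ng trypsin and 3 ng Lys-C, which illustrates why evaporation control matters: losing water from a 300 nl droplet would quickly change these effective concentrations.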
As the melting point of hexadecane is 18.2 °C, the aqueous sample solution is covered by the hexadecane oil layer on top, preventing evaporation. This maintains constant enzyme and chemical concentrations, and the oil layer thereby helps to ensure reproducible processing efficiency. After incubation, the system’s temperature is reduced to 20 °C to stabilize the conditions postreaction. On completion of the incubation, the proteoCHIP EVO 96 is taken out of the cellenONE and processed further. This involves the manual addition of 4 µl of 0.1% formic acid to each well, followed by a chilling period at 4 °C to refreeze the oil. In parallel, the Evotips (Evosep Biosystems) are prepared in line with the vendor’s guidelines, which include a series of rinsing, conditioning and equilibrating steps with specified solvents to prepare them for sample uptake. Afterward, another 15 µl of 0.1% formic acid is introduced to each Evotip, and the proteoCHIP is promptly inverted onto the Evotips and centrifuged at 800 g for 20 s at 4 °C. Once the proteoCHIP EVO 96 is removed, the Evotips undergo a customized procedure that deviates slightly from the standard vendor’s protocol. The samples are first loaded onto the Evotips by centrifugation at 800 g for 60 s, ensuring that the peptides are fully captured by the tip matrix. After loading, the Evotips are washed with 20 µl of Solvent A, and the centrifugation step is repeated for another 60 s at 800 g to remove any nonspecifically bound substances, thereby increasing the purity of the captured peptides. The final step involves adding 100 µl of Solvent A to the Evotips and spinning them for a brief 10 s at 800 g . The samples are then ready for LC–MS/MS analysis.

Sample preparation for the 96-well plate workflow

Using a standard 96-well plate (Eppendorf twin.tec PCR Plate 96 LoBind), HeLa cells were diluted with degassed PBS to 100–200 cells per μl.
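Suspensions are repeatedly brought to target concentrations in this workflow (200 cells per µl for sorting above, 100–200 cells per µl here). A hypothetical helper for the underlying dilution arithmetic, assuming no cell loss on dilution:

```python
def diluent_volume_ul(total_cells, target_cells_per_ul, current_volume_ul):
    """PBS volume (µl) to add so that total_cells end up at the target
    concentration. Assumes no cell loss on dilution."""
    final_volume = total_cells / target_cells_per_ul
    add = final_volume - current_volume_ul
    if add < 0:
        raise ValueError("suspension is already more dilute than the target")
    return add

# e.g., 1e6 cells in 1,000 µl brought to 200 cells/µl (final 5,000 µl):
print(diluent_volume_ul(1_000_000, 200, 1_000))  # 4000.0
```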
The 96-well plate was placed inside the cellenONE, where 1 µl of master mix buffer was automatically injected into each well, and the plate was then positioned on the target plate. To determine suitability for SCP analysis, 5–10 µl of cells were picked up by the cellenONE via a standard autosampler to assess morphology, density and diameter, targeting cells with a diameter of 15–35 µm. If the cells met these criteria, the cellenONE module sorted individual cells, based on the diameter range and an elongation factor no larger than 1.8, into the wells. After sorting, 500 nl of master mix buffer was injected into each well by the cellenONE via a standard autosampler. The 96-well plate was then subjected to a controlled incubation phase at 50 °C with 85% relative humidity for 1 h within the instrument environment, with an automatic cycle system adding 500 nl of water to each well until the process was complete. After incubation, the temperature was reduced to 20 °C to stabilize the conditions postreaction. On completion, the 96-well plate was removed from the cellenONE and processed further, with 3.5 µl of 0.1% TFA/1% DMSO manually added to the wells. The plates were then sealed with matching 96-well plate covers and placed in the Vanquish Neo, allowing direct injection of 4 µl ready for MS analysis.

LC–MS/MS

The LC–MS/MS analysis was conducted using an Orbitrap Astral mass spectrometer (Thermo Fisher Scientific) coupled with an Evosep One chromatography system (Evosep Biosystems). The sample runs were set up for a 40SPD, 80SPD or 120SPD Whisper protocol. We used Aurora Elite TS analytical columns (15 cm × 75 µm, IonOpticks) with an EASY-Spray ion source (Thermo Fisher Scientific) for the 40SPD method. A rapid column (5 cm × 75 µm, IonOpticks) was used for the 80SPD and 120SPD methods with a Nanospray Flex ion source (Thermo Fisher Scientific).
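The SPD labels of the Whisper protocols encode a per-sample time budget: 24 h of instrument time divided by the samples-per-day figure. The sketch below gives this nominal cycle time; actual gradients are shorter, since loading and overhead share the same budget (our simplification).

```python
def minutes_per_sample(spd):
    """Nominal per-sample cycle time (min) implied by a samples-per-day
    (SPD) method: 24 h of instrument time divided evenly."""
    return 24 * 60 / spd

for spd in (40, 80, 120):
    print(spd, minutes_per_sample(spd))  # 36.0, 18.0 and 12.0 min
```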
The Orbitrap Astral operated with a resolution setting of 240,000 for full MS scans across a mass-to-charge range of 380 to 980 m/z . The automatic gain control (AGC) for full MS was adjusted to 500%. MS/MS scans employed various nDIA methods with isolation windows and ion injection times tailored to the specificity of the analysis—these included settings such as 2Th3ms, 4Th6ms, 8Th12ms, and 16Th24ms. The MS/MS scanning spanned the same m/z range of 380 to 980. Fragmentation of the isolated ions was carried out using higher-energy collisional dissociation set at a normalized collision energy of 27%. For the 40SPD gradient, single-cell samples were distributed among the different nDIA methods: 12 for 2Th3ms, 12 for 4Th6ms, 24 for 8Th12ms and 12 for 16Th24ms. Samples composed of 10, 20 and 40 cells were analyzed in sets of four using the 2Th3ms, 4Th6ms and 8Th12ms methods, respectively. For the analysis with Orbitrap Astral coupled with Vanquish Neo LC, the sample runs were configured for 60 SPD, with each run having a total gradient of 14 min and not exceeding 10 min for equilibrating and loading using Vanquish Neo. The analysis used Aurora Elite TS analytical columns and was interfaced online using an EASY-Spray source. Additionally, FAIMS Pro Duo was employed to reduce background interference, enhancing the identification sensitivity of low-abundance peptides. The detailed MS parameters were as follows: the ion source parameter had a spray voltage of 1.9 kV and a capillary temperature of 270 °C, with FAIMS compensation voltage set to −48 V. For Orbitrap MS full scans, the resolution was 240,000, the normalized AGC target was 500%, the maximum injection time was 100 ms, the RF lens was 45% and the scan range was 400–800 m/z . 
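The nDIA method names above pair an isolation width with an ion injection time (for example, 2Th3ms means 2 Th windows with 3 ms injection). A rough estimate of what each setting implies per cycle over the 380–980 m/z range, ignoring MS1 scans and inter-scan overhead (our simplification):

```python
import math

def dia_cycle(precursor_range_th, window_th, inject_ms):
    """Number of isolation windows needed to tile the precursor range,
    and the total MS2 injection time they consume per cycle (ms).
    MS1 scans and inter-scan overhead are ignored."""
    n_windows = math.ceil(precursor_range_th / window_th)
    return n_windows, n_windows * inject_ms

for width, inject in [(2, 3), (4, 6), (8, 12), (16, 24)]:
    print(width, dia_cycle(980 - 380, width, inject))
```

Note that all four settings land near the same roughly 0.9 s MS2 budget per cycle (300 × 3 ms, 150 × 6 ms, 75 × 12 ms, 38 × 24 ms): halving the number of windows doubles the per-window injection time, trading precursor selectivity for sensitivity at constant cycle time.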
For Astral DIA MS2 scans, the precursor mass range was 400–800 m/z , the DIA window type was set to Auto with window placement optimization on, the DIA window mode was m/z range, the higher-energy collisional dissociation normalized collision energy was 25%, the scan range was 150–2,000 m/z , the RF lens was 45%, the normalized AGC target was 800% and the loop control time was 0.6 s. The DIA isolation window was 20Th and the maximum injection time was 40 ms.

Data analysis

For the analysis using Spectronaut v.18 (Biognosys), raw files underwent a library-free directDIA+ approach, employing the human reference database from the UniProt 2022 release, which contains 20,588 sequences, alongside an additional 246 common contaminant sequences. Notably, cysteine carbamidomethylation was not included as a modification, while variable modifications were set for methionine oxidation and protein N-terminal acetylation. Precursor filtering was based on the Q value, and cross-run normalization was not applied. For the phosphorylation search, phosphorylation on serine, threonine and tyrosine was set as a variable modification, and a filter of PTM.SiteProbability ≥0.75 was applied to ensure confident identification and site localization. For the glycan oxonium ion analysis, the rawrr package was used to extract a mass list of 163.060, 204.087, 274.092, 366.139, 485.046 and 657.140 from the single-cell raw files, representing different glycan structures. A mass tolerance of 5 ppm was set for the XICs. For DIA-NN (v.1.9) searches, the raw data files were first converted to .mzML format. The analyses were then conducted in library-free mode with some modifications: the maximum number of variable modifications was limited to two and, as in Spectronaut, cysteine carbamidomethylation was not set as a modification. DIA-NN was configured to use highly heuristic protein grouping, and the MBR feature was activated.
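The 5 ppm XIC tolerance used for the oxonium-ion traces is relative, so the absolute extraction window scales with m/z. A minimal helper (function name is ours, for illustration):

```python
def ppm_window(mz, ppm=5.0):
    """Lower and upper m/z bounds for an extracted ion chromatogram
    at a given relative (ppm) mass tolerance."""
    delta = mz * ppm / 1e6
    return mz - delta, mz + delta

lo, hi = ppm_window(204.087)  # HexNAc oxonium ion from the mass list above
print(lo, hi)
```

At m/z 204.087 a ±5 ppm tolerance is only about ±0.001 Da wide, which is what makes these oxonium-ion traces specific enough to read out glycopeptide signal from single-cell runs.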
Results from both the first pass search and the MBR search are shown in Fig. . For the analysis in Fig. , a total of 24 single cells were analyzed using the 4Th6ms method. Thirteen searches were conducted in total: the first search included the initial two single-cell samples, the second search included the first four single-cell samples, and this pattern continued until the 12th search, which included all 24 single-cell samples. The 13th search combined all 24 single-cell samples with additional 20-cell samples acquired using the 4Th6ms method. The figure displays the identifications from the first four cells. For the entrapment analysis, entrapment peptides were generated using an updated version of the ‘mimic’ tool from the percolator repository. This tool shuffles the target database ninefold and appends the shuffled proteins flagged as mimic and ‘Random_XXXX’, preserving the original amino acid composition of the fasta file. This strategy ensures a rigorous assessment of the FDRs in our proteomic analysis. The entrapment analysis used a database containing both the human reference database and its 1× mimic database. The analysis was conducted using both Spectronaut and DIA-NN with the same settings as described above. The results from the first pass search of the DIA-NN analysis are shown. All analysis log files and settings files are summarized in Supplementary Data . For the analysis in Fig. , the data were analyzed using Spectronaut (v.18). The differential analysis was calculated using the Spectronaut comparison function. Subsequently, R was used for further analysis of the data. Gene set enrichment was calculated using the gseGO function from the clusterProfiler package. For visualization of the data, the enrichplot package was used. Pathview, treeplot and UpSet plots were used to create the associated figures. For the creation of the heatmap, the heatplot function from the enrichplot package was used. For the analysis in Fig.
, missing values were imputed as random draws from the low quantile values (2.5th percentile) of the data distribution using the impute.pa function of the imp4p package v.1.2, with cells belonging to EBs and iPSCs set as the two conditions.

Statistics and reproducibility

No statistical method was used to predetermine sample size. For single-cell analysis, samples corresponding to more than one cell due to technical error, as flagged by the cellenONE sorter, were excluded from analysis, since the analysis aimed to study only single cells. The experiments were not randomized. The investigators were not blinded to allocation during experiments and outcome assessment.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at 10.1038/s41592-024-02558-2.

Reporting Summary

Supplementary Table 1: Identified proteins and peptides from HeLa single cells using the Chip-Tip method.

Supplementary Table 2: Protein quantities in iPSCs and EBs single-cell samples.

Supplementary Data 1: Analysis log files and settings files from Spectronaut and DIA-NN.

Source Data Figs. 1–4 and 6 and Extended Data Fig. 7: Statistical source data.
Efficacy of continuous venovenous hemodiafiltration in patients with metformin associated lactic acidosis and acute kidney injury
51f30b9a-b4e2-4125-89a1-2bc999726508
11906756
Surgery[mh]
Metformin is a biguanide that has been in use for more than 50 years in the treatment of type 2 diabetes mellitus (T2DM) as a first-line agent, according to the guidelines of the European and American Diabetes Associations . Metformin toxicity can result in a severe form of lactic acidosis (LA). Although the exact mechanism of LA is not completely elucidated, metformin inhibits mitochondrial complex I activity, resulting in a decrease of ATP production. Consequently, cells shift from aerobic to anaerobic metabolism. This results in the accumulation of pyruvate upstream of the Krebs cycle, which is converted to lactate by lactate dehydrogenase , . Furthermore, lactate cannot be efficiently cleared by the liver, as metformin also inhibits hepatic gluconeogenesis. Hence, the increase in lactate is the consequence of both increased production and impaired clearance . Three forms of LA can be identified in patients under metformin therapy , :
Metformin-induced lactic acidosis (MILA): LA caused by metformin itself, as in acute metformin toxicity, when very high metformin levels are documented in the absence of other likely causes.
Metformin-unrelated lactic acidosis (MULA): LA not related to metformin.
Metformin-associated lactic acidosis (MALA): metformin is one of the possible causes of LA.
Metformin itself does not induce acute kidney injury (AKI), and recent publications even speculate about its potential renoprotective effect in the context of sepsis . Nonetheless, renal function is often altered in cases of MALA. The ‘REMIND’ study demonstrated that the risk of LA in patients treated with metformin increases as the estimated glomerular filtration rate (eGFR) declines, even after adjusting for confounders . This is due to the pharmacokinetics of metformin, a molecule with a high renal clearance (400 ml/min) that can consequently accumulate in individuals with renal impairment .
The guidelines of the European Medicines Agency recommend dose adjustment in patients with moderate renal impairment (eGFR 30–59 ml/min) and advise against prescription when the eGFR falls below 30 ml/min . In most types of LA in critically ill patients, treatment consists of supportive measures and, whenever feasible, correction of the underlying cause, with the benefit of RRT being dubious at best . On the other hand, metformin is a small molecule with low drug–protein binding, so it is efficiently removed by RRT. Hence, in these cases, RRT is not only effective in correcting the acid–base imbalance, but also in treating the underlying disease by removing metformin. Typically, RRT is unnecessary in milder cases, while it is required when MALA is associated with severe renal insufficiency. The Extracorporeal Treatments in Poisoning (EXTRIP) guidelines recommend initiating RRT in cases of lactatemia > 20 mmol/L, pH ≤ 7.0, and/or failure of standard therapy. RRT is suggested in cases of lactatemia between 15 and 20 mmol/L or pH 7.0–7.1. If conditions such as shock, impaired kidney function, liver failure, or an altered state of consciousness are present, lower thresholds for RRT should be considered . Currently, the optimal RRT regimen for MALA and severe AKI remains unknown , , . Based on available data, intermittent hemodialysis (IHD) provides higher metformin clearance , , while continuous renal replacement therapy (CRRT) is often preferred for patients with hemodynamic instability. However, metformin has a large volume of distribution and can diffuse into different types of cells, especially erythrocytes, enterocytes, and hepatocytes . This explains the occurrence of a ‘rebound phenomenon’, wherein metabolic parameters worsen some hours after the end of IHD treatment , (Fig. ). CRRT avoids this complication by achieving a gradual correction of the metabolic disorder with constant removal of the drug.
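To make the quoted thresholds concrete, the decision logic can be expressed as a small function. This is purely an illustrative encoding of the criteria as stated above (the function name, return labels, and the reading of the ‘suggested’ band as either criterion are our assumptions); it is not a clinical decision tool.

```python
def extrip_rrt_indication(lactate_mmol_l, ph, standard_therapy_failed=False,
                          shock=False, impaired_kidney_function=False,
                          liver_failure=False, decreased_consciousness=False):
    """Illustrative encoding of the EXTRIP thresholds quoted in the text.

    Returns 'recommended', 'suggested', 'consider lower thresholds',
    or 'not indicated by these criteria'.  Not for clinical use.
    """
    high_risk = (shock or impaired_kidney_function or liver_failure
                 or decreased_consciousness)
    # Recommended: lactate > 20 mmol/L, pH <= 7.0, or failed standard therapy
    if lactate_mmol_l > 20 or ph <= 7.0 or standard_therapy_failed:
        return "recommended"
    # Suggested: lactate 15-20 mmol/L or pH 7.0-7.1
    if 15 <= lactate_mmol_l <= 20 or 7.0 < ph <= 7.1:
        return "suggested"
    # Lower thresholds should be considered in high-risk conditions
    if high_risk:
        return "consider lower thresholds"
    return "not indicated by these criteria"
```

For example, a patient with lactate 22 mmol/L falls under ‘recommended’ regardless of pH, while one with lactate 16 mmol/L and pH 7.05 falls under ‘suggested’.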
In contrast, some authors argue that CRRT may not effectively control severe cases of MALA due to its lower efficiency, which has led some centres to run two concomitant CRRT circuits to enhance treatment efficacy . Furthermore, in cases of acute overdose, such as a suicide attempt, there may be a need for efficient removal of plasmatic metformin, making IHD the preferred choice. In chronic poisoning, a significant amount of metformin is already inside the cells, thus necessitating prolonged treatment. Ten years ago, we developed a local protocol for treating patients with severe MALA and AKI with continuous venovenous hemodiafiltration (CVVHDF). The intensity of CVVHDF is set by the treating nephrologist to achieve a minimum effluent dose of 37 ml/kg/h. This dose exceeds the 20–25 ml/kg/h recommended for other forms of AKI by the KDIGO guidelines, in order to achieve a higher clearance of the drugs responsible for the intoxication . For circuit anticoagulation, regional citrate anticoagulation (RCA) is the first choice in our centre. However, there is some concern in cases of severe hyperlactatemia, where citrate may not be efficiently metabolized to bicarbonate, potentially leading to citrate accumulation. Nevertheless, our experience and literature data suggest that RCA can be safely used if serum lactate is not rising rapidly and under strict metabolic monitoring . Therefore, either unfractionated heparin or RCA could be used in these cases. The aim of this retrospective study is to review our single-centre experience with patients undergoing CVVHDF for severe MALA and AKI. This is a retrospective single-centre study analyzing all patients who presented at the San Giovanni Bosco Hospital of Turin between June 2011 and December 2021 with MALA and severe AKI (KDIGO stage III) requiring RRT. The joint protocol of treatment for these cases required that CVVHDF be initiated in an intensive care setting within one hour of diagnosis.
CVVHDF should be prescribed to achieve a convective dose of at least 25 ml/kg/h and a diffusive dose of at least 12 ml/kg/h. CVVHDF could be discontinued once the metabolic imbalance was corrected and signs of renal recovery were evident. Patients underwent CVVHDF treatment using Prismaflex® machines (Gambro/Baxter, USA) and AN69ST membrane filters (ST150, 1.5 m²). A central venous catheter placed in the jugular or femoral vein, depending on individual factors, was used for vascular access. We used polyurethane bulbous catheters (Mahurkar™, Covidien, USA) of 11.5 French gauge and 16–24 cm in length. For circuit anticoagulation, either heparin or regional citrate anticoagulation (RCA) could be prescribed, according to the indication of the attending nephrologist. Sodium heparin was prescribed at a dose of 5–10 U/kg/h, using a continuous infusion pump with or without a starting bolus. Activated clotting time (ACT) or partial thromboplastin time (aPTT), measured at baseline, after 1 h, and then every 4 h, was used to adjust the heparin dose. RCA was obtained with pre-dilution of sodium citrate 18.0 mmol/l (Gambro/Baxter, USA). Post-filter calcium was monitored to maintain it within the target range. To avoid possible metabolic complications due to citrate accumulation, the ratio between total plasma calcium and plasma ionized calcium (tCa/iCa) was measured as a surrogate for blood citrate levels (with a cut-off value < 2.5) . The territorial Ethics Committee (Comitato Etico Interaziendale, CTE Torino) approved this study (protocol 0073420 of 2022) and waived the requirement for informed consent. A total of 27 patients with MALA requiring CVVHDF were included in our retrospective analysis. Patients’ baseline characteristics are presented in Table . The mean age was 69.2 years (range 44–86 years), with a female predominance (20/27, 74%).
The mean Glasgow Coma Scale score at presentation was 13.7 (range 5–15), and the mean daily metformin dose was 2330 mg (range 1000–3000 mg). All patients had severe AKI, classified as stage III according to the KDIGO classification. Serum creatinine was compared with the pre-existing kidney function estimated from reported history. The average presenting serum creatinine was 7.28 mg/dL. Only one patient had chronic kidney disease (stage IIIB). Twenty-five patients experienced a greater than threefold increase in serum creatinine compared to their baseline, meeting the KDIGO creatinine increase criterion for AKI stage III. Two patients had smaller increases: one between two- and threefold, and one less than twofold. However, they were both anuric (defined as urine output less than 50 ml in 12 h), so they met the KDIGO urine output criterion for AKI stage III. Considering urine output, 51% (14 patients) were anuric and 33% (9 patients) had severe oliguria (urine output < 0.3 ml/kg/h for 12 h). Twenty-two patients (81%) presented with gastrointestinal symptoms (vomiting and/or diarrhea), and 10 (37%) with fever or other causes of dehydration. Four patients (15%) were diagnosed with severe infection and subsequently treated with antibiotics. Fourteen patients (51%) were receiving a renin–angiotensin system blocker and 2 (7%) a non-steroidal anti-inflammatory drug. Average blood pressure was 113/59 mmHg, with a mean arterial pressure of 75.4 mmHg. At the start of CVVHDF, 8 patients (29%) required vasopressor therapy with catecholamines. All received norepinephrine at a mean dose of 0.13 mcg/kg/h. Two patients also received co-administered epinephrine. Four deaths occurred, all due to cardiovascular shock (4 women, mean age 80.8 years, range 74–86). The overall mortality rate was 14.8%. All four patients who died were anuric. One required administration of 2 catecholamines at the start of CVVHDF due to severe hypotension in the setting of a severely compromised left ventricular ejection fraction (25%).
One patient had a history of cachexia. One had a pancreatic neoplasm with no evidence of diffuse metastatic disease. One presented with severe neurological impairment (GCS 5). Among the 23 surviving patients, all achieved renal recovery and discontinued CVVHDF treatment. Laboratory data at the start and the end of treatment are detailed in Table . Mean lactate and pH levels improved significantly, from 12.9 mmol/l (range 7.0–24.0) and 6.99 (range 6.50–7.22) at diagnosis to 1.5 mmol/l (range 0.6–3.6) and 7.38 (range 7.26–7.53) at dialysis discontinuation, respectively. Analysis of the rate of resolution of the metabolic imbalance 12 h after the start of CVVHDF demonstrated a substantial improvement, with a mean pH of 7.34 (7.13–7.48) and a mean lactatemia of 4.2 mmol/L (1.3–10.0). No significant differences were found between survivors and non-survivors at presentation.
CVVHDF parameters and anticoagulation
CVVHDF parameters are summarized in Table . The mean duration of CVVHDF treatments was 56.2 h, with a mean downtime of 17.8 min. The prescribed effluent dose of CVVHDF averaged 52.1 ml/kg/h, with a convective dose of 31.9 ml/kg/h and a diffusive dose of 19.8 ml/kg/h. The right femoral vein was the most commonly used site for central venous catheter placement (22/27). As regards anticoagulation, 14 out of 27 patients received heparin and 7 patients RCA. In the remaining 6 cases, heparin was switched to RCA after dialysis initiation (Table ). The mean dose of sodium heparin administered was 4.4 U/kg/h (range 2.7–10), and the mean ACT value achieved was 165.9 s (range 144–200 s). No major or minor bleeding events occurred during the treatments. The mean treatment duration using heparin anticoagulation was 50.5 h (24–120), with a mean downtime of 23.3 min (0–120 min). For RCA, the mean serum ionized calcium value was 1.1 mmol/L (range 0.93–1.23). The average value of the tCa/iCa ratio, measured as a surrogate of blood citrate levels, was 2.1 (range 1.78–2.74) after 24 h of treatment.
One patient had a ratio of 2.74; the sodium citrate dose was reduced, resulting in normalization of the ratio at the subsequent 4-hour control. No further adjustments were required. No metabolic complications related to citrate accumulation were observed. The mean treatment duration using RCA was 68.3 h (24–144), with a mean downtime of 8.1 min (0–90 min). Our retrospective analysis of CVVHDF management in patients with MALA and AKI showed encouraging results in terms of overall survival and renal recovery. Notably, only 4 of 27 patients (14.8%) died, while the remaining 23 patients were able to discontinue dialysis. Higher mortality rates have been reported in the literature, ranging from 21.4% in the case series of Mariano et al. to 30% in EXTRIP and even up to 50% as reported by Weisberg et al. A recent meta-analysis including 242 individual cases from 158 case reports and 26 case series showed a cumulative mortality rate of 19.8% . The low mortality rate observed in our study may be partially attributed to the strict metabolic control achievable with our dialysis protocol, ensuring hemodynamic stability and gradual correction of the LA. Furthermore, once accumulated, metformin is released from the intracellular to the extracellular compartment . Thus, it continues to inhibit the mitochondrial respiratory chain, promoting anaerobic metabolism and shifting glucose into the “Cori cycle”. It also hinders the use of pyruvate and lactate for gluconeogenesis and promotes the conversion of glucose to lactate in the intestine . The constant clearance of metformin achieved with CVVHDF prevented this ‘rebound’ phenomenon . In fact, none of the 23 patients who discontinued CVVHDF experienced any relapse of metabolic acidosis or an increase in lactate levels, nor did they require further RRT treatment. We also assessed the rate of metabolic disturbance resolution by evaluating laboratory data at 12 h.
This analysis demonstrated substantial improvement in the metabolic disturbance within 12 h, with mean pH and lactate levels approaching normal values (mean pH 7.34 and mean lactatemia 4.2 mmol/L). While CVVHDF is undeniably less rapid than IHD in correcting metabolic derangements, these findings suggest that the treatment demonstrates efficacy within 12 h, allowing for patient stabilization and the possible de-escalation of therapeutic interventions (e.g., vasopressor tapering or cessation). This is a crucial consideration, as certain complications and outcomes arise as a consequence of the emergency interventions employed in these situations. EXTRIP guidelines identify IHD as the most effective technique for metformin intoxication . Nevertheless, the optimal management of patients with severe AKI, such as those in our cohort, is not well defined. In these patients, who frequently present with hemodynamic instability, there is also a potential requirement for both solute removal and ultrafiltration, which may make IHD less safe. Moreover, CRRT is most effective when renal metformin clearance is minimal . Regarding the efficiency of CRRT, it should be kept in mind that adequate dosing is crucial in cases of intoxication . Notably, the mean effluent dose in our study (52.1 ml/kg/h) was twice the recommended dose for AKI. This is in line with the recommendation of the EXTRIP guidelines to increase the dose of CVVHDF in order to achieve greater efficiency. Successful CRRT experiences described in the literature report high dialysis doses (> 40 ml/kg/h) that align with this assertion , , . Since metformin has a high diffusion coefficient due to its low molecular weight (MW 165 Da) and low drug–protein binding, we used a high diffusive dose in our study (19.8 ml/kg/h) that exceeded our typical prescription for AKI. It should be noted, however, that at these dialysate flow rates, increasing blood flow rates has a very marginal effect on small molecule clearance .
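For orientation, the relation between machine flow settings and the per-kilogram doses reported here can be sketched numerically. The convention assumed below (diffusive dose = dialysate flow; convective dose = replacement fluid plus net ultrafiltration; effluent dose = their sum, all normalized to body weight) is a common simplification; the function is illustrative and not the study's actual prescription method.

```python
def cvvhdf_doses(weight_kg, dialysate_ml_h, replacement_ml_h, net_uf_ml_h=0):
    """Per-kilogram CVVHDF dose components (ml/kg/h).

    Assumed convention: diffusive dose = dialysate flow, convective dose =
    replacement fluid flow + net ultrafiltration, effluent dose = their sum.
    Delivered clearance is somewhat lower owing to predilution and downtime.
    """
    diffusive = dialysate_ml_h / weight_kg
    convective = (replacement_ml_h + net_uf_ml_h) / weight_kg
    return {"diffusive": diffusive, "convective": convective,
            "effluent": diffusive + convective}
```

For an 80 kg patient with 1600 ml/h dialysate, 2400 ml/h replacement fluid, and 160 ml/h net ultrafiltration, this gives a diffusive dose of 20, a convective dose of 32, and an effluent dose of 52 ml/kg/h, close to the mean values reported in this study.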
Of note, RCA was successfully used in 13 patients. Despite elevated lactate levels, no complications such as hypocalcemia, citrate accumulation, or acid–base imbalance were observed. In one patient, the rate of sodium citrate infusion was lowered without further complications. It should be noted that because sodium citrate infusion provides buffering capacity, it is generally combined with low-bicarbonate dialysis fluids. Nonetheless, in cases of profound acidosis, solutions with increased buffering capacity (in both the dialysate and replacement fluid) should be chosen. In patients who underwent a switch from heparin to RCA, we waited for lactate levels to stabilize below 10 mmol/L before switching, to prevent potential citrate accumulation. In our opinion, RCA could be used in selected cases of MALA, as also shown in a recent study of 23 MALA patients . Conditions predisposing to an abrupt decline in eGFR are often described in such cases. Indeed, in our cohort, we observed that a precipitating condition for eGFR decline, such as gastrointestinal disease, hypovolemia, or fever, often in combination with the administration of RAS blockers and NSAIDs, was present in all of our patients. Our study has several limitations. The sample size, coupled with the retrospective and observational design, restricts the generalizability of the observed results. In addition, the study design does not allow for a direct comparison between IHD and CRRT. We also cannot provide an index of severity of illness at the start of CVVHDF, which limits the comparability of our results. Lastly, we were unable to assess blood levels of metformin and citrate. Notwithstanding these limitations, this study comprises one of the most extensive single-centre experiences reported on this subject. The severity of both renal damage and metabolic derangement, coupled with the common administration of vasopressors, underscores the clinical acuity of this patient population.
The consistent application of high-dose CVVHDF, standardized by a joint protocol, enables outcome assessment despite the previously mentioned limitations. Recent studies highlight the importance of not only blood levels, but also intracellular metformin concentrations, in understanding the pathogenesis of such cases . Since conducting randomized controlled trials with sufficient sample size in patients with drug intoxications presents a significant challenge, furthering our understanding of the underlying pathogenetic processes could pave the way for improved management strategies. Metformin therapy in T2DM may pose a risk of serious LA. MALA associated with severe AKI may be difficult to treat and is burdened by high mortality rates. Based on our ten-year experience, CVVHDF has yielded favorable results, both in terms of patient survival and the metabolic control of the disease. We believe that with prompt initiation and appropriate dosing, CVVHDF ensures correction of the metabolic disorder and avoids the risk of rebound, which can occur due to metformin’s large volume of distribution. In particular, in our study, a CVVHDF dose as high as 52.1 ml/kg/h, started within one hour of diagnosis, was associated with an overall mortality rate of 14.8% and substantial correction of the metabolic imbalance within 12 h of treatment. In our experience, CVVHDF can be performed safely and efficiently in selected cases using RCA. Below is the link to the electronic supplementary material. Supplementary Material 1
Contacts in general practice during the COVID-19 pandemic: a register-based study
81821655-c9b4-4c72-9e25-1d6ce659d2c5
9591020
Family Medicine[mh]
The COVID-19 pandemic has altered the provision of health care. Across the world, telehealth consultations have widely replaced in-clinic consultations because of the risk of spreading severe acute respiratory syndrome coronavirus 2. – Telehealth solutions can be defined as ‘remote delivery of healthcare services using information and communication technology’ . They include video consultation, telephone consultation, text/instant messaging, email consultation, and online patient portals. Several studies have reported lower use of primary and ambulatory care, and rapid increases in the use of remote consultations during the early phases of the COVID-19 pandemic. – The pandemic and the introduction of virtual care might have caused variations in healthcare use in different populations, , as video consultation may constitute a barrier to receiving health care for some patients. Thus, the shift to remote consultations may have exacerbated disparities in access to health care. – The decline in contacts seemed less pronounced among females, older adults, patients with poor mental health, and patients with high expected healthcare use. , The greatest decline was seen among parents making contact regarding children and among those with low expected healthcare use. Some delayed care occurred for health problems that could be postponed without harm, but some patients may have faced complications because of delayed treatment of acute medical issues or insufficient management of chronic illness. Even though these studies have added relevant knowledge, most have focused on regional daytime care. The national registers in Denmark make it possible to study the entire population both during the day and outside office hours. More insight into the implications of COVID-19 and the introduction of virtual care is needed to optimise future healthcare provision. 
Moreover, there is a need to gain more knowledge about how to use these new telehealth opportunities in the best way in general practice. This study set out to explore the effect of the COVID-19 pandemic on contact patterns in general practice in Denmark and to identify patient groups at risk of reducing contacts with general practice.
Design and population
A register-based time series study was conducted including all Danish residents from 1 January 2017 to 31 October 2020. The number of contacts with general practice during 2017–2019 was compared with the number of contacts during the first months of the COVID-19 pandemic.
Setting
In Denmark, general practice is tax-funded and free of charge for the patient. During the day, GPs provide care to their listed patients. GPs are remunerated through a fee per capita, but the main income (approximately 70%–75%) is based on fee-for-service reimbursement. During the day (8.00 am to 4.00 pm), GPs offer a range of basic services, including face-to-face in-clinic consultations (regular, prenatal appointments, preventive child care, and conversational therapy for mental health issues), home visits, regular telephone consultations, and email consultations. Outside office hours, GPs are paid on a fee-for-service basis. In four of the five Danish regions, GPs run the out-of-hours (OOH) general practice service, also referred to as a GP cooperative, which patients must call to schedule an appointment. At the GP cooperative, GPs perform telephone triage and decide whether to offer a regular telephone consultation, refer for a face-to-face GP consultation (in the clinic or a home visit), or refer to hospital or emergency medical services. The OOH general practice service is open on weekdays between 4.00 pm and 8.00 am as well as at weekends and during holidays. Only the Capital Region of Denmark operates a different OOH healthcare service; the medical helpline 1813 (MH-1813).
As data from MH-1813 are not available in the national registers, the Capital Region of Denmark was excluded from the current analyses about the use of general practice outside office hours but included in the analyses for daytime care. Video consultation was rapidly introduced during the COVID-19 pandemic. To enhance the use of virtual care as an alternative to regular face-to-face in-clinic consultations in the daytime, the GPs could choose to perform a range of basic services by video or as an extended telephone consultation for health problems that would usually have prompted a face-to-face in-clinic consultation in the pre-pandemic period. Examples include consultations for prenatal appointments and preventive child care. However, an extended telephone consultation could not be used for conversational therapy. In OOH care, the GPs could use video consultations. For these consultations, new (temporary) remuneration codes were introduced, and these temporary codes could be used in combination with existing codes for reimbursement purposes.
Outcome measures
The following outcome measure was defined: number of contacts with general practice per patient year (daytime, OOH, and all; stratified by basic remuneration codes before and during the pandemic). Contact types are registered with a range of remuneration codes. In this study, the term ‘virtual care’ refers to both video consultations and extended telephone consultations (see overview of remuneration codes in Supplementary Table S1). Preventive care contacts consisted of prenatal appointments and preventive child care. Furthermore, as extended telephone consultations and video consultations were new alternatives to contacts concerning health problems that were previously managed by face-to-face in-clinic consultations, in this study these were considered equivalent to face-to-face in-clinic consultations.
Thus, clinic consultations consisted of three subtypes: regular face-to-face in-clinic consultations; video consultations; and extended telephone consultations. The term ‘telephone consultations’ covered solely regular telephone consultations. The ‘extended telephone consultation’ was primarily for patients who were technically unable to participate in a video consultation. The word ‘extended’ refers to the fact that this consultation concerned more extended topics than normally handled by telephone consultations. Therefore, these extended telephone consultations were most often lengthier than regular telephone consultations. Email consultations are not part of clinic consultations but a separate contact type.
Data collection
Data were collected from a range of national registers for the study period and these data were linked through the personal identification number. The National Health Insurance Service Register provided information on date, time, and type of contact with daytime GP and OOH general practice, and the services delivered (through remuneration codes). The National Patient Register holds records on hospital contacts (somatic, psychiatric, as well as private hospitals), and provided the diagnosis codes included in the Charlson Comorbidity Index. The Civil Registration System and Statistics Denmark delivered data on patient characteristics (age, sex, cohabitation, education, ethnicity, income, urbanisation, and employment status). Comorbidity was defined as the number of diagnoses included in the Charlson Comorbidity Index. Apart from age, sex, and comorbidity, all covariates were at a household level; for example, a household’s level of income was determined by its highest earning occupant. Thus, it was possible to avoid excluding contacts for children because of missing values.
Furthermore, it was anticipated that socioeconomic characteristics (for example, education or income) at the household level would be stronger predictors for help-seeking behaviour than those at individual levels. Help-seeking is often discussed with, or suggested by, other members of the household, in particular for children, who have low levels of education and income in the registers. Prior to any analysis, individuals with missing values for income, employment status, or cohabitation were excluded, as this information was often missing concurrently and thus led to convergence issues for the model. This meant excluding 40 246 unique individuals (0.66%). For the remaining individuals, missing covariates were placed in a separate category.
Analyses
The study population was followed from birth, immigration, or 1 January 2017 (whichever came last) until death, emigration, or 31 October 2020 (whichever came first). For each person, this period was divided into shorter time spans according to changes in covariates (see Supplementary Table S2). For each time span, the number of outcomes per resident was recorded along with the duration of each time span. However, age and sex of each resident were recorded at the beginning of each time span. Next, the number of outcomes and the durations were summed by month and year. Dividing the number of outcomes by the risk time provided the unadjusted observed incidence rate, which was plotted in categories of related remuneration codes. The date of 11 March 2020 was used as the starting date for the pandemic period, when the first official lockdown in Denmark was announced.
To provide adjusted incidence rate ratios (IRRs), Poisson regressions were run for each group of remuneration codes on the data for 2017–2019 (that is, the pre- pandemic period), with risk time serving as the offset, and adjusted for the following covariates: sex, age, cohabitation, education, ethnicity, comorbidity, income, urbanisation, employment status, month, and month-ID (ID number of month in dataset), with the latter being treated as a continuous linear effect. Seasonality was taken into account through adjustment for month. This made it possible to calculate the expected utilisation (expected incidence rate) of general practice throughout the pandemic period as an extrapolation of previous help-seeking. Dividing the observed incidence rate by the adjusted expected incidence rate gave the adjusted IRR, which was plotted as curves according to groups of related remuneration codes. The change because of the pandemic was calculated by subtracting the expected number of contacts from the observed number of contacts after 11 March 2020 and the results (that is, overall effect) are presented as a percentage of the expected number of consultations. Finally, to see if changes in contact patterns were evenly distributed within subsets of the population, modifications of the pandemic effect within each of the covariates were looked for. This was done by using fully adjusted Poisson models, one for each covariate, and each with an interaction term for the covariate in question. Results were presented in a forest plot. Stata (version 16) was used for all analyses. A register-based time series study was conducted including all Danish residents from 1 January 2017 to 31 October 2020. The number of contacts with general practice during 2017–2019 was compared with the number of contacts during the first months of the COVID-19 pandemic. In Denmark, general practice is tax-funded and free of charge for the patient. During the day, GPs provide care to their listed patients. 
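The core of this approach — extrapolating pre-pandemic rates forward and dividing observed by expected — can be illustrated with a deliberately simplified, numpy-only sketch. Unlike the study's fully adjusted Poisson regression in Stata, this toy version fits only a log-linear time trend on monthly rates; all names below are our own.

```python
import numpy as np

def pandemic_effect(counts, risk_time, pre_mask, pandemic_mask):
    """Toy version of the study's approach: fit a log-linear trend on
    pre-pandemic monthly rates, extrapolate expected rates into the
    pandemic months, and report per-month IRRs plus the overall change
    as a percentage of the expected number of contacts."""
    counts = np.asarray(counts, float)
    risk_time = np.asarray(risk_time, float)
    month_id = np.arange(len(counts), dtype=float)
    rate = counts / risk_time
    # least-squares fit of log(rate) ~ month_id on the pre-pandemic months
    slope, intercept = np.polyfit(month_id[pre_mask], np.log(rate[pre_mask]), 1)
    expected_rate = np.exp(intercept + slope * month_id)
    expected_counts = expected_rate * risk_time
    irr = rate / expected_rate                     # observed / expected
    obs = counts[pandemic_mask].sum()
    exp = expected_counts[pandemic_mask].sum()
    overall_pct_change = 100.0 * (obs - exp) / exp
    return irr, overall_pct_change
```

With a flat pre-pandemic rate and a 20% drop in the last two months, the per-month IRR falls to 0.8 and the overall effect is −20% of expected contacts, mirroring how the study reports its overall pandemic effect.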
GPs are remunerated through a fee per capita, but the main income (approximately 70%–75%) is based on fee-for-service reimbursement. During the day (8.00 am to 4.00 pm), GPs offer a range of basic services, including face-to-face in-clinic consultations (regular, prenatal appointments, preventive child care, and conversational therapy for mental health issues), home visits, regular telephone consultations, and email consultations. Outside office hours, GPs are paid on a fee-for-service basis. In four of the five Danish regions, GPs run the out-of-hours (OOH) general practice service, also referred to as a GP cooperative, which patients must call to schedule an appointment. At the GP cooperative, GPs perform telephone triage and decide whether to offer a regular telephone consultation, refer for a face-to-face GP consultation (in the clinic or a home visit), or refer to hospital or emergency medical services. The OOH general practice service is open on weekdays between 4.00 pm and 8.00 am as well as at weekends and during holidays. Only the Capital Region of Denmark operates a different OOH healthcare service; the medical helpline 1813 (MH-1813). As data from MH-1813 are not available in the national registers, the Capital Region of Denmark was excluded from the current analyses about the use of general practice outside office hours but included in the analyses for daytime care. Video consultation was rapidly introduced during the COVID-19 pandemic. To enhance the use of virtual care as an alternative to regular face-to-face in-clinic consultations in the daytime, the GPs could choose to perform a range of basic services by video or as an extended telephone consultation for health problems that would usually have prompted a face-to-face in-clinic consultation in the pre-pandemic period. Examples include consultations for prenatal appointments and preventive child care. However, an extended telephone consultation could not be used for conversational therapy. 
In OOH care, the GPs could use video consultations. For these consultations, new (temporary) remuneration codes were introduced, and these temporary codes could be used in combination with existing codes for reimbursement purposes. The following outcome measure was defined: number of contacts with general practice per patient year (daytime, OOH, and all; stratified by basic remuneration codes before and during the pandemic). Contact types are registered with a range of remuneration codes. In this study, the term ‘virtual care’ refers to both video consultations and extended telephone consultations (see overview of remuneration codes in Supplementary Table S1). Preventive care contacts consisted of prenatal appointments and preventive child care. Furthermore, as extended telephone consultations and video consultations were new alternatives to contacts concerning health problems that were previously managed by face- to-face in-clinic consultations, in this study these were considered equivalent to face-to-face in-clinic consultations. Thus, clinic consultations consisted of three subtypes: regular face-to-face in-clinic consultations; video consultations; and extended telephone consultations. The term ‘telephone consultations’ covered solely regular telephone consultations. The ‘extended telephone consultation’ was primarily for patients who were technically unable to participate in a video consultation. The word ‘extended’ refers to the fact that this consultation concerned more extended topics than normally handled by telephone consultations. Therefore, these extended telephone consultations were most often lengthier than regular telephone consultations. Email consultations are not part of clinic consultations but a separate contact type. Data were collected from a range of national registers for the study period and these data were linked through the personal identification number. 
The National Health Insurance Service Register provided information on date, time, and type of contact with daytime GP and OOH general practice, and the services delivered (through remuneration codes). The National Patient Register holds records on hospital contacts (somatic, psychiatric, as well as private hospitals), and provided the diagnosis codes included in the Charlson Comorbidity Index. The Civil Registration System and Statistics Denmark delivered data on patient characteristics (age, sex, cohabitation, education, ethnicity, income, urbanisation, and employment status). Comorbidity was defined as the number of diagnoses included in the Charlson Comorbidity Index. Apart from age, sex, and comorbidity, all covariates were at a household level, for example, a household’s level of income was determined by its highest earning occupant. Thus, it was possible to avoid excluding contacts for children because of missing values. Furthermore, it was anticipated that socioeconomic characteristics (for example, education or income) at the household level would be stronger predictors for help-seeking behaviour than those at individual levels. Help-seeking is often discussed with, or suggested by, other members of the household, in particular for children, who have low levels of education and income in the registers. Prior to any analysis, individuals with missing values for income, employment status, or cohabitation were excluded, as this information was often missing concurrently and thus led to convergence issues for the model. This meant excluding 40 246 unique individuals (0.66%). For the remaining individuals, missing covariates were placed in a separate category. The study population was followed from birth, immigration, or 1 January 2017 (whichever came last) until death, emigration, or 31 October 2020 (whichever came first). For each person, this period was divided into shorter time spans according to changes in covariates (see Supplementary Table S2). 
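The splitting of each resident's follow-up into covariate-constant time spans described above can be sketched as follows. This is a minimal illustration with made-up dates; the function and variable names are assumptions, not the study's actual code:

```python
from datetime import date

def split_followup(start, end, change_dates):
    """Split one resident's follow-up into spans, cut at every date on which
    a covariate value changes; covariates are then constant within a span."""
    cuts = sorted({start, end, *(d for d in change_dates if start < d < end)})
    return list(zip(cuts[:-1], cuts[1:]))

# Follow-up from 1 January 2017 until 31 October 2020, with two covariate changes
spans = split_followup(date(2017, 1, 1), date(2020, 10, 31),
                       [date(2018, 6, 1), date(2019, 3, 15)])

# Each span contributes its duration as risk time (here in days)
risk_days = [(e - s).days for s, e in spans]
```

Summing outcome counts and risk time over such spans, by month and year, yields the numerators and denominators for the unadjusted incidence rates.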
For each time span, the number of outcomes per resident was recorded along with the duration of each time span. However, age and sex of each resident were recorded at the beginning of each time span. Next, the number of outcomes and the durations were summed by month and year. Dividing the number of outcomes by the risk time provided the unadjusted observed incidence rate, which was plotted in categories of related remuneration codes. The date of 11 March 2020 was used as the starting date for the pandemic period, when the first official lockdown in Denmark was announced. To provide adjusted incidence rate ratios (IRRs), Poisson regressions were run for each group of remuneration codes on the data for 2017–2019 (that is, the pre-pandemic period), with risk time serving as the offset, and adjusted for the following covariates: sex, age, cohabitation, education, ethnicity, comorbidity, income, urbanisation, employment status, month, and month-ID (ID number of month in dataset), with the latter being treated as a continuous linear effect. Seasonality was taken into account through adjustment for month. This made it possible to calculate the expected utilisation (expected incidence rate) of general practice throughout the pandemic period as an extrapolation of previous help-seeking. Dividing the observed incidence rate by the adjusted expected incidence rate gave the adjusted IRR, which was plotted as curves according to groups of related remuneration codes. The change because of the pandemic was calculated by subtracting the expected number of contacts from the observed number of contacts after 11 March 2020, and the results (that is, overall effect) are presented as a percentage of the expected number of consultations. Finally, to examine whether changes in contact patterns were evenly distributed across subsets of the population, potential modification of the pandemic effect by each covariate was explored.
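The expected-rate extrapolation can be illustrated with a small synthetic example. The sketch below fits a Poisson regression with a person-time offset by Newton-Raphson, using seasonal harmonics and a linear month-ID trend as a simplified stand-in for the study's month dummies and patient-level covariates; all data, rates, and names here are synthetic assumptions:

```python
import numpy as np

def fit_poisson(X, y, offset, n_iter=25):
    """Poisson regression (log link) with an offset, fitted by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta + offset)                    # expected counts
        beta += np.linalg.solve(X.T @ (X * mu[:, None]),  # Fisher information
                                X.T @ (y - mu))           # score vector
    return beta

def design(month_id):
    """Intercept, seasonal harmonics, and a linear month-ID trend."""
    month = month_id % 12
    return np.column_stack([np.ones(month_id.size),
                            np.sin(2 * np.pi * month / 12),
                            np.cos(2 * np.pi * month / 12),
                            month_id])

rng = np.random.default_rng(1)
risk_time = 1000.0                                  # person-years at risk per month

# Pre-pandemic period (2017-2019): 36 months with a seasonal contact rate
pre_id = np.arange(36)
rate = lambda m: 0.5 * (1 + 0.2 * np.sin(2 * np.pi * (m % 12) / 12))
contacts = rng.poisson(rate(pre_id) * risk_time)
beta = fit_poisson(design(pre_id), contacts, np.full(36, np.log(risk_time)))

# Pandemic months (March-October 2020): simulate a 20% drop in utilisation
pan_id = np.arange(38, 46)
observed = rng.poisson(0.8 * rate(pan_id) * risk_time)
expected = np.exp(design(pan_id) @ beta + np.log(risk_time))

irr = observed / expected                           # adjusted IRR per month
overall_effect_pct = 100 * (observed.sum() - expected.sum()) / expected.sum()
```

Dividing observed by expected counts gives the monthly adjusted IRRs, and the overall effect is the observed-minus-expected difference expressed as a percentage of the expected total, mirroring the calculations described above.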
Effect modification was assessed using fully adjusted Poisson models, one for each covariate, each including an interaction term for the covariate in question. Results were presented in a forest plot. Stata (version 16) was used for all analyses. Contact patterns varied between the pre-pandemic period and the pandemic period, and variations in adjusted numbers were seen for both daytime and OOH general practice . presents the total number of contacts for the basic services. A population description is presented in Supplementary Table S3. presents the contact rate relative to the rate predicted by the model during the pandemic (presenting the same overall picture as seen in ). The clinic consultations showed an initial drop of 25% in March 2020, but this number increased to above pre-pandemic levels soon thereafter (IRRs ranging from 0.98 to 1.29); this was mainly as a result of extended telephone consultations (proportion ranging from 27% to 46% of all clinic contacts) and video consultations (ranging from 1% to 4%) . A similar pattern was seen for preventive care consultations. Regular telephone consultations peaked at the start of the pandemic (IRR 1.28 in March), dropped for a few months, and ended around pre-pandemic levels (IRRs ranging from 0.99 to 1.07). Home visits remained mostly below the pre-pandemic levels (IRRs ranging from 0.81 to 1.01), as did conversational therapy. Email consultations significantly increased during the pandemic (IRRs ranging from 1.12 to 1.42) . At the OOH general practice services, clinic consultations dropped considerably in the first 3 months (IRRs ranging from 0.38 to 0.68). Thereafter, the number of OOH clinic consultations remained mostly below pre-pandemic levels (IRRs ranging from 0.62 to 1.06), even though video consultations were used in up to 24% of all clinic consultations.
The number of home visits remained below the pre-pandemic level (IRRs ranging from 0.54 to 0.79), whereas the number of regular telephone consultations stayed above it (IRRs ranging from 1.07 to 1.45) . As seen in , daytime GP contacts increased by 9.9% in the pandemic period, relative to what was to be expected. This increase was driven primarily by clinic consultations (8.6%) and email consultations (24.2%). Contact with OOH primary care increased by 4.3%, relative to what was to be expected, which was mainly because of regular telephone consultations (21.5%), with large decreases in clinic consultations (−25.9%) and home visits (−29.4%). The overall effects shown in were not distributed equally across patient groups. and present the impact of the pandemic on daytime general practice and OOH contacts, respectively, during the pandemic compared with before the pandemic for patient groups. Across all types of contacts, consultation rates of vulnerable patients (that is, older patients and those who were unemployed or retired, had a lower educational level, had a lower income level, or had existing comorbidity) were more adversely affected by the pandemic than those of more affluent patients. Compared with older patients, children aged 0–9 years experienced the largest adverse impact in daytime contacts (here together with patients aged 60–89 years) and in OOH contacts. Patients from suburban and rural areas (population ≤100 000) also experienced a larger adverse impact than patients from urban areas (population >100 000).

Summary
At the start of lockdown in March 2020, the number of clinic consultations declined steeply. This was quickly followed by a countertrend towards and even above pre-pandemic levels, which was prompted mainly by the introduction of extended telephone consultations and video consultations (this trend was most distinct for OOH services). In general, the largest decrease in contacts was seen for the patients who were most vulnerable.
Strengths and limitations
A large dataset was used in the current study, including all general practice contacts in Denmark and various patient characteristics. The results of this study are generalisable to other countries with a similar setting, that is, with GP gatekeeping and general practice free of charge for the patient. Data based on regular coding are useful for research purposes, but some reservations may exist about their validity. The economic incentive to register services contributes to completeness, in particular for regular remuneration codes. The reliability of the GPs’ use of the hastily implemented COVID-19 remuneration codes is unknown, and the GPs might have had varying practices. Therefore, possible misclassification of contact types cannot be ruled out. In the current study, whether the contact rate was lower for certain patient groups during the pandemic compared with the pre-pandemic period was explored, but the study design did not allow the authors to assess whether this was because of a lower level of illness, reluctance to contact (because of fear of infection or of overburdening the health services), or reduced accessibility and availability of general practice. Hospital-based data were used to calculate comorbidity, using the list of diagnoses from the Charlson Comorbidity Index. This may have led to underestimation of comorbidity, as this list is limited and as patients with mild chronic diseases are often treated solely in general practice.

Comparison with existing literature
Several other studies have also reported lower use of general practice and rapid increases in virtual care during the early phases of the COVID-19 pandemic. The reported decrease varies from 16% to 79% for in-clinic consultations in the daytime, and in the current study a monthly decrease of up to 25% for daytime clinic consultations and up to 62% for OOH clinic consultations was found.
The share of virtual care (video and telephone) has varied considerably between studies, ranging from 19% to 90% of all consultations, whereas the current study showed that up to 31% of all daytime consultations and up to 18% of all OOH clinic consultations were conducted as virtual care. However, when regular telephone consultations were added to video consultations and extended telephone consultations in the current study, this percentage increased to 53% of all daytime contacts and 78% of all OOH contacts. Several studies also found a countertrend in the total number of visits, which led to an average of near-pre-pandemic levels. The largest relative decrease in contacts in the current study was seen among patients who were vulnerable, but a Canadian study found that the patient groups with the highest care needs, including older patients and patients with high morbidity levels, maintained high levels of care during the course of the pandemic. Likewise, British GPs and nurses have been shown to keep a focus on patients who are vulnerable. Most governments and public authorities encouraged the population to limit contacts with healthcare services and change their help-seeking behaviour. Anxiety in the population about contracting COVID-19 at a health centre may have contributed to the decrease in contacts. Patients who are vulnerable may have had even more restrictive behaviour compared with other patient groups. The pandemic resulted in postponement of most chronic disease monitoring, health checks, preventive care, and screening activity, as these were not deemed ‘essential’. Additionally, the shift towards virtual care may have altered the contact patterns, in particular for older patients and patients with multiple chronic health problems. Finally, children aged 0–9 years experienced the most severe adverse impact on daytime clinic consultations and OOH contacts.
Social isolation because of lockdown measures, such as the closing of schools and day care facilities, in combination with social distancing resulted in a decline in non-COVID-19 infectious diseases, in particular respiratory tract infections in children. Several studies found a prominent decrease in antibiotics prescribing for children aged 0–11 years during the COVID-19 pandemic. Furthermore, patients with respiratory tract infections were kept out of waiting rooms, which most likely affected children most.

Implications for research and practice
Several studies have indicated concerns that the management of patients with chronic illness may now be lagging behind because of the pandemic. Some of the lost contacts are likely to be related to medically unnecessary or non-urgent short-term health problems, as well as to a lower incidence of non-COVID-19 infections because of lockdown measures and social distancing, whereas others may have caused delayed diagnosis and treatment of medical problems or delayed management of chronic illness. As this might have led to increased morbidity and mortality unrelated to COVID-19, future research should address the (long-term) effects of the pandemic on vulnerable patient groups. Furthermore, ways to support vulnerable patient groups in the use of virtual care technology should be investigated.
Analyzing the impact of surgical technique on intraoperative adverse events in laparoscopic Roux-en-Y gastric bypass surgery by video-based assessment
Dataset and annotations
MultiBypass140, a multicentric dataset of 140 LRYGB videos consisting of 70 videos from Strasbourg University Hospital, France (referred to as StrasBypass70) and 70 videos from Inselspital, Bern University Hospital, Switzerland (referred to as BernBypass70), was used in this observational study . Patients undergoing LRYGB surgery in either hospital with complete video recordings of the procedure were retrospectively included in MultiBypass140. Surgical phases and steps of the procedures in the dataset were annotated by two surgeons with over ten years of clinical experience using an ontology of 12 phases, further divided into 46 steps, that was validated for multicentric use (see Fig. , Supplemental Digital Content) . The interobserver reliability of the used ontology was almost perfect (Cohen’s kappa 0.96 ± 0.04 and 0.81 ± 0.10 for phases and steps, respectively) . iAEs within the dataset were annotated by a single surgeon using the SEVerity of intraoperative Events and REctification (SEVERE) index . The surgeon annotator was trained based on the SEVERE index manual version 7. The SEVERE index contains 5 different types of iAEs (bleeding, thermal injury, mechanical injury, ischemic injury, and insufficient closure of anastomosis) and up to 5 severity grades per iAE type (Table ). Rectification of iAEs is assessed in a binary fashion (completely rectified vs. incompletely rectified). iAEs were considered completely rectified if the rectification process was satisfactory or the corresponding injury did not require rectification. The surgeon annotator watched the full MultiBypass140 video dataset and annotated the type, grade, and temporal duration of iAEs using the MOSaiC video annotation platform . The SEVERE index was developed by analyzing 120 videos of LRYGB surgeries and showed excellent interobserver reliability (intraclass correlation coefficient 0.87, 95% CI 0.77–0.92) .
Based on single iAEs, a cumulative SEVERE score was calculated per procedure using the original SEVERE index item weights . Major events were defined as those whose contribution to the SEVERE score was at or above the 90th percentile of all events observed in MultiBypass140, which corresponds to a SEVERE score ≥ 2.7. BernBypass70 included procedures performed by 3 attending surgeons who had completed the LRYGB learning curve. For BernBypass70, postoperative complications until 30 days postoperatively were recorded and graded using the Clavien–Dindo classification . Major complications refer to Clavien–Dindo grades ≥ IIIa, which correspond to re-interventions and/or organ failure necessitating intensive care unit treatment. As StrasBypass70 is an anonymous dataset, no surgeon and patient data were available.

Statistical analysis
Statistical analyses were performed using the SciPy.Stats, Statsmodels, and Scikit-learn libraries for Python. Categorical data are presented as number and frequency, and continuous data as mean ± standard deviation (SD). The Fisher exact test was used to compare iAE frequencies between centers. Normality of distribution was assessed using the Shapiro–Wilk test. The Mann–Whitney U test was used to compare average frequencies of minor and major events. The Pearson correlation coefficient was used to test the correlation of SEVERE score and procedure duration. A p value ≤ 0.05 was considered statistically significant. This study is reported in accordance with the STROBE statement .

Ethical approval
As the videos from Strasbourg University Hospital are anonymous, no institutional review board (IRB) approval was necessary. The use of videos from Inselspital, Bern University Hospital was approved by the IRB (Ethics Committee of the Canton of Bern 2021-01666) and the need to obtain informed consent was waived. All potentially privacy-revealing scenes of MultiBypass140 were deidentified using OoBNet, a publicly available deep learning model for endoscopic video deidentification .
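The 90th-percentile severity threshold and the statistical tests named above can be sketched with scipy.stats. All counts and distributions below are synthetic, illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-event SEVERE score contributions for 797 events; "major"
# events are those at or above the 90th-percentile contribution
event_scores = rng.gamma(1.5, 1.0, 797)
major_threshold = np.quantile(event_scores, 0.90)
is_major = event_scores >= major_threshold

# Fisher exact test on a 2x2 table (illustrative counts: procedures with vs
# without an iAE in a given phase, per center)
odds_ratio, p_fisher = stats.fisher_exact([[23, 47], [5, 65]])

# Mann-Whitney U test: minor-event counts in procedures with vs without a
# major event (synthetic counts)
with_major = rng.poisson(5.0, 52)
without_major = rng.poisson(5.2, 88)
u_stat, p_mw = stats.mannwhitneyu(with_major, without_major)

# Pearson correlation of cumulative SEVERE score with procedure duration
severe = rng.gamma(2.0, 2.0, 140)
duration = 60 + 8 * severe + rng.normal(0, 15, 140)
r, p_corr = stats.pearsonr(severe, duration)
```

In recent SciPy versions, `mannwhitneyu` and `pearsonr` return two-sided p values by default, matching the significance criterion of p ≤ 0.05.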
MultiBypass140 included 140 LRYGB surgeries with a mean ± SD video duration of 92 ± 33 min. StrasBypass70 had a mean ± SD video duration of 111 ± 33 min and BernBypass70 of 73 ± 20 min. Phase occurrence differed significantly between StrasBypass70 and BernBypass70. Significant and systematic workflow variations due to different surgical techniques were observed between the StrasBypass70 and BernBypass70 datasets. Both the Petersen space (99% vs. 16%, p < 0.01) and the mesenteric defect (100% vs. 21%, p < 0.01) were systematically closed, while the greater omentum was routinely divided (94% vs. 36%, p < 0.01) in StrasBypass70 versus BernBypass70. Figure shows the occurrence of phases and steps across the MultiBypass140 dataset. A total of 797 iAEs were analyzed. Thereof, 658 (83%) were bleeding events, 117 (15%) were mechanical injuries, 12 (2%) were thermal injuries, 6 (1%) were ischemic injuries, and 4 (1%) were insufficient closures of anastomosis. On average 5.7 ± 3.2 events occurred per surgery. Baseline characteristics and the distribution of iAE type and grade in StrasBypass70 and BernBypass70 are shown in Table . In total, 25% of procedures (35/140) had one major event and 12% of procedures (17/140) had more than one major event. The average number of minor events was not significantly different between procedures with and without major events (5.0 vs. 5.2, p = 0.76). The average number of minor events taking place before and after a major event did not differ significantly (2.4 vs. 2.7, p = 0.22). The average number of major events taking place before a major event was significantly lower than after a major event (0.8 vs. 1.3, p < 0.01). The progression of the SEVERE score over time stratified by procedures with and without major iAE is displayed in Fig. . Eighty-five percent (675/797) of iAEs were completely rectified. 
Three phases of LRYGB surgery are the most iAE-prone: gastric pouch creation (Phase 2, 293 iAEs, 37%), gastrojejunal anastomosis (Phase 4, 178 iAEs, 22%), and jejunojejunal anastomosis (Phase 8, 145 iAEs, 18%) (Table ). Corresponding to the most iAE-prone phases, the following steps of LRYGB surgery are most iAE-prone: lesser curve dissection (Step 5, 112 iAEs, 14%), gastrojejunal defect closure (Step 19, 118 iAEs, 15%), and jejunojejunal defect closure (Step 33, 85 iAEs, 11%). The frequency of iAE occurrence was compared between centers. StrasBypass70 showed significantly more iAEs in the omentum division (Phase 3, 23 vs. 5 iAEs, p = 0.02), Petersen space closure (Phase 7, 13 vs. 1 iAEs, p = 0.03), mesenteric defect closure (Phase 9, 34 vs. 2 iAEs, p < 0.01), and disassembling phases (Phase 11, 15 vs. 1 iAEs, p = 0.02) when compared to BernBypass70 (Fig. a). The frequencies of iAE were also assessed on the step level. StrasBypass70 showed significantly more iAEs in the vertical stapling (Step 8, 27 vs. 9 iAEs, p = 0.05), Petersen space closure (Step 28, 13 vs. 1 iAEs, p = 0.03), and mesenteric defect closure (Step 37, 33 vs. 2 iAEs, p < 0.01) when compared to BernBypass70 (Fig. b). For both datasets, there was a positive correlation between SEVERE score and procedure duration (Fig. ; StrasBypass70 r = 0.44, p < 0.01; BernBypass70 r = 0.33, p < 0.01). However, in BernBypass70, the operative time increased less per SEVERE score point than in StrasBypass70. Out of 70 patients in BernBypass70, five patients (7%) experienced minor complications and six patients (9%) major postoperative complications. There was no significant difference in the average iAE frequency of patients with postoperative complications and those with an uncomplicated postoperative course (5.5 ± 2.5 vs. 5.1 ± 2.5 iAEs, p = 0.67). When stratifying according to iAE type, there were no significant differences in average bleeding events (4.3 ± 2.2 vs.
4.2 ± 2.1, p = 0.87), mechanical injuries (0.8 ± 1.0 vs. 0.8 ± 1.0, p = 1.00), thermal injuries (0.0 ± 0.0 vs. 0.1 ± 0.4, p = 0.27), and ischemic injuries (0.2 ± 0.4 vs. 0.1 ± 0.2, p = 0.26) between patients with postoperative complications and those with an uncomplicated postoperative course. Although these events were rare and all were rectified, insufficient closure of anastomosis was significantly more frequent in patients with postoperative complications (0.2 ± 0.6 vs. 0.0 ± 0.1, p < 0.01).

iAEs are important to study as they are often considered preventable and may contribute to the development of postoperative complications and influence outcomes. Previous work has shown that iAEs can be associated with increased procedure duration , which in turn could be associated with increased postoperative complications . Other work has suggested that minor events are often erroneously deemed inconsequential and may serve as a potential surrogate for performance assessment . Still, the occurrence of iAEs is intricately intertwined with technical, organizational, human, and patient-related factors, limiting our understanding of the root cause and consequence of these events. This study used video-based assessment to analyze the temporal occurrence, frequency, type, and grade of iAE in LRYGB surgery in a multicentric video dataset, aiming to shed light on the various links between surgical technique and iAEs. Several frameworks have been proposed to classify iAEs in surgery (Table ) . Some classifications measure the cumulative severity of iAEs or the most severe single iAE, assessing the intervention globally , while other classifications, like the SEVERE index, assess multiple individual events during an intervention. Yet other classifications use postoperative outcome measures, like intensive care unit admission or reoperation, to classify the severity of iAEs . 
Classifications relying on postoperative outcomes prohibit prospective applications, such as developing predictors of postoperative complications, as the downstream consequences of the observed iAE are not known beforehand. Only classifications relying solely on intraoperative data at the time point of iAE occurrence are suitable for prospective video-based assessment. Therefore, the SEVERE index was applied in this study. A single-center study from Canada assessed iAE in 120 consecutive LRYGB surgeries with a median of 12 events per patient . Despite using the same assessment tool (SEVERE index), the present study revealed an average of 6 events per patient. Furthermore, the Canadian study does not give information on the temporal occurrence of iAE. Thus, with the inclusion of two high-volume academic centers along with corresponding phase and step information, the present study achieves a new level of detailed understanding of variations in surgical technique and the occurrence of iAEs. This fine-grained analysis of iAE in LRYGB enables data-driven feedback to trainees and researchers aiming to improve surgical technique. The present study identified that gastric pouch, gastrojejunal anastomosis, and jejunojejunal anastomosis creation are the most iAE-prone phases (Fig. ). In total, 77% of all iAEs in the MultiBypass140 dataset occur during those three phases. According to the ontology used, every phase has its corresponding steps. Therefore, the most iAE-prone steps correspond to the most iAE-prone phases. The most iAE-prone steps are lesser curve dissection, gastrojejunal defect closure, and jejunojejunal defect closure. Knowing the iAE-prone phases and steps of LRYGB facilitates case review for educational purposes, quality improvement programs, and mortality & morbidity conferences. Moreover, the knowledge of iAE-prone phases and steps enables safer teaching and focused video documentation of LRYGB procedures. 
Surgical trainees can be closely mentored in iAE-prone phases of LRYGB. Further, to reduce iAEs and overcome the learning curve of LRYGB, iAE-prone steps can be practiced in box model trainers. Whereas in StrasBypass70 the greater omentum is usually divided and the mesenteric defects are systematically closed, this is not the case for most of the surgeries in BernBypass70. In a registry-based cohort study from Sweden, routine division of the greater omentum in LRYGB has been shown to reduce postoperative small bowel obstruction . However, omental division is associated with significantly increased overall iAEs and intraoperative bleeding. The present study confirms this finding: StrasBypass70 had significantly more iAEs in the omentum division phase and the omental transection step than BernBypass70. Another difference in surgical technique between StrasBypass70 and BernBypass70 lies in the routine closure of the Petersen and the jejunal mesenteric defect in StrasBypass70. There is strong evidence that routine closure of mesenteric defects results in reduced incidence of small bowel obstruction and internal hernia . However, as the present study suggests, this comes at the price of an increased frequency of iAEs in the respective phases and steps. The increased iAE frequency of StrasBypass70 compared to BernBypass70 is due to omentum division (Phase 3) and mesenteric defect closure (Phases 7 & 9). By omitting those phases, potentially 16% (70/432) of iAE could be prevented. These apparent trade-offs between improved complication rates despite increases in iAE rates further emphasize the need to study iAEs in conjunction with variations in technique and outcomes. Irrespective of the differences in surgical technique, for both datasets, a correlation of the SEVERE score, which is the cumulative severity of iAEs per procedure, and the procedure duration has been observed (Fig. ). 
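The correlation reported here between the cumulative SEVERE score and procedure duration is a Pearson coefficient (r = 0.44 and r = 0.33 for the two datasets). A minimal pure-Python sketch of the computation; the per-procedure values below are invented purely for illustration and are not taken from either dataset:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical per-procedure data: cumulative SEVERE score vs. operative time (min)
severe = [2, 4, 5, 7, 9, 12]
minutes = [70, 85, 90, 110, 120, 150]
print(f"r = {pearson_r(severe, minutes):.2f}")  # strongly positive for this made-up data
```

Note that the slope of the relationship (minutes per SEVERE point, which the paper reports differs between centers) is a separate regression quantity; r only captures the strength of the linear association.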
The MultiBypass140 dataset does not contain information regarding the technical skill level of the surgeons or the case difficulty. Nevertheless, the SEVERE score is likely a proxy for both. Examining the progression of the SEVERE score over time, our study revealed that major iAEs did not occur in isolation. Most major iAEs have preceding or subsequent major or minor iAEs (Fig. ). However, major events are not correlated with an increased frequency of minor events. Minor events are equally likely to take place before and after a major event. A major event increases the likelihood of subsequent major events. There is a large body of literature showing that prolonged operative duration is associated with postoperative complications . However, it remains unknown whether increased surgery duration is the cause or the consequence of iAEs. As the number of iAEs and the cumulative iAE severity increase with procedure duration, iAEs might be one of the drivers of postoperative complications in patients with prolonged operative time. However, in BernBypass70, where postoperative outcomes were available, a correlation of iAE frequency and 30-day postoperative complications could only be shown for the most severe type of iAE, which is insufficient closure of the anastomosis. iAEs may translate to postoperative complications but do not necessarily have to. Indeed, a vast majority of iAEs (65%) in this study were low-grade bleeding and completely rectified. They are unlikely to translate into postoperative complications. On the other hand, in this study, certain iAEs, such as insufficient closure of anastomosis, though rare and always rectified, were significantly associated with postoperative complications. A need for granular, and potentially independent, analysis of various types of iAEs may be warranted. 
While iAEs may be predictors of postoperative complications, any robust tool will need to account for variations in surgical technique, operator skill, and patient characteristics, among other factors that could severely impact the occurrence of iAEs. A strength of this study is its unprecedented granularity of iAE analysis. To our knowledge, there is no prior work analyzing iAEs with respect to temporal occurrence and surgical technique. The present study is limited by the fact that the patients in StrasBypass70 and BernBypass70 are not matched. As StrasBypass70 is an anonymous dataset, no demographic or outcome variables are available and therefore patients cannot be matched with patients from BernBypass70. Thus, every comparison between the two datasets must be interpreted cautiously. Future studies will need to include more centers with greater variability of surgical technique. To further enhance the understanding of the relationship of iAE with postoperative complications, the inclusion of more video datasets with corresponding outcome data is needed. The MultiBypass140 dataset and corresponding iAE annotations will serve as a basis for the training of a deep learning iAE detection algorithm. This will lead to automated iAE detection in LRYGB and help to prevent the translation of iAE into postoperative complications. In summary, this study used video-based assessment to study the frequency, type, and grade of iAEs in LRYGB regarding surgical phases and steps. Gastric pouch creation, gastrojejunal anastomosis, and jejunojejunal anastomosis are the most iAE-prone phases of LRYGB. Routine division of the greater omentum and closure of the mesenteric defects increase the frequency of iAEs. The cumulative severity of iAEs is correlated with operative time. An association of iAE with postoperative complications was solely shown for the most severe iAE. Below is the link to the electronic supplementary material. 
Supplementary file 1 (PDF 33 KB): Hierarchical structure of the laparoscopic Roux-en-Y gastric bypass phase (P) and step (S) ontology as proposed in [15]. Optional phases and steps have a dashed border.
Exploring the underlying mechanisms of obesity and diabetes and the potential of Traditional Chinese Medicine: an overview of the literature
Introduction

Obesity is the accumulation of excess fat tissue in the body, which can occur at any age, and it is characterized by increased body and fat mass, hormone imbalances, eating patterns, and genetic factors. This condition has significantly contributed to the global burden of chronic diseases, including type 2 diabetes (T2D), cardiovascular disease, and asthma. It is considered a worldwide pandemic, and approximately 2.8 million people die from its complications annually . Directly measuring fat throughout the body is impossible, so the body mass index (BMI) is commonly used to evaluate the relationship between weight and height. Other methods, such as waist circumference, waist-to-hip ratio, skinfold thickness, and bio-impedance, are also utilized to assess overweight and obesity . If individuals fall into the higher BMI ranges, they are more likely to develop other diseases such as T2D, hypertension, cardiovascular diseases, and gallstones. The risk is moderate for those in obesity class 1, severe for those in obesity class 2, and very high for those with extreme obesity, especially if they already have other obesity-related diseases . illustrates the BMI classification recommended by the World Health Organization and the National Institute of Health in the United States. Visceral and subcutaneous fat are the two types of fat in the human body. Visceral fat is deposited around organs such as the liver, pancreas, and kidneys, among others. This type of fat is also known as active fat because it significantly impacts hormonal activity. According to research, visceral fat accumulation can result in metabolic syndrome and insulin resistance, affecting appetite and body fat distribution . Subcutaneous fat, on the other hand, is the fat located beneath the skin and can be felt in areas like the underarms and legs. Fat distribution in the body can result in two body shapes: apple and pear. 
People with an apple shape tend to accumulate fat in the upper region of their waist, abdomen, neck, arms, and shoulders . This physique is predominantly linked with visceral fat, heightening the possibility of developing T2D. Conversely, individuals with a pear-shaped body have fat stored in their hips and thighs, resulting in lower visceral fat levels and a diminished risk of weight-related ailments . Obesity can give rise to numerous complications, such as reproductive problems for both genders, respiratory and cardiovascular illnesses, and issues with the gastrointestinal system and pancreas, as illustrated in . The United States ranked first in obesity, followed by China and India, according to the Organization for Economic Co-operation and Development (OECD) in 2017 . There has been a significant surge in obesity rates from 1999-2000 to 2015-2016 . In 2016, the World Health Organization (WHO) stated that of 1.9 billion overweight adults aged 18 and above, 650 million had obesity . An estimated 25 million individuals die each year due to obesity or being overweight . Moreover, a study by WHO in 2019 found that approximately 38.2 million children under 5 were either overweight or suffering from obesity . The individual’s lifestyle and dietary choices play a crucial role in the development of obesity . Foods that are high in fat and sugar tend to have low micronutrient content, which can lead to weight gain. Excessive intake of processed grains, unhealthy snacks, and sugary drinks may lead to a greater waist-to-hip ratio and heightened accumulation of body fat . Diabetes is a chronic condition characterized by elevated blood sugar levels due to insufficient insulin production or the body’s inability to use insulin effectively. Insulin is a hormone produced by the pancreas that helps the body absorb and use glucose from food as energy . There are three primary types of diabetes: type 1, type 2, and gestational diabetes. 
Type 1 diabetes is an autoimmune disease that occurs when the immune system attacks and destroys the cells in the pancreas responsible for insulin production . This type of diabetes is typically diagnosed during childhood or adolescence and requires lifelong insulin therapy . T2D is the most common form, accounting for roughly 90% of cases, and occurs when the body becomes resistant to insulin’s effects and the pancreas can no longer produce enough insulin to meet demand . T2D is often linked to lifestyle factors such as obesity and physical inactivity but can be managed through diet, exercise, medication, or insulin therapy . Gestational diabetes develops during pregnancy and is caused by hormones that make it harder for the body to use insulin effectively. While this type of diabetes typically resolves after giving birth, women who develop gestational diabetes have an increased risk of developing T2D later in life, as do their children . If diabetes is not managed correctly, it can result in various complications, such as cardiovascular disease, nerve damage, kidney disease, blindness, and amputations. Nevertheless, individuals with diabetes can live a long and healthy life with proper treatment and management, which may involve a blend of medication, lifestyle alterations, and frequent monitoring of blood glucose levels. This can aid in preventing the development of complications associated with diabetes . The focus of this review is to explore the underlying mechanisms of obesity and diabetes and to evaluate the potential of Traditional Chinese Medicine (TCM) in managing these conditions. The study examined the current understanding of the pathophysiology of obesity and diabetes and investigated how TCM may help address these conditions. The aim is to provide a comprehensive and critical analysis of the existing research in this field and to assess the potential of TCM as a complementary or alternative treatment option for obesity and diabetes. 
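The WHO BMI classification referenced in the introduction can be sketched as a small helper function. The cut-offs (18.5, 25, 30, 35, and 40 kg/m²) follow the standard WHO table; the function name and return format are our own:

```python
def bmi_category(weight_kg: float, height_m: float) -> tuple:
    """Compute BMI (kg/m^2) and map it to its WHO category."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        label = "Underweight"
    elif bmi < 25.0:
        label = "Normal weight"
    elif bmi < 30.0:
        label = "Overweight"
    elif bmi < 35.0:
        label = "Obesity class I"
    elif bmi < 40.0:
        label = "Obesity class II"
    else:
        label = "Obesity class III"
    return round(bmi, 1), label

print(bmi_category(70, 1.75))   # (22.9, 'Normal weight')
print(bmi_category(105, 1.70))  # (36.3, 'Obesity class II')
```

As the text notes, BMI is only a screening proxy: waist circumference, waist-to-hip ratio, skinfold thickness, and bio-impedance capture aspects of fat distribution that BMI misses.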
The present study used a comprehensive search in major scientific databases, including PubMed, Scopus, and Web of Science, to identify relevant studies. The search used keywords related to obesity, diabetes, Traditional Chinese Medicine, underlying mechanisms, and their various synonyms. The search was limited to articles published between 2019 and 2023, with no language restrictions. Inclusion criteria encompassed original research articles, review articles, and meta-analyses investigating the relationship between obesity, diabetes, and TCM at the molecular, cellular, and clinical levels. Studies focusing on the mechanisms of action of TCM interventions, such as herbal remedies, acupuncture, and dietary modifications, were also included. Exclusion criteria consisted of studies that primarily focused on non-TCM interventions or those unrelated to the specific topic of interest.

The underlying mechanisms of obesity and diabetes

2.1 Insulin resistance

Insulin resistance refers to the decreased sensitivity of cells to insulin, which increases insulin levels in the bloodstream. It is considered a significant risk factor for obesity and is believed to be critical in its onset and advancement. The mechanisms by which insulin resistance mediates obesity are complex, but several key pathways have been identified . One of the primary mechanisms by which insulin resistance promotes obesity is its effect on the regulation of glucose metabolism. Insulin is an essential hormone for maintaining blood glucose levels within a healthy range. When insulin levels are high, glucose is taken up by cells and used for energy. However, in insulin-resistant individuals, cells become less responsive to insulin, leading to elevated glucose levels in the bloodstream . The body produces more insulin to compensate, increasing insulin levels. These high insulin levels can promote the storage of excess glucose as fat, leading to weight gain and obesity. 
Another important mechanism by which insulin resistance promotes obesity is its effect on the regulation of lipids. Insulin also regulates lipid metabolism; insulin-resistant individuals often have abnormal lipid profiles . For example, they may have elevated triglyceride levels, a type of fat in the blood. High triglyceride levels can promote fat storage in adipose tissue, leading to weight gain and obesity. Insulin resistance can also promote obesity through its effects on regulating appetite and energy expenditure. Insulin regulates several hormones that control appetite and metabolism, including leptin, ghrelin, and adiponectin . Insulin-resistant individuals may have abnormal levels of these hormones, which can lead to increased appetite and reduced energy expenditure. This can lead to a positive energy balance, which promotes weight gain and obesity. Finally, insulin resistance can promote obesity through its effects on inflammation . Chronic low-grade inflammation is linked to insulin resistance, which may encourage the emergence of obesity and other metabolic disorders. Inflammatory cytokines produced by adipose tissue can impair insulin signaling and promote insulin resistance, further exacerbating the cycle of weight gain and metabolic dysfunction .

2.2 Inflammation

Obesity and diabetes are two closely related chronic diseases that are major health concerns worldwide. Research has shown that inflammation plays a crucial role in the development of both conditions. Inflammation occurs naturally when the immune system responds to infection or injury. However, if it persists over a long period, chronic inflammation can result in various health issues, such as obesity and diabetes . One mechanism by which inflammation contributes to obesity and diabetes is the release of cytokines. Cytokines are signaling molecules secreted by immune cells and play a critical role in inflammation. 
In obese individuals, there is an increase in the production of cytokines such as tumor necrosis factor-alpha (TNF-α) and interleukin-6 (IL-6) by adipose tissue . These cytokines have been shown to hinder the action of insulin, which is crucial in regulating blood sugar levels, and to promote insulin resistance, a defining characteristic of T2D. Persistent inflammation associated with obesity thus plays a role in the development of this disease. Furthermore, inflammation triggers the activation of the nuclear factor kappa B (NF-κB) pathway; NF-κB is a transcription factor that regulates gene expression during inflammation. Studies indicate that activation of the NF-κB pathway occurs in the adipose tissue of obese individuals and contributes to insulin resistance . Additionally, NF-κB pathway activation has been linked to the onset of T2D. Inflammation can also contribute to obesity and diabetes by altering the gut microbiome. The gut microbiome refers to the trillions of microorganisms that live in the human gut and play a critical role in maintaining health . Studies have shown that obesity and diabetes are associated with alterations in the gut microbiome, and inflammation may be a contributing factor. Inflammation can lead to changes in the composition of the gut microbiome, which can, in turn, contribute to metabolic dysfunction .

2.3 Hormonal imbalances

Obesity and diabetes are two of the most prevalent chronic diseases worldwide. Hormonal imbalances can contribute to the development and progression of both conditions, as they can affect the regulation of appetite, energy metabolism, and glucose homeostasis. The two main hormones involved in these processes are insulin and leptin. In this section, we examine how imbalances in insulin and leptin can lead to obesity and diabetes . Insulin is a hormone produced by the pancreas that is critical in regulating glucose metabolism. 
Insulin enhances glucose uptake into cells and its subsequent storage as glycogen in the liver and muscles . Obesity and T2D commonly display insulin resistance, characterized by a reduced responsiveness of the body’s cells to insulin. Consequently, the pancreas compensates by producing more insulin to maintain normal blood glucose levels. This can result in hyperinsulinemia, characterized by elevated insulin levels in the bloodstream . Hyperinsulinemia has been linked to obesity, as it facilitates fat storage in adipose tissue and impedes the breakdown of stored fat. It can also contribute to the development of diabetes by inhibiting glucose uptake into cells and promoting glucose production by the liver. Leptin is a hormone produced by adipose tissue that is critical in regulating energy balance. It acts on the hypothalamus, a brain region controlling appetite and energy expenditure . Leptin signals the brain when energy stores are sufficient, reducing appetite and increasing energy expenditure. Leptin resistance is another common feature of obesity. It occurs when the brain becomes less responsive to leptin, and the body produces more leptin to compensate. This condition can lead to hyperleptinemia, characterized by high levels of leptin in the blood . Hyperleptinemia can contribute to obesity by promoting fat storage in adipose tissue and inhibiting the breakdown of stored fat. It can also impair glucose uptake into cells and increase glucose production by the liver, both of which may play a role in the development of diabetes .

2.4 Genetic predisposition

Obesity and diabetes are complex disorders resulting from the interplay of genetic and environmental factors. The genetic predisposition to these disorders has been extensively studied, and several mechanisms have been proposed to explain their mediation by genetics. One mechanism by which genetics mediates obesity is the regulation of appetite and energy expenditure. 
Several genes involved in appetite regulation have been identified, such as the leptin and melanocortin-4 receptor genes . Leptin is a hormone produced by adipose tissue that regulates appetite and energy expenditure by acting on the hypothalamus. Mutations in the leptin gene or its receptor can lead to leptin resistance, resulting in increased appetite and reduced energy expenditure, leading to obesity . Similarly, mutations in the melanocortin-4 receptor gene can also increase hunger and promote obesity. Another mechanism by which genetics mediates obesity is the regulation of adipose tissue distribution. The distribution of fatty tissue in the body, particularly visceral adipose tissue, strongly predicts metabolic disorders such as diabetes and cardiovascular disease . Several genes that play a role in adipose tissue distribution have been identified, such as the FTO and PPARG genes. Variants in the FTO gene have been associated with increased body mass index and obesity. This effect is believed to be mediated by the regulation of adipose tissue distribution . Similarly, variants in the PPARG gene have been associated with increased visceral adipose tissue, insulin resistance, and T2D. The genetics behind the development of diabetes are intricate and involve multiple factors. One way in which genetics contributes to diabetes is through the regulation of insulin secretion and sensitivity. Specific genes, such as the TCF7L2 gene and the insulin receptor gene, are involved in this process. Research has shown that variants in the TCF7L2 gene are strongly linked to a higher risk of T2D, which is believed to occur due to reduced insulin secretion . Similarly, mutations in the insulin receptor gene can lead to insulin resistance, leading to T2D. Genetics can also mediate diabetes through the regulation of glucose homeostasis, another mechanism of this disease . Glucose homeostasis is tightly regulated by a complex interplay between several hormones, including insulin, glucagon, and amylin. 
Several genes involved in glucose homeostasis have been identified, such as the KCNJ11 gene and the HNF1A gene. Variants in the KCNJ11 gene have been associated with impaired insulin secretion and an increased risk of T2D. In contrast, mutations in the HNF1A gene can lead to impaired glucose homeostasis and maturity-onset diabetes of the young .

2.5 Lifestyle factors

Obesity and diabetes are two of the most prevalent chronic diseases worldwide, and their incidence is rising due to lifestyle factors. Obesity, characterized by excessive body fat accumulation, is a significant risk factor for T2D due to insulin resistance and impaired insulin secretion. Lifestyle factors such as sedentary behavior, unhealthy diet, and inadequate sleep are known to mediate the mechanisms of obesity and diabetes . Sedentary behavior, such as prolonged sitting or inactivity, has been linked to increased obesity and diabetes risk. Physical inactivity leads to decreased energy expenditure, reduced muscle mass, and impaired glucose metabolism, contributing to insulin resistance and diabetes development . Regular exercise can improve insulin sensitivity, glucose uptake, and body weight, reducing the risk of diabetes and obesity. An unhealthy diet, characterized by a high intake of refined carbohydrates, saturated and trans fats, and a low intake of fiber, fruits, and vegetables, is another crucial factor in developing obesity and diabetes . The high glycemic load of refined carbohydrates leads to rapid glucose absorption, causing insulin spikes and subsequent insulin resistance. The saturated and trans fats in unhealthy diets contribute to weight gain, insulin resistance, and inflammation, leading to obesity and diabetes. In contrast, a healthy diet that includes whole grains, fruits, vegetables, and moderate amounts of healthy fats can help prevent and manage obesity and diabetes . 
Inadequate sleep, characterized by insufficient duration and poor quality, has been linked to increased obesity and diabetes risk. Sleep deprivation disrupts the regulation of hormones that control appetite and energy metabolism, leading to increased food intake, decreased physical activity, and altered glucose metabolism, contributing to obesity and diabetes . Adequate sleep, on the other hand, can improve insulin sensitivity, reduce appetite, and promote weight loss, reducing the risk of diabetes and obesity .

2.6 Gut microbiota

Obesity and diabetes are chronic metabolic disorders that affect a large portion of the global population. Recent research has highlighted the potential role of gut microbiota in the development and progression of these diseases. Gut microbiota refers to the trillions of microorganisms that inhabit the human gut, including bacteria, viruses, and fungi . These microorganisms regulate various metabolic processes, including energy homeostasis, glucose metabolism, and inflammation. One of the critical mechanisms by which gut microbiota mediates obesity is the regulation of energy balance. Gut bacteria have been shown to influence the amount of energy extracted from the diet by breaking down complex carbohydrates and other nutrients that are resistant to digestion by human enzymes . This produces short-chain fatty acids (SCFAs), which the host can use as an energy source. However, excessive production of SCFAs can lead to increased fat storage and obesity. Another mechanism by which gut microbiota contributes to the development of obesity is through the regulation of appetite and satiety . Studies have shown that gut bacteria can influence the production of various hormones and neurotransmitters that regulate hunger and food intake, such as ghrelin, leptin, and serotonin. Disruption of this regulation can increase food consumption and cause weight gain . 
The gut microbiota, aside from being linked to obesity, has also been connected to diabetes development. Gut bacteria play a critical role in regulating glucose metabolism, one of the fundamental mechanisms contributing to diabetes development . Various pathways have been discovered through which gut bacteria impact glucose metabolism, such as the production of SCFAs that improve insulin sensitivity and the regulation of intestinal permeability that affects glucose absorption from the gut. Moreover, gut microbiota contributes to diabetes development through the regulation of inflammation, a hallmark of diabetes. Gut bacteria play a vital role in modulating the inflammatory response in the gut and systemic circulation . Dysbiosis, or an imbalance in the gut microbiota, has been associated with increased pro-inflammatory cytokine levels that can cause insulin resistance and impaired glucose metabolism .

2.7 Environmental factors

Obesity and diabetes are complex conditions that various environmental factors can influence. These factors include dietary habits, physical activity, stress, pollution, and socioeconomic status. This section explores how environmental factors can mediate obesity and diabetes. One primary environmental factor contributing to obesity and diabetes is dietary habits . The risk of developing obesity and diabetes can be heightened by consuming diets high in calories, fats, and sugars. Excess calories from these diets are stored as fat, leading to weight gain, and can also cause insulin resistance, which impairs glucose uptake by cells and can lead to the development of diabetes . Moreover, a lack of nutrient-dense foods, such as fruits and vegetables, can lead to deficiencies in essential vitamins and minerals, further exacerbating the risk of obesity and diabetes . Another environmental factor that can contribute to obesity and diabetes is physical activity . 
Sedentary lifestyles, such as spending extended periods sitting, can reduce metabolic rate, decreasing calorie burning and promoting weight gain. Additionally, physical inactivity can increase insulin resistance, making it more difficult for cells to use glucose effectively, thereby increasing the risk of diabetes. Conversely, regular physical activity can improve insulin sensitivity, aid weight management, and reduce the risk of developing both obesity and diabetes . Stress is another environmental factor that can contribute to obesity and diabetes. Chronic stress can lead to the release of cortisol, a hormone that increases appetite and can cause weight gain. Moreover, cortisol can impair glucose uptake by cells, leading to insulin resistance and increasing the risk of diabetes. Therefore, effective stress management techniques, such as mindfulness, exercise, and social support, can help reduce the risk of developing obesity and diabetes . Environmental factors such as pollution and socioeconomic status can also play a role in developing obesity and diabetes. Air pollution has been associated with insulin resistance and the onset of diabetes, while poverty, limited availability of healthy food choices, and unsafe physical activity environments can also contribute to obesity and diabetes . 
However, in insulin-resistant individuals, cells become less responsive to insulin, leading to elevated glucose levels in the bloodstream . The body produces more insulin to compensate, increasing insulin levels. These high insulin levels can promote the storage of excess glucose as fat, leading to weight gain and obesity. Another important mechanism by which insulin resistance promotes obesity is through its effects on the regulation of lipids. Insulin also regulates lipid metabolism; insulin-resistant individuals often have abnormal lipid profiles . For example, they may have elevated triglyceride levels, a type of fat in the blood. High triglyceride levels can promote fat storage in adipose tissue, leading to weight gain and obesity. Insulin resistance can also promote obesity through its effects on regulating appetite and energy expenditure. Insulin regulates several hormones that control appetite and metabolism, including leptin, ghrelin, and adiponectin . Insulin-resistant individuals may have abnormal levels of these hormones, which can lead to increased appetite and reduced energy expenditure. This can lead to a positive energy balance, which promotes weight gain and obesity. Finally, insulin resistance can promote obesity through its effects on inflammation . Chronic low-grade inflammation is linked to insulin resistance, which may encourage the emergence of obesity and other metabolic disorders. Inflammatory cytokines produced by adipose tissue can impair insulin signaling and promote insulin resistance, further exacerbating the cycle of weight gain and metabolic dysfunction . Inflammation Obesity and diabetes are two closely related chronic diseases that are major health concerns worldwide. Research has shown that inflammation plays a crucial role in developing both conditions. When the immune system responds to infection or injury, inflammation occurs naturally. 
However, if it persists over a long period, chronic inflammation can result in various health issues, such as obesity and diabetes . One mechanism by which inflammation contributes to obesity and diabetes is releasing cytokines. Cytokines are signaling molecules secreted by immune cells and play a critical role in inflammation. In obese individuals, there is an increase in the production of cytokines such as tumor necrosis factor-alpha (TNF-α) and interleukin-6 (IL-6) by adipose tissue . The cytokines found to hinder the function of insulin, which is crucial in regulating blood sugar levels, have been demonstrated to promote insulin resistance, a defining characteristic of T2D. Persistent inflammation associated with obesity plays a role in developing this disease. Furthermore, inflammation also triggers the activation of the nuclear factor kappa B (NF-κB) pathway, which is a transcription factor that regulates gene expression during inflammation. Studies indicate that activation of the NF-κB pathway occurs in the adipose tissue of obese individuals and contributes to insulin resistance . Additionally, NF-κB pathway activation has been linked to the onset of T2D. Inflammation can also contribute to obesity and diabetes by altering the gut microbiome. The gut microbiome refers to the trillions of microorganisms that live in the human gut and play a critical role in maintaining health . Studies have shown that obesity and diabetes are associated with alterations in the gut microbiome, and inflammation may be a contributing factor. Inflammation can lead to changes in the composition of the gut microbiome, which can, in turn, contribute to metabolic dysfunction . Hormonal imbalances Obesity and diabetes are two of the most prevalent chronic diseases worldwide. Hormonal imbalances can contribute to the development and progression of both conditions, as they can affect the regulation of appetite, energy metabolism, and glucose homeostasis. 
The two main hormones involved in these processes are insulin and leptin. In this essay, we will examine how imbalances in insulin and leptin can lead to obesity and diabetes . Insulin is a hormone the pancreas produces that is critical in regulating glucose metabolism. The process of glucose uptake into cells and its subsequent storage as glycogen in the liver and muscles is enhanced by insulin . Obesity and T2D commonly display insulin resistance, characterized by a reduced responsiveness of the body’s cells to insulin. Consequently, the pancreas compensates by producing more insulin to maintain normal blood glucose levels. This can result in hyperinsulinemia, characterized by elevated insulin levels in the bloodstream . Hyperinsulinemia has been linked to obesity, as it facilitates fat storage in adipose tissue and impedes the breakdown of stored fat. It can also contribute to the development of diabetes by inhibiting glucose uptake into cells and promoting glucose production by the liver. Leptin is a hormone adipose tissue produces critical in regulating energy balance. It acts on the hypothalamus, a brain region controlling appetite and energy expenditure . Leptin signals the brain when energy stores are sufficient, reducing appetite and increasing energy expenditure. Leptin resistance is another common feature of obesity. It occurs when the brain becomes less responsive to leptin, and the body produces more leptin to compensate. This condition can lead to hyperleptinemia, characterized by high levels of leptin in the blood . Hyperleptinemia can contribute to obesity by promoting fat storage in adipose tissue and inhibiting the breakdown of stored fat. Impaired glucose uptake into cells and increased glucose production by the liver, both of which can be caused by it, may also play a role in the development of diabetes . Genetic predisposition Obesity and diabetes are complex disorders resulting from genetic and environmental factors interplay. 
The genetic predisposition to these disorders has been extensively studied, and several mechanisms have been proposed to explain their mediation by genetics. One tool of obesity mediation by genetics is regulating appetite and energy expenditure. Several gene roles in appetite regulation have been identified, such as the leptin and melanocortin-4 receptor genes . Leptin is a hormone adipose tissue produces that regulates appetite and energy expenditure by acting on the hypothalamus. Mutations in the leptin gene or its receptor can lead to leptin resistance, resulting in increased appetite and reduced energy expenditure, leading to obesity . Similarly, mutations in the melanocortin-4 receptor gene can also increase hunger and obesity. Another mechanism of obesity mediation by genetics is regulating adipose tissue distribution. The distribution of fatty tissue in the body, particularly visceral adipose tissue, strongly predicts metabolic disorders such as diabetes and cardiovascular disease . Several genes that play a role in adipose tissue distribution have been identified, such as the FTO and PPARG genes. Variants in the FTO gene have been associated with increased body mass index and obesity. This effect is believed to be mediated by regulating adipose tissue distribution . Similarly, variants in the PPARG gene have been associated with increased visceral adipose tissue, insulin resistance, and T2D. The genetics behind the development of diabetes are intricate and involve multiple factors. One way in which genetics contributes to diabetes is through the regulation of insulin secretion and sensitivity. Specific genes, such as the TCF7L2 gene and the insulin receptor gene, are involved in this process. Research has shown that variants in the TCF7L2 gene are strongly linked to a higher risk of T2D, which is believed to occur due to reduced insulin secretion . Similarly, mutations in the insulin receptor gene can lead to insulin resistance, leading to T2D. 
Genetics can also mediate diabetes by controlling the maintenance of glucose balance, which is another disease mechanism . Glucose homeostasis is tightly regulated by a complex interplay between several hormones, including insulin, glucagon, and amylin. Several gene roles in glucose homeostasis have been identified, such as the KCNJ11 gene and the HNF1A gene. Variants in the KCNJ11 gene have been associated with impaired insulin secretion and an increased risk of T2D. In contrast, mutations in the HNF1A gene can lead to impaired glucose homeostasis and maturity-onset diabetes of the young . Lifestyle factors Obesity and diabetes are two of the most prevalent chronic diseases worldwide, and their incidence is rising due to lifestyle factors. Obesity, characterized by excessive body fat accumulation, is a significant risk factor for T2D due to insulin resistance and impaired insulin secretion. Lifestyle factors such as sedentary behavior, unhealthy diet, and inadequate sleep are known to mediate the mechanisms of obesity and diabetes . Sedentary behavior, such as prolonged sitting or inactivity, has been linked to increased obesity and diabetes risk. Physical inactivity leads to decreased energy expenditure, reduced muscle mass, and impaired glucose metabolism, contributing to insulin resistance and diabetes development . Regular exercise can improve insulin sensitivity, glucose uptake, and body weight, reducing the risk of diabetes and obesity. An unhealthy diet, characterized by a high intake of refined carbohydrates, saturated and trans fats, and a low fiber intake of fruits and vegetables, is another crucial factor in developing obesity and diabetes . The high glycemic load of refined carbohydrates leads to rapid glucose absorption, causing insulin spikes and subsequent insulin resistance. The saturated and trans fats in unhealthy diets contribute to weight gain, insulin resistance, and inflammation, leading to obesity and diabetes. 
In contrast, a healthy diet that includes whole grains, fruits, vegetables, and moderate amounts of healthy fats, can help prevent and manage obesity and diabetes . Inadequate sleep, characterized by insufficient duration and poor quality, has been linked to increased obesity and diabetes risk. Sleep deprivation disrupts the regulation of hormones that control appetite and energy metabolism, leading to increased food intake, decreased physical activity, and altered glucose metabolism, contributing to obesity and diabetes . Adequate sleep, on the other hand, can improve insulin sensitivity, reduce appetite, and promote weight loss, reducing the risk of diabetes and obesity . Gut microbiota Obesity and diabetes are chronic metabolic disorders that affect a large portion of the global population. Recent research has highlighted the potential role of gut microbiota in developing and progressing these diseases. Gut microbiota refers to the trillions of microorganisms that inhabit the human gut, including bacteria, viruses, and fungi . These microorganisms regulate various metabolic processes, including energy homeostasis, glucose metabolism, and inflammation. One of the critical mechanisms by which gut microbiota mediates obesity is regulating energy balance. Gut bacteria have been shown to influence the amount of energy extracted from the diet by breaking down complex carbohydrates and other nutrients that are resistant to digestion by human enzymes . This produces short-chain fatty acids (SCFAs), which the host can use as an energy source. However, excessive production of SCFAs can lead to increased fat storage and obesity. Another mechanism by which gut microbiota contributes to the development of obesity is through the regulation of appetite and satiety . Studies have shown that gut bacteria, such as ghrelin, leptin, and serotonin, can produce various hormones and neurotransmitters that regulate hunger and food intake. 
Disruption of the regulation can cause an increase in food consumption and weight gain, as mentioned in reference . The gut microbiota, aside from being linked to obesity, has also been connected to diabetes development. The gut bacteria play a critical role in regulating glucose metabolism, one of the fundamental mechanisms contributing to diabetes development, as explained in reference . Various pathways have been discovered through which gut bacteria impact glucose metabolisms, such as the production of SCFAs that improve insulin sensitivity and the regulation of intestinal permeability that affects glucose absorption from the gut. Moreover, gut microbiota contributes to diabetes development through inflammation regulation, a hallmark of diabetes. Gut bacteria play a vital role in modulating the inflammatory response in the gut and systemic circulation . Dysbiosis, or an imbalance in the gut microbiota, has been associated with increased pro-inflammatory cytokine levels that can cause insulin resistance and impaired glucose metabolism . Environmental factors Obesity and diabetes are complex conditions that various environmental factors can influence. These factors include dietary habits, physical activity, stress, pollution, and socioeconomic status. This response will explore how environmental factors can mediate obesity and diabetes. One primary ecological factor contributing to obesity and diabetes is dietary habits . The risk of developing obesity and diabetes can be heightened by consuming diets high in calories, fats, and sugars. Excess calories from these diets are stored as fat, leading to weight gain, and can also cause insulin resistance, which impairs glucose uptake by cells and can lead to the development of diabetes . Moreover, a lack of nutrient-dense foods, such as fruits and vegetables, can lead to deficiencies in essential vitamins and minerals, further exacerbating the risk of obesity and diabetes . 
Another environmental factor that can contribute to obesity and diabetes is physical activity . Sedentary lifestyles, such as spending extended periods sitting, can reduce metabolic rate, decreasing calorie burning and weight gain. Additionally, physical inactivity can increase insulin resistance, making it more difficult for cells to use glucose effectively, thereby increasing the risk of diabetes. Conversely, regular physical activity can improve insulin sensitivity, aid weight management, and reduce the risk of developing both obesity and diabetes . Stress is another environmental factor that can contribute to obesity and diabetes. Chronic stress can lead to releasing cortisol, a hormone that increases appetite and can cause weight gain. Moreover, cortisol can impair glucose uptake by cells, leading to insulin resistance and increasing the risk of diabetes. Therefore, effective stress management techniques, such as mindfulness, exercise, and social support, can help reduce the risk of developing obesity and diabetes . Environmental factors such as pollution and socioeconomic status can also play a role in developing obesity and diabetes. Insulin resistance and the onset of diabetes have been associated with air pollution. In contrast, poverty, limited availability of healthy food choices, and unsafe physical activity environments can contribute to obesity and diabetes . Role of Traditional Chinese Medicine (TCM) Traditional Chinese Medicine (TCM) has a long history of use for treating various health conditions, including obesity and diabetes. TCM adopts a comprehensive perspective towards well-being, perceiving the body as an intricate network of interrelated components that both internal and external factors can influence. TCM uses a combination of modalities, including herbal medicine, acupuncture, dietary therapy, and lifestyle changes, to promote balance and harmony within the body and restore optimal health . 
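The obesity and insulin-resistance concepts discussed in the preceding sections are typically quantified in clinical studies, including studies of TCM interventions, using body mass index (BMI) and surrogate indices of insulin resistance such as HOMA-IR. As a minimal illustrative sketch (the cutoffs are the standard WHO adult BMI categories; HOMA-IR is the conventional approximation fasting insulin (µU/mL) × fasting glucose (mmol/L) / 22.5; the patient values shown are hypothetical):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Standard WHO adult BMI categories."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal weight"
    if value < 30.0:
        return "overweight"
    return "obese"

def homa_ir(fasting_insulin_uU_ml: float, fasting_glucose_mmol_l: float) -> float:
    """HOMA-IR, a widely used surrogate index of insulin resistance."""
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

# Hypothetical patient: 95 kg, 1.75 m, fasting insulin 18 µU/mL, fasting glucose 6.0 mmol/L
print(round(bmi(95, 1.75), 1), bmi_category(bmi(95, 1.75)))  # 31.0 obese
print(round(homa_ir(18.0, 6.0), 2))                          # 4.8
```

A HOMA-IR well above the cutoff of roughly 2.5 often used in the literature would be consistent with the insulin resistance described above; indices of this kind are how weight- and glucose-related outcomes of the interventions reviewed here are usually reported.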
Obesity is a growing health problem worldwide, and TCM has been used for centuries to address this condition. In TCM, obesity is seen as a result of imbalances within the body, such as dampness and phlegm accumulation, qi stagnation, and spleen and stomach deficiency . TCM practitioners will first evaluate the patient’s constitution and identify any underlying imbalances. Then, they will develop a personalized treatment plan that may include a combination of acupuncture, herbal medicine, and dietary therapy.
3.1 Acupuncture
For thousands of years, acupuncture has been used in traditional Chinese medicine to address various conditions, including diabetes and obesity. This technique entails the insertion of thin needles into particular locations on the body, with the belief that it enhances the flow of energy, or Qi, bringing about equilibrium within the body . In tackling obesity, acupuncture has shown promise in clinical studies by effectively decreasing body weight and body mass index (BMI). The needles are inserted into specific points on the body, including the ears, stomach, and spleen, which are thought to regulate appetite and metabolism. Acupuncture may also help to reduce inflammation in the body, which can contribute to weight gain and obesity-related health problems . Similarly, acupuncture can also be a helpful treatment for diabetes. By enhancing insulin sensitivity and decreasing insulin resistance, acupuncture has the potential to regulate blood sugar levels. The needles are typically inserted into points on the hands, feet, and ears, connected to the pancreas and other organs involved in blood sugar regulation . Furthermore, acupuncture can help to alleviate some of the symptoms of diabetes, such as neuropathy and nerve pain. Acupuncture has demonstrated the ability to trigger the production of endorphins, natural chemicals in the body that alleviate pain.
This could potentially enhance diabetes management and decrease the likelihood of complications by minimizing discomfort and promoting overall health . As part of its anti-inflammatory effect, acupuncture deforms connective tissue and increases the release of various molecules at acupoints, further activating the NF-κB, MAPK, and ERK pathways in mast cells, fibroblasts, keratinocytes, and monocytes/macrophages. Acupoints activated by acupuncture have somatic afferents that send sensory signals to neurons in the spinal cord, brainstem, and hypothalamus. After this information is integrated in the brain, acupuncture stimulates multiple neuro-immune pathways, such as the hypothalamus-pituitary-adrenal axis, the vagus-adrenal medulla-dopamine pathway, the cholinergic anti-inflammatory pathway, and sympathetic pathways, which ultimately act on immune cells via the release of critical neurotransmitters and hormones .
3.2 Herbal medicine
Traditional Chinese Medicine (TCM) has been used for thousands of years to treat various health conditions, including obesity and diabetes. In TCM, obesity is viewed as a result of an imbalance in the body’s energy, or “qi,” while diabetes is seen as a disorder of the body’s “yin” and “yang.” TCM practitioners use a variety of herbal medicines to address these imbalances and help promote weight loss and better blood sugar control . One commonly used herb in TCM for treating obesity and diabetes is ginseng. Ginseng has been found to have anti-obesity and anti-diabetic effects, as it can help reduce insulin resistance, improve glucose metabolism, and increase energy expenditure . Additionally, it has been shown to positively affect the gut microbiota, which can also contribute to weight loss . Another popular herb used in TCM for treating obesity and diabetes is bitter melon. Compounds present in bitter melon aid in regulating blood sugar levels and enhancing insulin sensitivity.
It has also been found to have a mild appetite-suppressant effect, which can aid in weight loss . Cinnamon is also commonly used in TCM for its anti-diabetic properties. It can help reduce fasting blood sugar levels, improve insulin sensitivity, and reduce inflammation. Cinnamon has also been shown to positively affect lipid metabolism, which can contribute to weight loss . In addition to these herbs, TCM practitioners may also recommend other lifestyle modifications, such as dietary changes and exercise, to help address obesity and diabetes. For example, TCM dietary guidelines often emphasize consuming nutrient-dense whole foods, such as vegetables, fruits, and whole grains, while limiting the intake of refined sugars and processed foods . Exercise, particularly low-impact activities like tai chi and qigong, can also help improve energy flow and promote overall health. While TCM may not cure obesity and diabetes, it can provide a valuable adjunct to conventional medical treatments. By addressing underlying imbalances in the body’s energy and promoting healthy lifestyle habits, TCM can help patients achieve better weight management and blood sugar control. As always, anyone considering herbal medicines should consult a qualified TCM practitioner or medical professional to ensure the treatment is safe and appropriate for their needs .
3.3 Dietary therapy
For centuries, Traditional Chinese Medicine (TCM) has relied on dietary therapy to address various health conditions, such as obesity and diabetes. TCM views the body as a whole and focuses on restoring balance and harmony between different organ systems. In TCM, obesity and diabetes are seen as imbalances in the body’s energy, or Qi, and can be treated through changes in diet and lifestyle . In TCM, obesity is often associated with excessive dampness and phlegm in the body, which can be caused by an unhealthy diet and lack of exercise.
The dietary therapy for obesity in TCM involves reducing the intake of fatty, greasy, and sweet foods while increasing the consumption of cooling foods that can help disperse dampness, such as bitter melon, lotus leaf, and green tea. Eating smaller, more frequent meals and avoiding eating late at night are also recommended . Regular exercise, especially low-impact activities like walking and tai chi, is also recommended to help increase circulation and burn fat. Diabetes, on the other hand, is seen as a deficiency of Qi and Yin in the body. Qi refers to the body’s energy, while Yin refers to the body’s moisture and nourishment. In TCM, diabetes is often treated with dietary therapy, acupuncture, and herbal medicine . The dietary treatment for diabetes in TCM involves reducing the intake of sweet and greasy foods while increasing the consumption of foods rich in Qi and Yin, such as yams, sweet potatoes, and lotus seeds. Eating smaller, more frequent meals and avoiding raw or cold foods are also recommended. Regular exercise, such as brisk walking or cycling, is also advised to help improve circulation and regulate blood sugar levels . Overall, TCM dietary therapy for obesity and diabetes focuses on achieving balance and harmony in the body rather than simply treating the condition’s symptoms. By making healthy nutritional and lifestyle choices and receiving acupuncture and herbal medicine treatments, individuals can help restore their body’s natural balance and improve their overall health and well-being .
3.4 Qi gong and tai chi
Qi Gong and Tai Chi are traditional Chinese practices that can be used as alternative therapies for treating obesity and diabetes. These practices are based on the principles of Traditional Chinese Medicine, which views the body as a complex system of interdependent parts that must be in harmony for optimal health.
Qi Gong and Tai Chi are gentle exercises incorporating breathing techniques, mindfulness, and gentle movements to improve overall health and well-being . Obesity is a major global public health concern linked to many health complications, such as diabetes. Qi Gong and Tai Chi are effective in helping individuals manage their weight and reduce the risk of developing obesity-related complications . These practices promote weight loss by improving metabolism, reducing stress, and increasing physical activity. One of the primary ways that Qi Gong and Tai Chi help with obesity is through stress reduction. Chronic stress significantly contributes to weight gain, as it can cause hormonal imbalances that increase appetite and decrease metabolism . Qi Gong and Tai Chi help to reduce stress by promoting relaxation, improving sleep quality, and reducing anxiety and depression. Additionally, Qi Gong and Tai Chi are low-impact exercises that individuals of all fitness levels can practice. These practices can improve cardiovascular health, muscle strength, and flexibility, contributing to weight loss and overall health. Diabetes is a metabolic disorder that affects millions of people worldwide. While there is no cure for diabetes, lifestyle changes, including exercise, can help manage the disease. Qi Gong and Tai Chi effectively support diabetes management by improving blood glucose control, reducing inflammation, and improving cardiovascular health . One of the primary ways that Qi Gong and Tai Chi help manage diabetes is by improving blood glucose control. These techniques aid in managing blood sugar levels by enhancing insulin sensitivity and diminishing insulin resistance. Moreover, the practice of Qi Gong and Tai Chi can potentially decrease inflammation, which plays a crucial role in the onset and advancement of diabetes . Qi Gong and Tai Chi are thus effective traditional Chinese medicine options for managing obesity and diabetes.
These practices promote overall health and well-being by reducing stress, improving physical activity levels, and helping to manage chronic diseases. Additionally, Qi Gong and Tai Chi are safe and gentle exercises that individuals of all ages and fitness levels can practice .
3.5 Massage and bodywork
Massage and bodywork therapies are integral to Traditional Chinese Medicine (TCM), a holistic healthcare system that focuses on restoring the balance of the body’s vital energy, or Qi. TCM offers a range of therapeutic options for treating various health conditions, including obesity and diabetes . Obesity is a metabolic disorder caused by an imbalance between energy intake and expenditure, leading to excessive accumulation of body fat . TCM views obesity as a result of Qi stagnation, dampness accumulation, and spleen and stomach weakness. Massage and bodywork therapies can help to regulate Qi flow, improve digestion, and promote lymphatic drainage, which can help to reduce body fat and improve metabolic function. Some typical massage and bodywork techniques used in TCM for obesity include acupressure, cupping, and gua sha . Acupressure involves applying pressure to specific points on the body, called acupoints, which correspond to different organs and systems. By stimulating these points, acupressure can help to regulate the function of the spleen and stomach, promote digestion, and reduce food cravings. Cupping is a technique in which cups are placed on the skin to create a suction effect . This can help to stimulate blood flow and lymphatic drainage, which can help to eliminate excess fluids and toxins from the body. Gua sha involves using a smooth-edged tool to scrape the skin, which can help to improve circulation, reduce inflammation, and promote healing . Diabetes is a metabolic condition marked by elevated blood sugar levels, which can result in various complications such as nerve damage, kidney disease, and cardiovascular disease .
TCM views diabetes as a result of Qi deficiency, Yin deficiency, and dampness accumulation. Massage and bodywork therapies can help to tonify Qi and Yin, regulate blood sugar levels, and improve circulation . Some standard massage and bodywork techniques used in TCM for diabetes include acupressure, moxibustion, and foot reflexology . Moxibustion involves burning a herb called mugwort over specific acupoints, which can help to tonify Qi and improve circulation. Foot reflexology is the practice of exerting pressure on particular points on the feet that correspond to various organs and systems within the body. By stimulating these points, foot reflexology can assist in regulating blood sugar levels and enhancing overall well-being .
3.6 Tauroursodeoxycholic acid (TUDCA)
Tauroursodeoxycholic acid (TUDCA) is a naturally occurring hydrophilic bile acid that has been used for generations in Chinese medicine. In chemical terms, TUDCA is the taurine conjugate of ursodeoxycholic acid (UDCA). In modern pharmacology, this drug has been approved by the Food and Drug Administration (FDA) for use in treating primary biliary cholangitis . Recent research indicates that TUDCA’s mechanisms of action extend beyond hepatobiliary conditions. Owing to its cytoprotective effect, TUDCA has been demonstrated to have potential therapeutic applications in various disease models, including neurodegenerative diseases, obesity, and diabetes. TUDCA has been identified as a chemical chaperone because the mechanisms underlying its cytoprotective action are mostly associated with regulating the unfolded protein response (UPR) and reducing endoplasmic reticulum (ER) stress. In addition, TUDCA has been shown to reduce oxidative stress, control apoptosis, and reduce inflammation in numerous in vitro and in vivo models of different diseases .
3.7 Therapeutic effects of western medicine The weight-related effects of drugs used to treat T2D vary; some show a beneficial effect on weight loss, some have weight-neutral effects, and some result in a gain in weight. Examining the currently available drug profile is crucial when weight loss is a priority to identify prospective areas for improving blood-glucose control and weight management. discusses several classes of drugs, including metformin, SGLT2 inhibitors, and GLP-1 agonists, and how they affect weight loss in individuals with T2D . Acupuncture For thousands of years, acupuncture has been a traditional Chinese medicine method to address various conditions, such as diabetes and obesity. This technique entails the insertion of thin needles into particular locations on the body, with the belief that it enhances the flow of energy or Qi, bringing about equilibrium within the body . In tackling obesity, acupuncture has shown promise in clinical studies by effectively decreasing body weight and body mass index (BMI). The needles are inserted into specific points on the body, including the ears, stomach, and spleen, which are thought to regulate appetite and metabolism. Acupuncture may also help to reduce inflammation in the body, which can contribute to weight gain and obesity-related health problems . Similarly, acupuncture can also be a helpful treatment for diabetes. By enhancing insulin sensitivity and decreasing insulin resistance, acupuncture has the potential to regulate blood sugar levels. The needles are typically inserted into points on the hands, feet, and ears, connected to the pancreas and other organs involved in blood sugar regulation . Furthermore, acupuncture can help to alleviate some of the symptoms of diabetes, such as neuropathy and nerve pain. Acupuncture has demonstrated the ability to trigger the production of endorphins, which are natural chemicals in the body that alleviate pain. 
This could potentially enhance diabetes management and decrease the likelihood of complications by minimizing discomfort and promoting overall health . Acupuncture deforms connective tissue and increases the release of different molecules in acupoints as part of its anti-inflammatory impact, further activating the NF-κB, MAPK, and ERK pathways in mast cells, fibroblasts, keratinocytes, and monocytes/macrophages. Acupuncture-activated acupoints have somatic afferents that send sensory signals to the spinal cord, brainstem, and hypothalamus neurons. Acupuncture stimulates multiple neuro-immune pathways after information integration in the brain, such as the hypothalamus-pituitary-adrenal axis, which ultimately acts on immune cells via the release of critical neurotransmitters and hormones, the vagus-adrenal medulla-dopamine, the cholinergic anti-inflammatory, and sympathetic pathways . Herbal medicine Traditional Chinese Medicine (TCM) has been used for thousands of years to treat various health conditions, including obesity and diabetes. In TCM, obesity is viewed as a result of an imbalance in the body’s energy or “qi,” while diabetes is seen as a disorder of the body’s “yin” and “yang.” TCM practitioners use a variety of herbal medicines to address these imbalances and help promote weight loss and better blood sugar control . One commonly used herb in TCM for treating obesity and diabetes is ginseng. Ginseng has been found to have anti-obesity and anti-diabetic effects, as it can help reduce insulin resistance, improve glucose metabolism, and increase energy expenditure . Additionally, it has been shown to positively affect the gut microbiota, which can also contribute to weight loss . Another popular herb used in TCM for treating obesity and diabetes is bitter melon. Compounds present in bitter melon aid in regulating blood sugar levels and enhancing insulin sensitivity. 
It also has been found to have a mild appetite suppressant effect, which can aid in weight loss . Cinnamon is also commonly used in TCM for its anti-diabetic properties. It can help reduce fasting blood sugar levels, improve insulin sensitivity, and reduce inflammation. Cinnamon has also been shown to positively affect lipid metabolism, which can contribute to weight loss . In addition to these herbs, TCM practitioners may also recommend other lifestyle modifications, such as dietary changes and exercise, to help address obesity and diabetes. For example, TCM dietary guidelines often emphasize consuming nutrient-dense whole foods, such as vegetables, fruits, and whole grains, while limiting the intake of refined sugars and processed foods . Low-impact activities like tai chi and qigong can also help improve energy flow and promote overall health. While TCM may not cure obesity and diabetes, it can provide a valuable adjunct to conventional medical treatments. By addressing underlying imbalances in the body’s energy and promoting healthy lifestyle habits, TCM can help patients achieve better weight management and blood sugar control. As always, anyone considering herbal medicines should consult a qualified TCM practitioner or medical professional to ensure the treatment is safe and appropriate for their needs . Dietary therapy For centuries, Traditional Chinese Medicine (TCM) has relied on dietary therapy to address various health conditions, such as obesity and diabetes. TCM views the body as a whole and focuses on restoring balance and harmony between different organ systems. In TCM, obesity and diabetes are seen as imbalances in the body’s energy, or Qi, and can be treated through changes in diet and lifestyle . In TCM, obesity is often associated with excessive dampness and phlegm in the body, which can be caused by an unhealthy diet and lack of exercise.
The dietary therapy for obesity in TCM involves reducing the intake of fatty, greasy, and sweet foods while increasing the consumption of cooling foods that can help disperse dampness, such as bitter melon, lotus leaf, and green tea. Eating smaller, more frequent meals and avoiding eating late at night are also recommended . Regular exercise, especially low-impact activities like walking and tai chi, is also recommended to help increase circulation and burn fat. Diabetes, on the other hand, is seen as a deficiency of Qi and Yin in the body. Qi refers to the body’s energy, while Yin refers to the body’s moisture and nourishment. In TCM, diabetes is often treated with dietary therapy, acupuncture, and herbal medicine . The dietary treatment for diabetes in TCM involves reducing the intake of sweet and greasy foods while increasing the consumption of foods rich in Qi and Yin, such as yams, sweet potatoes, and lotus seeds. Eating smaller, more frequent meals and avoiding raw or cold foods are also recommended. Regular exercise, such as brisk walking or cycling, is also advised to help improve circulation and regulate blood sugar levels . Overall, TCM dietary therapy for obesity and diabetes focuses on achieving balance and harmony in the body rather than simply treating the condition’s symptoms. By making healthy nutritional and lifestyle choices and receiving acupuncture and herbal medicine treatments, individuals can help restore their body’s natural balance and improve their overall health and well-being . Qi gong and tai chi Qi Gong and Tai Chi are traditional Chinese practices that can be used as alternative therapies for treating obesity and diabetes. These practices are based on the principles of Traditional Chinese Medicine, which views the body as a complex system of interdependent parts that must be in harmony for optimal health.
Qi Gong and Tai Chi are gentle exercises incorporating breathing techniques, mindfulness, and gentle movements to improve overall health and well-being . Obesity is a major global public health concern linked to many health complications, such as diabetes. Qi Gong and Tai Chi are effective in helping individuals manage their weight and reduce the risk of developing obesity-related complications . These practices promote weight loss by improving metabolism, reducing stress, and increasing physical activity. One of the primary ways that Qi Gong and Tai Chi help with obesity is through stress reduction. Chronic stress significantly contributes to weight gain, as it can cause hormonal imbalances that increase appetite and decrease metabolism . Qi Gong and Tai Chi help to reduce stress by promoting relaxation, improving sleep quality, and reducing anxiety and depression. Additionally, Qi Gong and Tai Chi are low-impact exercises that individuals of all fitness levels can practice. These practices can improve cardiovascular health, muscle strength, and flexibility, contributing to weight loss and overall health. Diabetes is a metabolic disorder that affects millions of people worldwide. While there is no cure for diabetes, lifestyle changes, including exercise, can help manage the disease. Qi Gong and Tai Chi are effective in managing diabetes by improving blood glucose control, reducing inflammation, and improving cardiovascular health . One of the primary ways that Qi Gong and Tai Chi help manage diabetes is by improving blood glucose control. These techniques aid in managing blood sugar levels by enhancing insulin sensitivity and diminishing insulin resistance. Moreover, the practice of Qi Gong and Tai Chi can potentially decrease inflammation, which is crucial in the onset and advancement of diabetes . Qi Gong and Tai Chi are effective traditional Chinese medicine options for managing obesity and diabetes.
These practices promote overall health and well-being by reducing stress, improving physical activity levels, and helping to manage chronic diseases. Additionally, Qi Gong and Tai Chi are safe and gentle exercises that individuals of all ages and fitness levels can practice . Massage and bodywork Massage and bodywork therapies are integral to Traditional Chinese Medicine (TCM), a holistic healthcare system that focuses on restoring the balance of the body’s vital energy, or Qi. TCM offers a range of therapeutic options for treating various health conditions, including obesity and diabetes . Obesity is a metabolic disorder caused by an imbalance between energy intake and expenditure, leading to excessive accumulation of body fat . TCM views obesity as a result of Qi stagnation, dampness accumulation, and spleen and stomach weakness. Massage and bodywork therapies can help to regulate Qi flow, improve digestion, and promote lymphatic drainage, which can help to reduce body fat and improve metabolic function. Some typical massage and bodywork techniques used in TCM for obesity include acupressure, cupping, and gua sha . Acupressure involves applying pressure to specific points on the body, called acupoints, which correspond to different organs and systems. By stimulating these points, acupressure can help to regulate the function of the spleen and stomach, promote digestion, and reduce food cravings. Cupping is a technique in which cups are placed on the skin to create a suction effect . This can help to stimulate blood flow and lymphatic drainage, which can help to eliminate excess fluids and toxins from the body. Gua sha involves using a smooth-edged tool to scrape the skin, which can help to improve circulation, reduce inflammation, and promote healing . Diabetes is a metabolic condition marked by elevated blood sugar levels, which can result in various complications such as nerve damage, kidney disease, and cardiovascular disease .
TCM views diabetes as a result of Qi deficiency, Yin deficiency, and dampness accumulation. Massage and bodywork therapies can help to tonify Qi and Yin, regulate blood sugar levels, and improve circulation . Some standard massage and bodywork techniques used in TCM for diabetes include acupressure, moxibustion, and foot reflexology . Moxibustion involves burning a herb called mugwort over specific acupoints, which can help to tonify Qi and improve circulation. Foot reflexology is the practice of exerting pressure on particular points on the feet that correspond to various organs and systems within the body. By stimulating these points, foot reflexology can assist in regulating blood sugar levels and enhancing overall well-being . Tauroursodeoxycholic acid (TUDCA) A naturally occurring hydrophilic bile acid called tauroursodeoxycholic acid (TUDCA) has been used for generations in Chinese medicine. In chemical terminology, TUDCA is a taurine conjugate of ursodeoxycholic acid (UDCA). In modern pharmacology, this drug has been approved by the Food and Drug Administration (FDA) for use in treating primary biliary cholangitis . Recent research indicates that TUDCA’s mechanisms of action extend beyond hepatobiliary conditions. Due to its cytoprotective effect, TUDCA has been demonstrated to have potential therapeutic applications in various disease models, including neurodegenerative diseases, obesity, and diabetes. TUDCA was identified as a chemical chaperone because the mechanisms underlying its cytoprotective action are mostly associated with regulating the unfolded protein response (UPR) and reducing ER stress. In addition, TUDCA has been shown to reduce oxidative stress, control apoptosis, and reduce inflammation in numerous in vitro and in vivo models of different diseases .
Hormones associated with obesity 4.1 Insulin Insulin is an essential hormone responsible for regulating blood sugar levels within the body. After we consume food, carbohydrates are broken down into glucose, which is absorbed into the bloodstream. The pancreas produces insulin, which facilitates the transportation of glucose from the bloodstream to cells, where it can be utilized as energy or stored for future use . However, in individuals with obesity, the body may become less responsive to the effects of insulin, leading to elevated glucose levels in the bloodstream and an augmented likelihood of developing T2D . One of the main ways that insulin resistance contributes to obesity is through its effects on fat cells. Insulin helps to regulate the storage and breakdown of fat in the body. When insulin levels are high, fat cells store glucose as fat. When insulin levels are low, fat cells break down stored fat to release energy. However, in people with insulin resistance, fat cells become less responsive to insulin and are less able to take up glucose and store fat. As a result, more glucose remains in the bloodstream, leading to higher insulin levels and increased fat storage . Another way that insulin resistance can contribute to obesity is through its effects on appetite and metabolism . Insulin helps regulate hunger and satiety by signaling to the brain that the body has enough to eat.
When insulin levels are high, the brain receives signals to stop eating and start using stored energy. However, the brain may become less responsive to these signals in people with insulin resistance, leading to increased appetite and overeating . Insulin resistance can also affect the body’s metabolism, or the rate at which it burns calories. When insulin levels are high, the body tends to store energy in the form of fat. However, in people with insulin resistance, the body may be less able to use stored fat for energy and instead rely on glucose as a fuel source. This can lead to lower metabolic rates and decreased calorie burning, making it challenging to lose weight and maintain a healthy weight . 4.2 Omentin Omentin is a hormone that is primarily produced by adipose tissue, which is the tissue that stores fat in the body. It belongs to a group of hormones known as adipokines, which regulate metabolism and inflammation. Omentin is associated with obesity and metabolic disorders, and its levels in the body are altered in individuals with these conditions . Research has shown that omentin is essential in regulating insulin sensitivity, which is the body’s ability to respond to insulin and use glucose for energy. Insulin resistance, which is the impaired ability of cells to respond to insulin, is a common feature of obesity and is a risk factor for T2D . Omentin has been found to improve insulin sensitivity in animal studies, and lower levels of omentin have been observed in individuals with insulin resistance. In addition to its role in insulin sensitivity, omentin regulates inflammation in the body . Inflammation is a natural response of the immune system to injury or infection, but chronic inflammation is associated with various health conditions, including obesity, diabetes, and cardiovascular disease.
Research has indicated that omentin possesses anti-inflammatory properties, and evidence suggests that heightened inflammation within the body is associated with lower levels of omentin . An interesting finding is that omentin levels seem to be influenced by the location of adipose tissue. Specifically, subcutaneous adipose tissue, located just beneath the skin, has been observed to produce greater amounts of omentin compared to visceral adipose tissue surrounding internal organs. This implies that how body fat is distributed could influence omentin levels and their impact on metabolism . 4.3 Leptin Leptin, a hormone synthesized by adipose tissue or fat cells, significantly regulates body weight and metabolism. The amount of leptin released into the bloodstream is directly proportional to the body’s fat stores. Its primary function is communicating with the brain, inducing decreased appetite and increased energy expenditure . In people with obesity, there is often a condition called leptin resistance, in which the body becomes less responsive to the effects of leptin. This can lead to a cycle of overeating and weight gain, as the brain doesn’t receive the signal to decrease appetite or increase energy expenditure. Leptin resistance is thought to develop due to chronic overeating and high circulating leptin levels over an extended period, which leads to a desensitization of the receptors that respond to the hormone . Interestingly, while leptin resistance is commonly associated with obesity, not all people with obesity have leptin resistance, and not all people with leptin resistance are obese. Evidence suggests that other factors, such as genetics, inflammation, and environmental toxins, may play a role in developing leptin resistance . In addition to regulating appetite and energy expenditure, leptin has other bodily functions, such as immune function, fertility, and bone metabolism.
Therefore, disruptions in leptin signaling can have far-reaching effects on overall health and well-being . 4.4 Acylation stimulating protein (ASP) ASP is a hormone found to regulate energy metabolism and adipose tissue physiology. ASP is produced primarily by adipocytes and is secreted into the circulation in response to food intake, especially dietary fat. Once in the bloodstream, ASP binds to its receptor, C5L2, expressed in adipocytes, muscle cells, and other tissues, and initiates various cellular responses . One of the main functions of ASP is to promote the uptake and storage of dietary fat in adipose tissue. ASP can encourage the production of fatty acids, or lipogenesis, and increase the absorption of fatty acids by adipocytes. This can lead to the enlargement of adipose tissue depots and potentially contribute to the development of obesity . Research has indicated that obese individuals have higher levels of ASP in their bloodstream than those who are lean. Additionally, ASP has been linked to glucose homeostasis and insulin sensitivity regulation. Some studies have demonstrated that ASP can facilitate glucose uptake in muscle cells and adipocytes and improve insulin sensitivity in these tissues . However, the impact of ASP on glucose metabolism appears to depend on the circumstances, as some studies suggest that ASP may hinder glucose tolerance and contribute to insulin resistance. The precise mechanisms underlying the effects of ASP on energy metabolism and glucose homeostasis are not fully understood . It is thought that ASP may act in concert with other hormones and signaling pathways, such as insulin and the adipokine leptin, to regulate these processes. Evidence suggests that ASP may directly affect the hypothalamus, a brain region crucial to regulating energy balance . The processes by which ASP increases triacylglycerol production are now firmly established.
ASP has two distinct effects. The first is stimulation of the last (and most likely rate-limiting) enzyme involved in triacylglycerol production, diacylglycerol acyltransferase (EC 2.3.1.20): increased diacylglycerol acyltransferase activity increases fatty acid incorporation into triacylglycerol and, as a result, adipocytes’ rate of fatty acid uptake. The second, an increase in specific membrane glucose transport, is also of considerable significance: in human skin fibroblasts, human adipocytes, and L6 myotubes, ASP increases glucose transport . 4.5 Ghrelin Ghrelin is a hormone produced mainly by the stomach, although small amounts are also secreted by other organs such as the pancreas and small intestine . It stimulates appetite and promotes weight gain, making it an essential hormone in regulating energy balance and body weight. Ghrelin acts on the hypothalamus, a region in the brain that controls food intake and energy expenditure, as well as other areas involved in reward and motivation. The release of ghrelin is influenced by various factors such as fasting, stress, and sleep deprivation . It is secreted in higher amounts during fasting or calorie restriction, which may explain why people often experience intense hunger during these periods. Ghrelin levels also increase in response to stress, which may contribute to overeating and weight gain in some individuals who use food as a coping mechanism . Lack of sleep has also been shown to increase ghrelin levels, possibly contributing to the link between sleep deprivation and obesity . Ghrelin is believed to promote weight gain by several mechanisms. First, it increases appetite and food intake, creating an energy surplus and weight gain. Second, it slows down metabolism and reduces energy expenditure, which can also contribute to weight gain.
Third, it promotes fat accumulation in adipose tissue by stimulating the release of growth hormones, insulin, and other hormones involved in fat storage . Studies have shown that ghrelin levels are often higher in obese individuals than in those of average weight. This suggests that ghrelin may play a role in the development of obesity and related metabolic disorders . However, the relationship between ghrelin and obesity is complex, and further research is needed to fully understand its role in this context . 4.6 Peptides YY (PYY) The endocrine cells in the gastrointestinal tract’s ileum and colon secrete a hormone called Peptide YY (PYY), which plays a vital role in controlling appetite and satiety and is released after food intake. PYY belongs to the pancreatic polypeptide family and is structurally similar to neuropeptide Y (NPY) and peptide YY2 (PYY2) . Several studies have shown that PYY levels are altered in individuals with obesity. In particular, it has been observed that obese individuals have lower levels of PYY compared to normal-weight individuals. This suggests that PYY may be involved in the pathophysiology of obesity . The exact mechanisms through which PYY regulates body weight are still not fully understood. One of the main ways PYY influences appetite and food intake is by acting on the hypothalamus, the part of the brain that regulates energy balance. PYY activates neurons in the hypothalamus that suppress appetite and promote satiety . This leads to a reduction in food intake and an increase in feelings of fullness. In addition to its effects on appetite, PYY has also been shown to influence energy expenditure. Studies have demonstrated that PYY can increase energy expenditure by stimulating the sympathetic nervous system, which regulates metabolic processes such as thermogenesis and lipolysis . This suggests that PYY may be involved in regulating body weight through its effects on both food intake and energy expenditure.
The role of PYY in treating obesity has been investigated in several clinical studies. One approach involves using PYY analogs, synthetic molecules that mimic the effects of endogenous PYY. Studies have demonstrated that these analogs can decrease food consumption and facilitate weight loss in both human subjects and animal models .
Recently developed treatments for obesity Obesity is a chronic disease characterized by an excessive accumulation of body fat, which can lead to a range of health problems, including diabetes, heart disease, and stroke. While diet and exercise are the primary means of managing obesity, several recently developed treatment options can help individuals lose weight and maintain a healthy lifestyle. 5.1 Bariatric surgery Bariatric surgery is a weight loss surgery that involves altering the digestive system to help individuals struggling with obesity lose weight. Obesity is a chronic condition affecting millions worldwide and can lead to various health complications such as heart disease, T2D, and sleep apnea . Bariatric surgery is often recommended for individuals with a body mass index (BMI) of 40 or higher, or for those with a BMI of 35 or higher with at least one obesity-related medical condition. There are different bariatric surgery procedures, each with its own benefits and risks. The most common types include gastric bypass, sleeve gastrectomy, adjustable gastric banding, and biliopancreatic diversion with a duodenal switch . In gastric bypass surgery, the surgeon creates a small stomach pouch and reroutes the small intestine, limiting the amount of food consumed and absorbed by the body. Sleeve gastrectomy involves removing a portion of the stomach to reduce its size, whereas adjustable gastric banding involves placing an inflatable band around the top part of the stomach to restrict food intake . Bariatric surgery is an effective procedure that requires careful consideration and preparation. Before surgery, patients undergo a comprehensive evaluation to determine their suitability for the procedure and identify any underlying medical conditions that may affect the outcome . 
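As a minimal, purely illustrative sketch (not clinical guidance), the BMI thresholds stated above amount to a simple decision rule; the function names below are ours:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def eligible_for_bariatric_surgery(bmi_value: float, has_comorbidity: bool) -> bool:
    """Mirror the thresholds stated in the text: BMI of 40 or higher, or
    BMI of 35 or higher with at least one obesity-related condition."""
    return bmi_value >= 40 or (bmi_value >= 35 and has_comorbidity)

# A 120 kg, 1.70 m patient (BMI ~41.5) versus a 105 kg, 1.70 m
# patient (BMI ~36.3) without a comorbid condition.
print(eligible_for_bariatric_surgery(bmi(120, 1.70), has_comorbidity=False))
print(eligible_for_bariatric_surgery(bmi(105, 1.70), has_comorbidity=False))
```

The rule is deliberately conjunctive at the lower threshold: between BMI 35 and 40, at least one obesity-related condition is required, exactly as the text describes.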
Patients must also undergo extensive counseling and education to help them understand the procedure’s risks and benefits and prepare them for the lifestyle changes they must make after the surgery. Bariatric surgery is not a magic solution for weight loss . While the surgery can help individuals lose significant weight, it requires a commitment to long-term lifestyle changes, including healthy eating habits and regular exercise. Patients undergoing bariatric surgery must also be monitored closely by their healthcare provider to ensure they meet their weight loss goals and to address any complications . The interest in the hormonal changes that follow bariatric surgery arises from two fundamental observations: (a) the weight loss seems to arise from reductions in appetite and food intake, implying that the surgical procedure interferes with the normal regulation of appetite and food intake, and (b) the reversal of T2D occurs a few days after surgery, before any significant weight loss has occurred, implying that mechanisms other than weight loss are involved . 5.2 Endoscopic sleeve gastroplasty (ESG) Endoscopic sleeve gastroplasty (ESG) is a relatively new, minimally invasive weight loss procedure for people with obesity. It uses an endoscope, a thin tube with a camera and surgical instruments attached, to reduce the size of the stomach by creating a sleeve-like shape . This limits the amount of food the stomach can hold, leading to a feeling of fullness and reduced hunger. The procedure is usually done on an outpatient basis, and patients are given general anesthesia. Once the patient is sedated, the endoscope is inserted through the mouth and down the throat to reach the stomach. The surgeon gathers and folds the stomach tissue into a narrow tube shape using sutures, creating a sleeve-like structure . The sutures are then tightened to hold the sleeve in place. 
ESG typically takes 60 to 90 minutes, and patients are usually discharged on the same day. Most patients can return to normal activities within a few days after the procedure, although a liquid diet is generally recommended for the first week or so. ESG is effective for weight loss in people with obesity . Studies have shown that patients typically lose between 15% and 20% of their excess body weight within 12 months after the procedure. ESG has also been shown to improve various health markers, including blood pressure, cholesterol levels, and blood sugar control. However, ESG is not without risks. Complications can occur, although they are rare; these include bleeding, infection, and perforation of the stomach or esophagus. Patients may also experience nausea, vomiting, and abdominal pain in the first few days after the procedure. ESG is not appropriate for everyone with obesity . It is generally recommended for people with a body mass index (BMI) between 30 and 40 who cannot lose weight through diet and exercise alone. People with certain medical conditions, such as inflammatory bowel disease or previous surgeries on the stomach or intestines, may not be candidates for ESG . 5.3 Duodenal-jejunal bypass liner The duodenal-jejunal bypass liner (DJBL) is a non-surgical weight loss treatment for obesity that involves the insertion of a temporary liner into the small intestine. The liner works by restricting the absorption of nutrients from food, leading to a reduction in calorie intake and weight loss. During the DJBL procedure, a flexible tube with a balloon at one end is inserted through the mouth and into the small intestine . Once in place, the balloon is inflated, creating a barrier that prevents food from coming into contact with the first part of the small intestine, the duodenum. 
The DJBL is a reversible procedure and can be removed after six months. During this time, patients are advised to follow a structured diet and exercise program to maximize weight loss results. The DJBL is intended for patients with a body mass index (BMI) of 30 or higher who cannot lose weight through diet and exercise alone. Studies have shown that the DJBL can be an effective weight loss tool, with patients losing an average of 20-25% of their excess body weight during the six-month treatment period . Additionally, the DJBL has been shown to improve metabolic conditions such as diabetes and high blood pressure, with some patients experiencing remission. Like any medical procedure, the DJBL does carry some risks, including nausea, vomiting, and abdominal pain. In rare cases, the DJBL can lead to more severe complications such as bowel obstruction, bleeding, or perforation. Patients considering the DJBL should discuss the risks and benefits with their healthcare provider to determine if it is the proper weight loss treatment for them . 5.4 GLP-1 receptor agonists GLP-1 receptor agonists are primarily used to treat T2D but have also shown efficacy in managing obesity. These medications imitate the effects of GLP-1, a gut hormone that helps regulate appetite and blood glucose levels, resulting in a decrease in hunger and an increase in fullness, leading to reduced food intake and subsequent weight loss . Furthermore, they offer additional benefits by improving glycemic control and decreasing cardiovascular risk factors in individuals with obesity and T2D . One of the most commonly used GLP-1 receptor agonists for treating obesity is liraglutide, administered once daily by subcutaneous injection, leading to an average weight loss of 5-10% of initial body weight . Semaglutide, another GLP-1 receptor agonist administered once weekly by subcutaneous injection, is even more effective, resulting in an average weight loss of 15-20% of initial body weight . 
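The sections above quote outcomes against two different denominators: ESG and the DJBL report loss as a percentage of excess body weight, while the GLP-1 agonist figures are percentages of initial body weight. The sketch below illustrates how different the same percentage can be under the two definitions; the reference BMI of 25 used to define "excess" weight here is our assumption, not a definition given in the text:

```python
def loss_of_initial_weight(initial_kg: float, pct: float) -> float:
    """Kilograms lost when losing `pct` percent of initial body weight."""
    return initial_kg * pct / 100

def loss_of_excess_weight(initial_kg: float, height_m: float, pct: float,
                          reference_bmi: float = 25.0) -> float:
    """Kilograms lost when losing `pct` percent of excess body weight,
    with excess defined against a reference BMI (assumed 25 here)."""
    ideal_kg = reference_bmi * height_m ** 2
    return (initial_kg - ideal_kg) * pct / 100

# Example: a 110 kg, 1.70 m patient. A 15% loss of initial weight is
# 16.5 kg, whereas a 20% loss of excess weight is only about 7.6 kg.
print(round(loss_of_initial_weight(110, 15), 1))
print(round(loss_of_excess_weight(110, 1.70, 20), 1))
```

Keeping the denominators straight matters when comparing the treatments above, since "% of excess body weight" figures translate into far fewer kilograms than the same "% of initial body weight".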
However, GLP-1 receptor agonists can cause side effects, including nausea, vomiting, and diarrhea, which can be minimized by starting with a low dose and titrating gradually. Additionally, they may increase the risk of pancreatitis and thyroid tumors, but the overall risk is low . Semaglutide is a medication recently approved by the US Food and Drug Administration (FDA) for treating obesity in adults. It is a glucagon-like peptide-1 (GLP-1) receptor agonist, mimicking the action of a naturally occurring hormone called GLP-1 . The intestine releases GLP-1 in response to food consumption, and it triggers the pancreas to secrete insulin, thus aiding the regulation of blood sugar levels. Semaglutide has been found to have a dual action of regulating blood sugar levels and inducing weight loss. Semaglutide works by activating the GLP-1 receptor in the brain, which results in decreased appetite and increased feelings of fullness or satiety . This leads to a reduction in food intake, resulting in weight loss. Moreover, semaglutide has been demonstrated to slow gastric emptying, the rate at which food exits the stomach and moves into the small intestine. This prolongs the feeling of fullness, which helps to reduce calorie intake and promote weight loss . In clinical trials, semaglutide has been shown to be effective in promoting weight loss in adults with a body mass index (BMI) of 30 or higher, which is considered obese, as well as in those with a BMI of 27 or higher who have at least one weight-related health condition, such as T2D or high blood pressure . In one study, participants who received a once-weekly injection of semaglutide lost an average of 15% of their body weight over 68 weeks, compared with a 2.4% weight loss in the placebo group. Semaglutide is typically administered once a week via subcutaneous injection. The recommended starting dose is 0.25 mg per week, gradually increasing to 2.4 mg per week over 16 weeks . 
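The text specifies only the 0.25 mg starting dose, the 2.4 mg target, and the 16-week escalation window. The sketch below fills in 4-weekly steps (0.5, 1.0, and 1.7 mg) following the commonly published semaglutide escalation schedule; treat the intermediate doses as an assumption, and none of this as prescribing guidance:

```python
# Assumed 4-weekly escalation steps; only the 0.25 mg start, the 2.4 mg
# target, and the 16-week window come from the text above.
ESCALATION_STEPS_MG = [0.25, 0.5, 1.0, 1.7, 2.4]

def semaglutide_dose_mg(week: int) -> float:
    """Weekly dose for a given treatment week (1-indexed), stepping up
    every 4 weeks until the 2.4 mg maintenance dose is reached."""
    if week < 1:
        raise ValueError("treatment weeks start at 1")
    step = min((week - 1) // 4, len(ESCALATION_STEPS_MG) - 1)
    return ESCALATION_STEPS_MG[step]

# Dose at the start of each 4-week block; maintenance begins at week 17,
# i.e. after the 16-week escalation described in the text.
print([semaglutide_dose_mg(w) for w in (1, 5, 9, 13, 17)])
```

The integer division `(week - 1) // 4` maps weeks 1-4 to the first step, 5-8 to the second, and so on, with `min(...)` clamping all later weeks to the maintenance dose.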
The medication should be combined with a reduced-calorie diet and increased physical activity for optimal results. Like any medication, semaglutide may cause side effects. The most common side effects reported in clinical trials include nausea, diarrhea, vomiting, and constipation. In rare cases, semaglutide may cause inflammation of the pancreas, which can be severe and requires immediate medical attention . 5.6 Digital health interventions Digital health interventions use digital technologies to support and improve health outcomes. In the case of obesity treatment, digital health interventions can be an effective tool to help individuals achieve and maintain a healthy weight. These interventions can include a range of technologies, such as mobile apps, wearable devices, online programs, and virtual coaching . One of the key benefits of digital health interventions for obesity treatment is their ability to provide personalized support and feedback. Many digital health programs use algorithms to track an individual’s progress, provide personalized feedback, and adjust their plan accordingly. For example, a digital health app may use data on an individual’s weight, physical activity, and dietary habits to provide personalized recommendations on achieving their weight loss goals . Another advantage of digital health interventions is their accessibility. Individuals can access digital health programs from anywhere, which can be particularly beneficial for individuals with busy schedules or limited access to traditional healthcare resources. Additionally, digital health interventions can often be more cost-effective than conventional obesity treatments, making them more accessible to a broader range of individuals. Several studies have demonstrated the effectiveness of digital health interventions for obesity treatment . 
A systematic review of 23 randomized controlled trials found that digital health interventions were associated with significant body weight, BMI, and waist circumference reductions. Additionally, individuals who used digital health interventions reported high satisfaction with the programs. Despite the potential benefits of digital health interventions for obesity treatment, some challenges are associated with these programs . For example, some individuals may struggle to engage with the technology or find the programs overwhelming. Additionally, digital health interventions may not be appropriate for individuals with complex medical needs or require more intensive treatment . Conclusion Obesity and diabetes are complex metabolic disorders with multifactorial causes that require comprehensive management strategies. Traditional Chinese Medicine (TCM) offers a promising approach to preventing and treating these conditions, with a long history of use and a growing body of scientific evidence to support its effectiveness. TCM treatments, such as acupuncture, herbal remedies, and dietary interventions, may target various mechanisms underlying obesity and diabetes, including inflammation, oxidative stress, insulin resistance, and gut microbiota dysbiosis. However, further research is needed to elucidate the precise mechanisms of action and optimize the use of TCM to manage these disorders and ensure the safety and quality of TCM products. Given that most T2D is caused by obesity, it makes sense to favor treatment techniques that encourage weight loss. It is also necessary to consider the usage of specific ‘anti-obesity’ drugs to supplement an individual’s attempts to improve their lifestyle. 
The combination of obesity/diabetes medications and glucose-lowering agents, as well as the usage of some pharmaceuticals in either category for both purposes, blurs the line between obesity and diabetes therapy. SGLT2i and GLP-1 RAs, for example, are already available glucose-lowering medicines that induce modest weight loss and are anticipated to play a larger role in diabetes care in the future, especially given the positive findings of their usage in recent cardiovascular outcome trials. Novel obesity-specific medications, on the other hand, offer potential in diabetes management, and, as a result, their use in diabetes treatment appears likely to increase over time. All authors contributed to the article and approved the submitted version.
Narrative competence disparities between Children’s hospital and General hospital in China: A comparative survey
Rita Charon first proposed the concept of narrative medicine in 2001, also known as medicine practiced with narrative competence . The basis of the model is empathy, reflection, professionalism, and trust applied to clinical practice. Narrative competence is the ability to acknowledge, absorb, interpret, and act on the stories and plights of others, allowing medical staff to understand patients’ feelings and deliver appropriate and targeted help . Medical staff and patients form an alliance against disease, because they are a “Schicksalsgemeinschaft”, a German word meaning “a community of destiny”. In this way, medical staff can understand patients’ narration and suffering. Professor Liping Guo translated Charon’s book into Chinese and introduced narrative medicine to China, which attracted a growing number of researchers to narrative medicine [ – ]. Narrative medicine can strengthen doctor-patient relationships , promote medical staff’s empathic ability and professional achievement [ – ], ease doctor-patient conflicts , and enable medical staff to reflect on their journeys through medicine [ – ]. Narrative competence is the intrinsic motivation for, and an essential ability of, medical staff in achieving narrative medicine. Research has shown that medical staff’s empathy and humanistic care abilities improve through narrative medicine training, which enables them to consider patients’ situations from multiple perspectives [ , , ]. Medical students can analyze people and situations from various angles to gain a deeper, more profound understanding of human experiences using the reflective thinking that is an essential component of narrative medicine training . Additionally, previous research found that age , length of service [ , , ], and familiarity with narrative medicine or nursing [ , , ] are key influencing factors of narrative competence. 
Those with longer service and adequate knowledge of narrative medicine or nursing tend to perform better in narrative practice. Resilience is considered a personality trait or a dynamic process that refers to positive adaptation, or the ability to maintain mental health when experiencing adversity in any situation . A high level of resilience is crucial for medical staff to manage their work professionally; it can improve individuals’ problem-solving, coping with stress, and motivation for career development . Self-efficacy is not a skill but the confidence to reach a goal, defined as individuals’ beliefs about their ability to engage in the actions required to achieve a desired goal . Human behavior is result-oriented, and nurses with high levels of self-efficacy achieve better narrative nursing . In China, doctor-patient relationships seem to be exceptionally strained; the reason might be the shortage of medical resources but, more profoundly, the growing demand for more personalized medical services . Medical staff are the executors of narrative practice, and their ability to seek out patients’ stories or narration and accept these as a resource in healthcare practice is critical to the recent development of person-centered care [ – ]. Medical staff face different types of patients in Children’s hospital and General hospital. In Children’s hospital, the patients are children, who have less life experience, a lower cognitive level, less self-control, and a poorer understanding of their disease, resulting in significantly lower compliance with disease treatment than adult patients. Besides, most children come to the hospital with at least one family member. Medical staff in Children’s hospital and General hospital therefore meet patients with different psychological characteristics, so we wondered whether their narrative competence is the same. 
In the present study, we conducted a cross-sectional survey of Chinese medical staff by online questionnaire to examine the disparities in narrative competence of medical staff between Children’s hospital and General hospital. 2.1 Participants A convenience sample of clinical medical staff from a General hospital and a Children’s hospital in Zhejiang province, China, were enrolled in the study. Inclusion criteria were: (a) having worked in the hospital for more than 3 months, (b) delivering direct medical care to patients, and (c) providing written consent to participate. Exclusion criteria were: (a) medical students, (b) medical staff who did not deliver direct medical care to patients. The study was a cross-sectional online survey, conducted from 1 st December 2022 to 31 st March 2023. Data were collected using Questionnaire Star Software, which is widely used in China. IP-address-restriction technology was used to avoid ‘one person, multiple answers’. All participants were informed of the study purpose and provided informed consent (through an online form). All items were set as mandatory questions in the system. Clinical medical staff received the survey link via WeChat (one of China’s most widely used social networking applications). The link to this survey: https://www.wjx.cn/vj/woYS3wi.aspx . 2.2 Questionnaires Our survey questionnaire includes four parts: sociodemographic information (e.g., age, gender, marital status, department, income, familiarity with narrative medicine or nursing), the Narrative Competence Scale, the Resilience Scale, and the Self-efficacy Scale. (1) Narrative Competence Scale (NCS) The scale is a self-reporting scale measuring medical staff’s narrative competence; it was initially developed through literature review, group discussion, and a questionnaire survey by Ma in 2019 , yielding a total of 27 items divided into listening, understanding, and reflection dimensions. 
Each item in the scale is scored on a 7-point Likert scale, from 1 (strongly non-conformity) to 7 (strongly conformity), with a summed score ranging from 27 to 189, classified as weak (<145), intermediate (145~163), or strong (≥163) . The Cronbach’s alphas for the entire scale and the three dimensions were 0.950, 0.835, 0.912, and 0.842, respectively. The content validity for the full scale was 0.890, indicating appropriate stability and reliability of the scale . (2) Resilience Scale (RS) The RS-14, the 14-item short version of the Wagnild and Young RS-25, was used to measure the resilience of medical staff . The RS-14 is a widely used resilience scale divided into personal ability and positive perception dimensions. The scale is based on a 7-point Likert scale, from 1 (strongly nonconformity) to 7 (strongly conformity), with a total score ranging from 14 to 98. A higher total score or scoring rate indicates a higher level of resilience; however, there is no universally recognized cut-off to distinguish between low and high resilience. The internal consistency of the overall scale was 0.928, and the split-half reliability was 0.890, confirming the validity and reliability of the scale . (3) Self-efficacy scale (SE) We used the Chinese version of the Self-efficacy scale translated by Jia (Jia and Li, 2010), which was developed by Sherer . It features 17 items (6 items are positive, 11 are negative, and negative items are scored in reverse). Each item is rated on a 6-point Likert scale, from 1 (strongly disagree) to 6 (strongly agree), with a total score ranging from 17 to 102. A higher total score or scoring rate indicates a higher level of self-efficacy. The split-half reliability was 0.71, and the content validity was 0.99, indicating appropriate stability and reliability of the scale . 2.3 Statistical analysis Data were analyzed using IBM SPSS version 25.0. Basic demographic characteristics were presented as mean ± standard deviation and frequency. 
The distribution of the data was tested by the F-test. The t-test and χ2 analysis were used to compare the levels of narrative competence, resilience, and self-efficacy between the two hospitals. P < 0.05 was considered statistically significant.

2.4 Ethical considerations

The procedures followed in this study were approved by the Ethics Committee of the First Affiliated Hospital, Zhejiang University School of Medicine. All data collected from the subjects were kept anonymous and confidential to protect the participants' privacy.
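The χ2 comparison of categorical levels between the two hospitals can be illustrated with a minimal hand-rolled Pearson statistic (toy counts, not the study data; in practice SPSS, as used by the authors, or `scipy.stats.chi2_contingency` computes this):

```python
def chi_square_stat(observed):
    """Pearson chi-square statistic for an r x c contingency table
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Toy 2x2 table: hospital (rows) x competence level (columns)
table = [[10, 20],
         [20, 10]]
```

With all expected counts equal to 15, each cell contributes 25/15, so the statistic for this toy table is 20/3 ≈ 6.67, which would then be referred to a χ2 distribution with (r−1)(c−1) degrees of freedom.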
3.1 Level of medical staff's narrative competence by characteristics, and differences in characteristics between Children's hospital and General hospital

A total of 1063 questionnaires were collected. After preliminary screening, 60 questionnaires with illogical answers or missing data were excluded, leaving 1003 valid questionnaires (an effective rate of 94.36%): 802 medical staff from General hospital and 201 from Children's hospital. Except for gender and age, the characteristics of Children's hospital and General hospital differed. This information is listed in .

3.2 Ordinal logistic regression analysis

Ordinal logistic regression was conducted to establish a regression model of narrative competence. Resilience, self-efficacy, and the other variables that could influence narrative competence were entered as independent variables, with narrative competence as the dependent variable.
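The effective-rate figure reported above follows directly from the counts (a quick arithmetic check):

```python
collected, excluded = 1063, 60
valid = collected - excluded                      # 1003 valid questionnaires
effective_rate = round(valid / collected * 100, 2)  # 94.36, as reported
```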
We found that resilience was an independent factor influencing narrative competence in Children's hospital, whereas the independent factors influencing narrative competence in General hospital were resilience, familiarity with narrative medicine, and whether parallel charts had been written before (Tables and ).

3.3 Disparities and scores of narrative competence, resilience, and self-efficacy

The t-test and χ2 analysis showed that the levels of narrative competence, resilience (personal ability dimension), and self-efficacy differed between General and Children's hospital ( ).

We conducted a cross-sectional survey among 1003 medical staff and found that the narrative competence of medical staff in China is at an intermediate level.
In addition, the independent factors that influenced narrative competence differed between Children's hospital and General hospital, and the levels of narrative competence, resilience (personal ability dimension), and self-efficacy differed between the two hospitals. According to the NCS, narrative competence is classified into three levels: weak (<145), intermediate (145~163), and strong (≥163) . Our study revealed that the overall narrative competence scores at General hospital and Children's hospital were 149.45±26.22 and 147.10±18.87 (intermediate level), respectively, similar to previous findings for medical staff in general hospitals in China [ , , , ], indicating that the narrative competence of medical staff in China needs to be improved. Narrative medicine is a relatively new concept for medical staff: in this study, 51.24% of participants from Children's hospital and 38.15% from General hospital had never heard of narrative medicine before, and only 16.92% (Children's hospital) and 29.68% (General hospital) had read a parallel chart before, indicating that medical staff knew little about narrative medicine. According to knowledge, attitude/belief, and practice (KAP) theory, knowledge is the foundation of practice, and a lack of knowledge of narrative medicine may ultimately explain the lower level of narrative competence observed in this study. Previous reports [ , – ] suggest that reflective writing is one approach to facilitating practitioners' reflection on the connection between their personal stories and their professional practice, yet only 2.49% (Children's hospital) and 8.98% (General hospital) of medical staff in our study had written a parallel chart before. Reflective writing promotes self-reflection by putting thoughts and feelings into words . Practitioners who take note of patients' narration deem this a helpful and achievable method.
They are also more likely to anticipate integrated and complementary medical care; that is, medical staff who are more familiar with narrative medicine perform better in narrative practice. As is already known, narrative competence can be improved through training and practice [ , , – ]. In our study, 82.09% (Children's hospital) and 80.67% (General hospital) of participants had worked for more than 6 years. However, most medical staff have had little opportunity to learn about narrative medicine, because most narrative curricula are designed for medical students. A comprehensive platform for a narrative competence training curriculum should therefore be developed to promote narrative practice and enrich the meaning of patients' lives. Researchers have found that resilience is positively correlated with narrative competence, suggesting that resilience might serve as an internal motivation to promote the narrative competence of medical staff [ , , ]. Medical staff with good resilience are more enthusiastic and willing to devote energy to meeting the growing demand for patient-centered services and shared decision-making in an increasingly diverse modern medical environment . High self-efficacy can help nurses achieve better narrative nursing . Self-efficacy is an essential factor in narrowing the gap between knowledge and practice. Those with higher self-efficacy are more likely to find support from family members, colleagues, and organizations, resulting in better performance in daily healthcare services. Over the course of medical care, they become more confident in dealing with patients and meeting their needs, both medical and emotional. Resilience and self-efficacy differed between medical staff in Children's hospital and General hospital, which might be another reason for the difference in narrative competence.
According to the χ2 analysis, narrative competence differed between Children's hospital and General hospital, and t-test results showed that both resilience and self-efficacy also differed between the two hospitals. The data showed that the narrative competence of 81 (40.30%) and 293 (36.63%) medical staff was weak in Children's hospital and General hospital, respectively. shows that medical staff in General hospital were more familiar with narrative medicine, and that more of them had read and written parallel charts before, which might explain the disparities between Children's hospital and General hospital. Reading and writing parallel charts is the foundation of narrative medicine, and doing so suggests that these medical staff have sufficient knowledge of narrative medicine to narrow the gap between knowing and doing; their narrative competence would therefore differ. Consequently, medical staff who had written parallel charts before would show better narrative competence, and clinical medical staff who are more familiar with narrative medicine may do better in narrative practice. Narrative medicine is the practice of medicine with narrative competence, which can provide a respectful, empathic, and wholesome environment. Our findings highlight that the narrative competence of medical staff in China needs to be improved and that it differs between General hospital and Children's hospital. Medical staff in General hospital are more familiar with narrative medicine. Parallel chart writing and resilience influence the narrative competence of medical staff. To this end, a comprehensive platform for a narrative competence training curriculum should be developed to promote narrative practice and enrich the meaning of patients' lives. There are several limitations to this study that should be considered when interpreting these results.
Firstly, the sample size was small and most participants were nurses, so the conclusions and implications of the study are limited. Secondly, all data used in this study were self-reported. Thirdly, narrative medicine is a tool for improving the physician–patient relationship; besides narrative competence, many other factors influence this relationship, such as social trust and physician–patient communication , etc. Future studies should include these factors in the analysis to draw a solid conclusion. We used the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline for cross-sectional studies. All methods were performed following the relevant guidelines and regulations. S1 Dataset General hospital and Children's hospital. (ZIP)
Sexual health literacy among rural women in Southern Iran
Sexual health literacy (SHL) refers to a set of skills, including the ability to acquire and understand sexual knowledge and to integrate that information into decision-making about sexual behavior. It encompasses diverse areas such as gender and sexual development, puberty, pregnancy, contraceptive methods, unwanted pregnancy, sexually transmitted infections, and the development of sexual management skills, including negotiating the quality of sexuality, sexual preferences and constraints, and the positive and romantic dimensions of sexuality , . One of the most important components of sexual health is an individual's level of sexual awareness . Sexual health literacy improves the ability to understand and assess risks associated with sexual health, delays first sexual experience, reduces the number of and supports the selection of low-risk sexual partners, promotes safe sexual experiences, reduces unintended pregnancies and sexually transmitted infections, and ultimately improves family and social health – . Rural women encounter unique challenges that affect their sexual health literacy, including inadequate access to healthcare services, limited educational opportunities, cultural taboos surrounding sexuality, and gender inequalities. Moreover, the stigma associated with seeking sexual health information or services in rural communities can further hinder women's ability to access accurate and comprehensive information . Accessing sexual health services is often more challenging for rural women than for their urban counterparts. Geographic distance, transportation limitations, lack of healthcare facilities, and financial constraints create barriers to essential sexual health services such as contraception, STI testing, and prenatal care.
Additionally, the shortage of healthcare providers in rural areas exacerbates these challenges, leaving many women without access to quality sexual health care , . Socioeconomic factors significantly influence rural women's sexual health literacy. Poverty, unemployment, and limited access to education can restrict women's ability to obtain accurate information about sexual health and reproductive rights. Economic instability may also force women to prioritize immediate needs over preventive healthcare, leading to neglect of their sexual health needs . Jamali et al. showed that a quarter of women have low levels of sexual health literacy, and that the level of sexual health literacy was related to participants' age, education, spouse's education, and economic status. In another study, 65.5% of the participants had a low level of sexual health literacy, and participants from urban schools had a higher level of sexual health literacy than those from rural schools . Studies suggest that sexual health literacy is related to condom use, the likelihood of unintended pregnancy, the likelihood of engaging in high-risk sexual behavior, and sexual coercion, especially among young people – . An optimal level of sexual health literacy increases a person's ability to analyze, judge, discuss, make decisions about, and change sexual behavior, and empowers a person to ensure, maintain, and promote his or her sexual health . Sexual health literacy among rural women is a critical yet often overlooked aspect of public health discourse. In recent years, there has been growing recognition of the multifaceted nature of sexual health and its profound impact on individuals, families, and communities. However, within rural populations, particularly among women, there remains a significant gap in understanding and addressing sexual health needs . Despite the importance of sexual health literacy, few studies have been conducted.
A prerequisite for educational planning, as well as for designing intervention studies to improve sexual health literacy, is sufficient information about the sexual health literacy level of people in the community and the factors associated with it. Considering the differences between urban and rural communities in access to health facilities, as well as cultural, social, and other differences, a study assessing the sexual health literacy of married women living in rural areas seemed necessary. This study aimed to characterize sexual health literacy among Iranian rural women and to explore the contextual factors influencing it in rural areas. This cross-sectional study was conducted from January to May 2021 in the Benaroyeh region and six surrounding villages, Fars Province, Iran, whose residents travel to the center (Benaroyeh) for health services. The sample size was calculated with the formula for estimating a proportion, using sample-size software, based on a preliminary study of 50 people: with a type I error (α) of 0.05, a proportion (p) of 0.12 from the pilot, and a margin of error (d) of 0.05, a sample size of 163 was obtained; allowing for a 20% dropout rate, the target sample size was set at 195, and 200 participants were ultimately analyzed. The study population included married women of reproductive age in the Benaroyeh region who were registered at the health center. They were selected by convenience sampling. Inclusion criteria were: married women of reproductive age (15–45 years), with minimum literacy, no medical or paramedical education, and no employment in health centers. The exclusion criterion was incomplete completion of the questionnaire.
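The sample-size arithmetic above can be reproduced with the standard single-proportion formula n = z²·p(1−p)/d², taking z = 1.96 for α = 0.05 (a check of the reported figures; the helper name is ours):

```python
import math

def sample_size_proportion(p, d, z=1.96):
    """Minimum n for estimating a proportion p within margin of error d."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

n = sample_size_proportion(p=0.12, d=0.05)  # 163, matching the paper
n_with_dropout = n * 6 // 5                 # +20% allowance, truncated -> 195
```

Note that 1.96² × 0.12 × 0.88 / 0.05² ≈ 162.27, which rounds up to 163; inflating by 20% gives 195.6, reported in the paper as 195 (truncation assumed here).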
The present study was conducted after obtaining the approval of the biomedical research ethics committee of Tarbiat Modares University, Tehran (ethics code IR.MODARES.1398.205). All methods used in this study adhered to the Declaration of Helsinki. After the participants were assured that their information would remain confidential, and after written informed consent was obtained from each participant (all participants were over 18), they were asked to complete the questionnaires. There were no missing data, as the research units completed the questionnaires in full. Data collection tools included two questionnaires: (1) a reproductive and demographic characteristics questionnaire, and (2) the Sexual Health Literacy for Adults (SHELA) questionnaire developed by Maasoumi et al. . The SHELA consists of 40 questions measuring four domains: accessibility skills (questions 1–7), reading and comprehension skills (questions 8–25), analysis and evaluation skills (questions 26–30), and application skills (questions 31–40). Each question is answered on a five-point Likert scale from "strongly disagree" to "strongly agree". The raw score of each domain is the algebraic sum of the answers to that domain's questions, and the raw domain scores are then converted to a range of 0–100. The total score for each person is calculated by adding the scores of all domains, after conversion to the 0–100 range, and dividing by the number of domains, i.e., four. Participants are classified into four levels based on their total score: "inadequate" (score 0–50), "problematic" (score 50–66), "sufficient" (score 66–84), and "excellent" (score 84–100).
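The SHELA scoring just described can be sketched as follows. This is an illustration under two stated assumptions: the raw-to-0–100 conversion is taken to be the standard linear rescaling (the paper's exact formula is not reproduced in the text), and the band boundaries at 50, 66, and 84 are treated as half-open intervals; the function names are ours.

```python
def domain_score(raw_sum, n_items, low=1, high=5):
    """Linearly rescale a raw Likert-sum for one domain to 0-100
    (assumed form of the paper's raw-to-0-100 conversion)."""
    return (raw_sum - n_items * low) / (n_items * (high - low)) * 100

def shela_total(domain_scores):
    """Total score = mean of the four 0-100 domain scores."""
    return sum(domain_scores) / len(domain_scores)

def shela_level(total):
    # Band edges as reported; half-open intervals assumed.
    if total < 50:
        return "inadequate"
    if total < 66:
        return "problematic"
    if total < 84:
        return "sufficient"
    return "excellent"
```

For the 7-item accessibility domain, for example, an all-1 response pattern rescales to 0 and an all-5 pattern to 100; a total score of 75.64 (the study mean) falls in the "sufficient" band.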
Excellent and sufficient levels were considered desirable levels of sexual health literacy, while inadequate and problematic levels were considered undesirable. The content validity ratio (CVR) and content validity index (CVI) were 0.84 and 0.81, respectively. The internal consistency of the instrument, assessed with Cronbach's alpha, ranged from 0.84 to 0.94 across the identified factors, and intraclass correlation coefficients (ICC) ranged from 0.90 to 0.97 . Data were analyzed by descriptive and inferential statistics using SPSS 16 software. Descriptive statistics, including frequency, percentage, mean, and standard deviation, were used to describe the sexual health literacy of women of reproductive age. Analysis of covariance (at a significance level of 0.05) was used to investigate the relationship between demographic-reproductive factors and the sexual health literacy score. To check the normality of the data distribution, we used the Shapiro–Wilk and Kolmogorov–Smirnov tests, supplemented by visual inspection of Q–Q plots. Levene's test was used to check the homogeneity of variances. A total of 200 married women of reproductive age were studied. The mean age of the research units was 31.65 ± 5.95 years and the mean age of their spouses was 38.67 ± 6.7 years. Most of the research units were housewives (81.5%) with less than university education (78%). The mean age at marriage was 18.93 ± 4.62 years and the mean duration of marriage was 12.18 ± 6.65 years. Most research units (54.5%) did not use any modern contraceptive method. Half of the participants (51%) reported a history of 1–2 pregnancies, and the most commonly reported delivery history was vaginal delivery (43.5%).
The most common source of sexual information was the Internet and social networks (33.5%). Table provides more details on the reproductive and demographic characteristics of the research units. The results in Table indicate that the mean total sexual health literacy score was 75.64 ± 12.81, and that 82.5% of the research units had a desirable level of sexual health literacy. The mean score was 74.04 ± 18.25 for the access domain, 77.34 ± 14.2 for reading and comprehension, 71.87 ± 17.76 for analysis and evaluation, and 79.02 ± 12.79 for application skills. The univariate analysis of covariance reported in Table reveals that the mean sexual health literacy score of women without university education was 5 points lower than that of women with university education (P = 0.021), and that the mean score of housewives was 5.17 points lower than that of employed women (P = 0.026). The mean score of women whose spouses were unemployed was 10.104 points lower than that of women whose spouses were self-employed (P = 0.016); there was no difference in scores across the other spousal occupations. For each additional pregnancy, the mean sexual health literacy score dropped by 1.21 points (P = 0.046). The score of women who had given birth by both methods was 7.956 points lower than that of women with no history of delivery (P = 0.050), and the differences among the other delivery methods were not statistically significant (P > 0.05). This study aimed to investigate the level of sexual health literacy and its relationship with demographic-reproductive factors in married women of reproductive age. The results revealed that 82.5% of the subjects had a desirable level of sexual health literacy.
Despite the large proportion with a desirable sexual health literacy level, the average score (75.64) suggests there is room for improvement. This average is comparable to findings in other studies of the Iranian population (74.12, 68.76) , , suggesting a generally moderate level of sexual health literacy in this population. Among the four domains of sexual health literacy, participants obtained lower scores in the access and analysis-evaluation domains. These results can be explained by the absence in Iran of any source of proper formal sex education, or of sexual health literacy education, for any gender or age group. The finding that the Internet and social networks are the primary source of sexual health information is consistent with trends observed elsewhere. The quality of information gleaned from these sources can be questionable; regardless of accuracy, however, the Internet and social networks are the two main sources of sexual information for Iranians , . Fewer participants scored at an undesirable level in the application domain, indicating that most subjects would use sexual health information in their sexual lives if they received it. In our study, the mean sexual health literacy score of women with university education was higher than that of women with less than university education. In line with these results, Shahrahmani et al. showed that education has a strong relationship with the level of sexual health literacy. Given the different systems and environments of school and university education, and the differing amounts of information and teaching methods, people with university education appear more capable of searching for, accessing, and understanding content than those with less education.
In addition, because there is no university in the village, all participants with university education had experienced independent student life in bigger cities, which may affect their level of sexual health literacy. In the present study, the mean level of sexual health literacy of employed women was higher than that of housewives; likewise, in the studies of Sadeghi (2019), Baghaei (2017), and Masoumy (2018), employed people had a higher level of health literacy – . Employed women understand and apply health information better because they experience more occupational conflicts. Job conflict can strengthen personal strengths: when people face and manage occupational conflicts, their independence, self-confidence, and decision-making power increase, and these strengths can help them better understand their sexual needs and rights, indirectly increasing their sexual health literacy. Beyond the women's own occupations, women whose husbands were unemployed had lower sexual health literacy scores. In Jamali's study (2020), there was no significant relationship between spouses' occupation and sexual health literacy level; in the present study, however, the percentage of unemployed spouses was about four times higher than in Jamali et al.'s study, which may yield more accurate statistical results. Since unemployed people are more exposed to economic hardship and, following Maslow's hierarchy of needs, are more inclined to meet their basic needs, they do not pay enough attention to needs beyond the primary level. Husbands' unemployment can also cause marital incompatibility, problems in marital relationships, and less coordination in family relationships; people involved in these problems focus more on basic needs and are less likely to pursue health needs or to improve their sexual health literacy.
The results of this study also indicated that women with more pregnancies have lower sexual health literacy scores. Consistent with this finding, a study by Sadeghi et al. indicated that participants who reported more pregnancies had lower mean health literacy scores. A possible explanation is that more frequent pregnancies reflect low levels of information about contraceptive methods and poor access to information. In addition, those who reported a history of both delivery methods had lower mean sexual health literacy scores than those with no history of delivery. The lower sexual health literacy of this group may reflect their lower level of education and greater number of pregnancies compared with women who had had only a cesarean section or vaginal delivery, or no delivery at all. Consistent with our results, in the study of Dongarwar and Salihu , participants with a lower level of sexual and reproductive health literacy showed a 44% increase in the prevalence of pregnancy. The results indicated no statistically significant relationship between age or economic status and the sexual health literacy score of the research units. Consistent with this, Vongxay et al. reported no statistically significant relationship between age and sexual health literacy level. On the other hand, in Shahrahmani's study (2023), there was a statistically significant direct relationship between age and economic status and the level of sexual health literacy , and in Simpson's study (2015), older students had higher sexual health literacy scores . The findings of both studies contradict the results of the present study. The discrepancy could reflect differences in the living environments of the study communities: the present study was conducted in a rural community.
In rural communities, the cultural and class gap is small: access to health facilities is limited for everyone, lifestyles are traditional and similar, and people of different ages and economic statuses have access to the same facilities. In this study, the Internet and social networks (33.5%) were the most common source of sexual information. In the study by Byansi et al. , a large number of participants likewise used the Internet and social networks to obtain sexual information, and the results of a qualitative study (Vamos 2015) revealed that most people use the Internet to access sexual information . Both results concur with the present findings. In Vongxay's study (2019), the main source of sexual health information was teachers; that study was conducted among students, for whom teachers are the most accessible source of information . In Graf's study (2015), however, friends were the most common source of sexual information , contradicting the present findings. This discrepancy may relate to the study samples: the participants in that study were middle-aged or elderly people with limited Internet access. Moreover, given that Graf's study dates from 2015, the growth of Internet penetration and cyberspace in the intervening years has likely led more people to use the Internet. In the present study, only 16.5% of women obtained sexual information from health center personnel (midwives, doctors, and health care providers). The results of a qualitative study by Rakhshaee et al. showed that shame about asking sexual questions and the lack of routine sexual health assessment made health care providers the last source of information for women. 
In this regard, Svensson et al. reported that embarrassment and taboos related to sexual health issues created gaps in knowledge and misconceptions about sexual health. Since access to information is the first important dimension of sexual health literacy , and since the Internet is recognized as an attractive source of health information for personal health management , the Internet appears to be a promising way to enhance sexual health literacy and, in turn, to improve health . While technological advancements offer promising avenues for disseminating sexual health information, the digital divide poses a significant challenge for rural women. Limited access to the Internet, smartphones, and digital literacy skills may hinder women's ability to utilize online resources for sexual health education and information seeking. Bridging this gap requires innovative approaches that leverage technology while addressing barriers to access and digital literacy. One limitation of this study was the reluctance of many women to participate in the research because of shame and stigma around sexual issues; some volunteered only after the confidentiality of participants' information was explained. Another limitation is that only the employment status of individuals at the time of sampling was recorded, while their employment history was neglected. Our study provides new insights into the sexual health literacy of married women of reproductive age in rural Iran. However, these findings should be applied to other populations with caution, and further research in diverse settings is needed to confirm and expand upon them. 
A strength of this research is that it was the first study to investigate the sexual health literacy level and its related factors in married women living in rural areas. Although 82.5% of women had a good level of sexual health literacy, the average score leaves room for improvement. The mean sexual health literacy score was related to occupation, spouse's occupation, education, number of pregnancies, and delivery history. The Internet and social networks were the main sources of sexual information, and the quality of this information is questionable. Given the extensive use of the Internet, it is recommended that a website containing accurate and valid sexual health information, affiliated with the Ministry of Health and designed in accordance with the cultural frameworks of the country, be created and its link provided to individuals. Also, since health centers are among the most reliable sources of sexual information, it is necessary to identify the barriers to utilizing these centers (including stigma, judgment, and privacy concerns) and to remove them; teaching counseling skills to health center staff can help overcome these problems. It is also recommended that a study be designed and conducted using an online questionnaire to assess the sexual health literacy of individuals at all levels of the community, not only those who visit the centers for health care. Rural women's sexual health literacy is a multifaceted issue shaped by a complex interplay of socioeconomic, cultural, and structural factors. Efforts to address this issue must be holistic, encompassing educational, healthcare, policy, and community-based approaches. 
By enhancing sexual health literacy among rural women, we can empower them to make informed decisions about their sexual and reproductive health, thereby promoting overall well-being and gender equality in rural communities.
Parental perception of treatment options for mucopolysaccharidosis: a survey to bridge the gap for personalized medicine
Mucopolysaccharidosis (MPS) encompasses a group of rare genetic disorders characterized by the accumulation of specific sugar molecules, the glycosaminoglycans (GAGs), within the lysosomes and the extracellular space, leading to a range of debilitating symptoms that typically manifest in early childhood. While neuronopathic forms of MPS are associated with central nervous system (CNS) manifestations, such as cognitive impairment, both neuronopathic and non-neuronopathic subtypes exhibit somatic features that can significantly impact patients' quality of life . Despite advancements in therapeutic interventions, the treatment landscape for MPS continues to present challenges, leaving patients and their families with substantial unmet medical needs. The currently available treatments, enzyme replacement therapy (ERT) and hematopoietic stem cell transplantation (HSCT), have shown promise in managing some aspects of the disease, such as mobility, endurance, functional outcomes, quality of life (QoL), organomegaly, and respiratory and cardiac function. However, their efficacy is limited by several factors. Both therapies fail to address all pathological and clinical features, owing to insufficient penetration into target tissues, particularly bone, joints, and brain . Furthermore, these treatments are associated with problems such as poor tolerance, infusion reactions, high costs, and the need for frequent invasive administration. For HSCT, challenges include the impact of donor type on clinical outcomes, immunosuppression, graft rejection, and a high rate of associated morbidity and mortality . In addition, no treatment options are available to date for eight of the thirteen MPS types. The limited efficacy of these treatments has prompted a quest for novel therapeutic approaches. Gene therapy (GT), both in vivo and ex vivo, has emerged as a potential avenue for addressing MPS at its root cause. 
However, with this potential for groundbreaking change comes a unique set of challenges that must be carefully navigated. Safety concerns linked to viral vectors are the biggest hurdle, especially from the perspective of parents, who naturally prioritize the well-being and safety of their loved ones . In the pursuit of effective and personalized solutions for MPS, individual treatment trials (ITTs), also known as N-of-1 trials, are a viable option. This approach not only offers patients access to innovative therapies in the sense of drug repurposing, but also serves as a platform for early, low-threshold interventions in a personalized manner . Accordingly, the repurposing of immunomodulatory drugs for individual MPS patients is a rational and available treatment strategy with preclinical and clinical proof of concept in MPS . Despite the promise and potential of ITTs with immunomodulatory drugs, their implementation among MPS-treating clinicians is low. Our preliminary work suggests that this is primarily due to the complex and time-consuming risk–benefit analysis physicians must perform, which acts as a barrier to their use . The perspectives of parents and patients are crucial for clinical studies as well as for ITTs. Therefore, it is essential to gain a comprehensive understanding of how parents perceive this readily available avenue of treatment. By bridging this knowledge gap, patients and parents can be empowered with informed choices, enabling them to actively participate in decision-making about their treatment journey. Moreover, involving patients and their parents in clinical research has been recognized as vital, not only to respect personal views but also to address the ethical, legal, social, and financial implications (ELSI) that arise with each treatment approach . Thus, parental perception and acceptability of treatment options are crucial to guide future research and clinical translation. 
Considering the knowledge gap surrounding parental attitudes toward MPS therapies, our study aims to delve into this crucial area of research. Our quantitative questionnaire-based survey targets the perception of parents of affected MPS children in the DACH region (Germany, Austria, Switzerland). This article provides the first insights into the parental perception of ITTs in comparison to other innovative treatment approaches as well as MPS gold-standard therapies, and reports valuable findings on parental acceptance of possible treatment-related adverse events. We sought to better understand how different therapies are perceived and what factors influence MPS parents' final acceptance or reluctance. By examining their points of view, we aim to reveal valuable insights that can guide clinicians and researchers towards more patient-centered treatment approaches. Furthermore, this stakeholder perspective is an indispensable prerequisite for a rational benefit-risk assessment. Therefore, we aim to integrate these outcomes into the DAF tool for evidence-based, high-quality ITTs in MPS. Within this article, we introduce a pioneering quantitative survey technique based on the discrete choice experiment (DCE). This technique is recognized as a valuable method for complex decisions and has been applied in the orphan disease setting as well as for the evaluation of treatment perceptions . In our survey, we utilized this method by integrating innovative hypothetical MPS patient scenarios. This approach offers new, differentiated perspectives on parental attitudes toward both established and groundbreaking therapeutic approaches, including their willingness to accept potential treatment-related adverse events. 
Consequently, our method allows us to provide data for a distinctive quantitative risk–benefit model that empowers MPS patients and parents with new possibilities, fostering a future where personalized and innovative approaches stand at the forefront of the battle against MPS. In doing so, we embrace the opportunity to reshape the existing landscape of MPS treatments and to bridge the gap between scientific progress and patient-centered strategies. Instrumental development This quantitative research was carried out in accordance with the DCE approach, incorporating novel attributes to support the decision-making of caregivers in the MPS field. The online survey was based on a validated quantitative questionnaire, revised and translated into German . Therefore, participation in the survey was likely limited to German-speaking parents residing in Switzerland. Additional survey items covered the parental perception of different therapies and the personal assessment of potential treatment-related adverse events. Questions were written in a structured-response, single-choice format with a free-text option where appropriate. The final survey contained 30 questions, categorized into five sections: (i) patient characteristics, (ii) parent characteristics, (iii) parental perception of various therapies a priori, (iv) parental perception regarding the likelihood of favoring different therapies, and (v) parental perception of the likelihood of accepting treatment-related adverse events (Suppl. info. ). Section “ ” evaluated the general parental attitude towards ERT, HSCT, GT, and ITTs. In contrast, Sect. “ ” assessed decision-making by using specific hypothetical scenarios involving both (a.) neuronopathic and (b.) non-neuronopathic patients. 
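The survey's two-scenario-by-four-therapy design can be pictured as a simple question template. The following Python sketch is purely illustrative (the symptom wording is condensed from the survey examples and the structure is an assumption, not the authors' actual instrument):

```python
# Illustrative sketch of the scenario-by-therapy question grid; wording
# condensed from the survey examples, structure assumed for illustration.
SCENARIOS = {
    "neuronopathic": ("aggressive and abnormal behavior, sleep disorder, and "
                      "cognitive decline with an increased need for parental care"),
    "non-neuronopathic": ("skeletal abnormalities (dysostosis multiplex), short "
                          "stature, gait disorders, and lack of energy and endurance"),
}
THERAPIES = ["enzyme replacement therapy", "hematopoietic stem cell transplantation",
             "gene therapy", "an individual treatment trial"]

def question(scenario: str, therapy: str) -> str:
    """Render one likelihood-of-favoring question for a scenario/therapy pair."""
    return (f"Imagine your child suffers from MPS-associated {SCENARIOS[scenario]}. "
            f"Current symptoms are mild but clearly and substantially progressive. "
            f"How likely would you choose {therapy} for your child?")

for s in SCENARIOS:
    for t in THERAPIES:
        print(question(s, t))
```

This yields the eight likelihood questions of the comparative section (two clinical courses crossed with four therapies).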
The following text passage gives insights into this section of the survey with ERT as an example: Imagine your child suffers from MPS-associated aggressive and abnormal behavior, sleep disorder, and cognitive decline with an increased need for parental care. The current symptoms are mild but clearly and substantially progressive. How likely would you choose enzyme replacement therapy for your child? Imagine your child suffers from MPS-associated skeletal abnormalities (dysostosis multiplex), short stature, gait disorders, and lack of energy and endurance. Your child will need a wheelchair and even more parental care. Current symptoms are mild but clearly and substantially progressive. How likely would you choose enzyme replacement therapy for your child? The last section of the survey focused on individual assessments of the likelihood of favoring a therapy, involving the consideration of potential treatment-related adverse events weighed against specific beneficial effects. This assessment was facilitated through the presentation of scenarios featuring both (a.) neuronopathic and (b.) non-neuronopathic patients. The following patient scenario gives insights into this section of the survey with mild infections as an example here: Imagine your child suffers from MPS-associated aggressive and abnormal behavior, sleep disorder, and cognitive decline with an increased need for parental care. The current symptoms are mild but clearly substantially progressive. There is a 50% chance that stabilization and no further progression of the disease will be achieved. How likely would you be to take the risk of mild infections with a new treatment? Imagine your child suffers from MPS-associated skeletal abnormalities (dysostosis multiplex), short stature, gait disorders, and lack of energy and endurance. Your child will need a wheelchair and even more parental care. Current symptoms are mild but clearly substantially progressive. 
There is a 50% chance that stabilization and no further progression of the disease process will be achieved. How likely would you be to take the risk of mild infections with a new treatment? Survey distribution The online questionnaire was published on the platform SurveyMonkey® (San Mateo, CA, USA). A link was distributed via national patient organizations in the DACH region between April and July 2023. The cross-sectional survey was anonymous, and participation was voluntary. A cover letter explained the purpose of the study, and informed consent was obtained before completion. Ethics approval This questionnaire-based study was prepared and carried out in accordance with the relevant principles of the International Conference on Harmonization and Good Clinical Practice (ICH-GCP) and was approved by the Ethics Committee in Salzburg, Austria (SS22-0019-0019, Feb. 24th, 2023). Data analysis and synthesis Incomplete survey responses were excluded from the final sample. For the analysis, descriptive statistics, group comparisons, and bivariate correlations, via the Pearson and eta coefficients, were applied. All statistical analyses were performed using Microsoft Excel and IBM SPSS Statistics v. 27® (Chicago, IL, USA). 
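The two bivariate measures named in the analysis plan, the Pearson coefficient (interval vs. interval) and the eta coefficient (nominal vs. interval), can be computed from first principles. The sketch below uses invented toy data, not the study's data, and is only a minimal illustration of the statistics involved:

```python
import statistics as st

# Hypothetical toy records: (category, score). Not study data.
data = [("employed", 78), ("employed", 82), ("housewife", 65),
        ("housewife", 70), ("housewife", 68), ("employed", 75)]
scores = [s for _, s in data]
print("mean:", st.mean(scores), "median:", st.median(scores))

def eta(categories, values):
    """Correlation ratio: sqrt(between-group SS / total SS)."""
    grand_mean = st.mean(values)
    groups = {}
    for c, v in zip(categories, values):
        groups.setdefault(c, []).append(v)
    ss_between = sum(len(g) * (st.mean(g) - grand_mean) ** 2 for g in groups.values())
    ss_total = sum((v - grand_mean) ** 2 for v in values)
    return (ss_between / ss_total) ** 0.5

def pearson(x, y):
    """Pearson product-moment correlation between two interval variables."""
    mx, my = st.mean(x), st.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

print("eta(category, score):", round(eta([c for c, _ in data], scores), 3))
```

In practice SPSS produces the same quantities via its Crosstabs (eta) and bivariate correlation (Pearson) procedures; the code is only meant to make the two measures concrete.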
Survey distribution and response A total of 31 MPS parents from three different patient advocacy groups in the DACH region joined the online survey. Fourteen participants (45%) were excluded due to incomplete submissions. The final sample comprised 17 complete responses—15 mothers (88%) and 2 fathers (12%). The majority of respondents lived in Germany, followed by Austria and Switzerland. The survey distribution is demonstrated in Table below. Patients' characteristics Parents of a total of 17 patients participated, with no cases of two parents representing the same patient or one parent representing multiple patients. The mean age at diagnosis was 34 months (median: 28 months, range: 10–72 months), and the mean current age was 17 years (median: 18 years, range: 4–31 years). The majority of patients had MPS Type III (n = 6), followed by MPS II, IV, and VI (n = 3 each) and MPS I (n = 2); no parents of patients with MPS Types VII, IX, X, or PLUS participated. Over half of the patients (59%) had severe CNS manifestations. Five patients were currently on ERT, and five children received neither causal nor supportive drug therapies. Antipsychotics and anticonvulsants were the most common supportive medications. Parental background and characteristics All respondents had formal schooling, and nearly half had completed academic education. Most were employed, and all had health insurance, with 18% having additional private medical insurance. The majority (88%) of respondents were White, with one person identifying as Black and another as Asian. Further details characterizing the sample are outlined in Table and Suppl. Info . Parents' source of information for MPS treatment options (n, %) The main sources of information were patient organizations (n = 15; 88%), followed by physicians (n = 13; 76%) and other MPS parents (n = 10; 59%). A small number of parents utilized the study registry clinicaltrials.gov (n = 4; 24%) and PubMed (n = 2; 12%). 
General subjective perception of benefits and disadvantages of different MPS therapies Respondents demonstrated a positive attitude towards the approved MPS therapies. Regardless of previous experience or knowledge, ERT received the highest rank: MPS parents perceived ERT as most beneficial, followed by HSCT (Fig. ). Both innovative treatment approaches had a high neutral rank—41% each. Parental perception of the likelihood of favoring different treatment options Parents were asked to rate their openness to innovative therapies in defined situations of limited efficacy of the established therapies. One scenario described a neuronopathic MPS course, while the other detailed a non-neuronopathic MPS course primarily characterized by symptoms affecting mobility (Fig. ). For comparative purposes, the likelihood of deciding for ERT and HSCT was also assessed. In both scenarios, all but two parents indicated that they would likely or extremely likely opt for ERT. Similar results applied to ITTs. Fewer parents decided on HSCT, and fewer still, though more than half, decided on GT. The differences between the two scenarios were minor or nonexistent, except for the percentage of extreme likelihood in favor of ERT. In the neuronopathic scenario, this percentage was much lower (35% vs. 75%), which aligns with the well-known fact that ERT does not effectively reach the CNS. Weighing of potential adverse events against expected benefits To analyze parental risk–benefit assessments regarding specific therapeutic approaches, parents were asked to evaluate which risks they would accept in exchange for a 50% chance of stabilizing either neuronopathic or non-neuronopathic disease progression. They were presented with six known risks associated with immunomodulators, which they had to weigh against the potential benefits of the therapy. 
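The tallying step, turning per-parent Likert ratings for each of the six risks into an acceptance ranking, can be sketched as follows. The risk names match the survey, but the response data and the five-point scale are invented for illustration, not taken from the study:

```python
# Hypothetical Likert responses (1 = definitely not ... 5 = definitely yes),
# one entry per respondent and adverse event. Invented for illustration.
responses = {
    "mild infection":          [5, 5, 4, 5, 4, 5],
    "injection site reaction": [5, 4, 4, 5, 5, 4],
    "hypertension":            [4, 3, 2, 4, 3, 4],
    "hospitalization":         [3, 4, 2, 3, 4, 2],
    "severe infection":        [2, 2, 3, 1, 2, 2],
    "lymphatic neoplasia":     [1, 1, 2, 1, 2, 1],
}

def acceptance_rate(scores, threshold=4):
    """Share of respondents rating a risk as acceptable (>= threshold)."""
    return sum(s >= threshold for s in scores) / len(scores)

# Rank risks from most to least tolerated.
ranking = sorted(responses, key=lambda r: acceptance_rate(responses[r]), reverse=True)
for risk in ranking:
    print(f"{risk:24s} {acceptance_rate(responses[risk]):5.0%}")
```

With data of this shape, mild risks cluster at the top and severe risks at the bottom of the ranking, mirroring the reciprocal acceptance/non-acceptance pattern reported below.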
The responses yielded the following ranking of risk tolerance: mild infections and injection site reactions (ISRs) were almost universally tolerated as adverse effects, followed by hypertension and treatment-associated hospitalization, which approximately half of the respondents would tolerate. Severe infections and lymphatic neoplasia were acceptable to only a third or fewer of the participants. Interestingly, the non-acceptance ratings showed strong coherence, with an exactly reciprocal ranking: lymphatic neoplasia was deemed unacceptable by 70% and 76% of respondents, while ISRs and mild infections showed no instances of non-acceptance (Fig. ). Unlike the previous decisions, there was no difference between the neuronopathic and the non-neuronopathic scenario when it came to specific risks. Using the neuronopathic MPS patient scenario, we found that the majority of parents (≥ 50%) would accept mild infections and ISRs, while the non-neuronopathic MPS patient scenario revealed that the vast majority of parents (≥ 50%) would accept mild infections, ISRs, and hypertension, as well as treatment-related hospitalization (Fig. ). Figure clearly demonstrates how the general attitude ultimately influences the decision-making process. While ≥ 80% of the respondents who showed a positive attitude towards ERT, HSCT, and ITTs would opt for the respective therapy in the specific neuronopathic and non-neuronopathic cases, only half of the parents who had a positive general attitude towards GT also demonstrated a positive perception of the likelihood of favoring GT. Assessing the benefits and risks of therapies by patients and their relatives is critical in many medical fields, yet remains underexplored. A reliable understanding of the patient perspective is particularly critical in clinical decision-making and the development of medical innovations. While there are some insights from studies on the general attitude towards ERT, HSCT, and GT , to date it has not been explored how the general attitude influences the actual decision for or against a specific treatment option in MPS. In this study, we have, for the first time, utilized the DCE method for this purpose and examined the relationship between the general parental assessment and the specific decision for or against established (ERT, HSCT) or innovative (ITT, GT) therapies, as well as the acceptance of relevant therapy-associated risks. A relatively high probability of choosing innovative treatment options was observed in scenarios where the effectiveness of established therapies is known to be limited. 
For example, in MPS IV A patients, ERT has demonstrated limited efficacy in addressing issues related to bone and joint involvement. Additionally, the tolerance for relevant adverse effects of therapy could be conclusively determined. For the first time, the utilization of the DCE approach, a highly robust method for eliciting preferences and evaluating decision-making processes in healthcare contexts, has proven to be a promising option for assessing parental decisions in the context of innovative therapies. Our findings resonate with previous DCE studies on the general attitude towards approved and innovative therapies. Lloyd et al. (2017) used the DCE approach in the context of ERT to identify the relative importance of various treatment attributes in the lysosomal storage disorder Fabry disease. The outcomes of their study indicated a generally positive perception of ERT and demonstrated that, besides overall survival and treatment effectiveness, the route of administration and potential adverse effects are also significant drivers of choice . These results align with our findings and with the notion that approved therapies often garner more trust due to their well-known safety profiles. The general attitude towards innovative therapies in previous DCE studies was consistent with our findings: mainly positive but with clear uncertainties. Specifically, the uncertainty regarding safety and long-term risks and the impact on daily life were attributes that have been closely examined within these studies on GTs . These findings echo our outcomes regarding the general parental attitude toward GT and align with broader trends observed in the literature . To our knowledge, the evaluation of the parental perception of ITTs is novel and has not been studied before. Furthermore, and even more importantly, our project yielded a more nuanced insight into parental decision-making. A noteworthy divergence arises between parents' perceptions of innovative therapies. 
The general attitudes towards the advantageousness of GTs and ITTs were very similar, with slightly favorable ratings of 53% and 59%, respectively. However, by using specific MPS patient scenarios with a clearly defined treatment outcome, parents' likelihood of a decision in favor of ITTs significantly increased (neuronopathic 82%/non-neuronopathic 94%), while parents’ likelihood of a decision in favor of GT stagnated (neuronopathic 58%/non-neuronopathic 53%). These results signify parents' willingness to explore novel personalized interventions with potential benefits and suggest individual patient-centered approaches. Furthermore, this emphasizes the need for tailored communication strategies to ensure that MPS parents possess accurate and comprehensive information regarding all therapeutic avenues. Interestingly, other authors have shown that families were more inclined to discontinue non-evidence-based treatments when they perceived no improvement in their child's functioning , emphasizing the relevance of our weighing approach. Importantly, this approach is not limited to assessing the risks associated with immunomodulatory drugs but can also be extended to intended treatment effects, thereby facilitating a more rational decision-making process through direct comparison of desired and undesired factors of therapies. We anticipate that this method holds significant relevance beyond immunomodulation, MPS, or ITTs, with implications for a wide range of clinical contexts. Considering the importance of weighing risks against benefits in clinical research, particularly in the context of clinical trials, our findings prompt critical questions regarding meaningful and sustainable patient and parent engagement. Several studies have already indicated that, besides an objective assessment of risks and benefits, a subjective weighing of desired against undesired effects is also needed . 
While the former is a necessity for informed consent, the latter is crucial for the recruitment and completion of a clinical trial. Our research highlights once more the need to consider the perspective of parents in paediatric clinical research, as their insights and concerns are often overlooked but hold immense significance in shaping strategies for study outcomes, success, and patient experiences . Moreover, our survey revealed that 88% of respondents consider patient advocacy groups as their key source for treatment information and updates on MPS. This is consistent across all three countries. These findings underscore the critical role of patient organizations in ensuring that parents receive accurate, reliable, and up-to-date information about treatment options. Patient organizations can bridge the gap between medical professionals and patients. This trend might be more prominent in the DACH region compared to the US. An analysis by NORD® RARE in 2021, which highlights barriers to rare disease diagnosis, care, and treatment in the US, indicated that only 40% of rare disease patients use patient advocacy organizations as their main source of information, in comparison to physicians (approx. 65%) and internet searches (approx. 75%) . However, the analysis by NORD® RARE also encompasses patient organizations beyond MPS representatives, which might distort the assessment. Guffon et al. (2022) further emphasized the importance of bridging the gap between medical professionals and patients through qualitative research on the challenges, unmet needs, and expectations of MPS patients and their families in France . Contrasting with the prominence of patient organizations, our study exposes an underutilization of scientific databases such as PubMed and clinicaltrials.gov: only 12% and 24% of the survey respondents, respectively, actively use them. Interestingly, the use of these scientific databases is independent of academic degree. 
Thus, the educational level might not be a proxy for parents’ capability to search and understand research articles and study protocols . While our research offers important insights, the small sample size (n = 17) and skewed gender distribution (fathers: n = 2, mothers: n = 15) limit the generalizability of the findings. The high dropout rate of 45% may be caused by the survey's length, as the dropout rate increased markedly with the beginning of Section “ ” about the parental perception of therapies. In this segment, a noticeable reduction in participants occurred, declining from the initial 27 to 17. Thus, a selection bias in favor of participants with higher text comprehension skills cannot be excluded. However, given the survey's steadily increasing complexity, the 17 remaining participants can be considered to have provided valid responses. One of our future aims is to capture all interested MPS parents. As the DCE approach proved to be helpful for gaining profound insights into parental perception and decision-making, the use of DCE in conjunction with interviews might be advantageous to reduce the survey's complexity. Furthermore, our study facilitates the integration of findings into a quantitative model for personalized benefit–risk assessment in MPS. Multiplying the occurrence probability by the personal importance yields a rational tool that can be transferred to different therapeutic contexts. We posit that this should serve as a prerequisite for accuracy in guiding clinical decision-making processes. This research sheds light on the multifaceted landscape of parental perceptions of MPS therapies by using an innovative survey concept based on the DCE approach with specific patient scenarios for a more differentiated and reliable analysis, thereby providing the rationale for a quantitative risk–benefit model that incorporates parents' and patients' voices. 
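The quantitative benefit–risk model outlined above (occurrence probability multiplied by personal importance, compared across desired and undesired effects) can be illustrated with a minimal sketch; all effect names, probabilities, and importance weights below are hypothetical and are not taken from the study data:

```python
# Illustrative "probability x importance" benefit-risk score.
# All numbers are invented examples, not study results.

def weighted_score(effects):
    """Sum of occurrence probability times personal importance over all effects."""
    return sum(prob * importance for prob, importance in effects)

# (probability, personal importance on a 0-10 scale) -- hypothetical values
benefits = [(0.50, 9.0)]               # e.g., 50% chance of disease stabilization
risks = [(0.30, 2.0),                  # e.g., mild infections (widely tolerated)
         (0.05, 8.0)]                  # e.g., lymphatic neoplasia (rarely tolerated)

net = weighted_score(benefits) - weighted_score(risks)
print(f"benefit={weighted_score(benefits):.2f}  risk={weighted_score(risks):.2f}  net={net:.2f}")
```

A positive net score would favor the therapy under one family's weights; eliciting different importance weights from different families is what makes the assessment personalized and transferable to other therapeutic contexts.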
Acknowledging strengths and limitations, this research paves the way for a more patient-centered and informed approach to rational personalized treatment decisions.
A quality improvement initiative aimed at reducing service strain and improving physician wellness in internal medicine
Problem description The Division of General Internal Medicine (GIM) at The Ottawa Hospital (TOH) has experienced a 40% increase in admissions from 2013 to 2022. The uncapped internal medicine clinical teaching units (CTU) have been disproportionately affected by increasing service demands. Internal feedback has consistently emphasized that high patient volumes on CTU, because of rising admissions, are having ongoing negative impacts on physician wellness, trainee education, and patient care. Available knowledge Large academic centres are particularly vulnerable to capacity strain, defined as a supply-demand mismatch in resources required to provide care, as well as high physician workload . Hospital capacity strain and excessive physician workload have both been associated with physician burnout and a healthcare practitioner perception of providing low-quality care . Capacity strain has also been associated with poor patient health outcomes such as increased length of stay , prescription errors , and mortality . In academic centers, capacity strain is also perceived as negatively affecting physicians’ ability to fulfill educational and research activities , while additionally hindering trainee educational experience . In Canada, this problem has been identified as a priority area, with workload and CTU redesign featured as a keynote topic at the national internal medicine society’s annual meeting in 2023 . Rationale Solutions to hospital strain and physician workload have been explored. Ensuring adequate staffing by maintaining a stable census across providers was perceived to be effective in addressing hospital strain at large academic centers , and limiting workload with a census cap reduction was shown to increase trainee educational experience, while reducing adverse patient outcomes . Nocturnist programs and continuous models of admission were also described as mechanisms to improve clinical workload. 
Alternate interventions were considered when choosing our approach. Adding a new team staffed by the current pool of GIM attendings was not possible given human resource constraints. For instance, a non-teaching team that had been created to offload the surge of COVID patients during the pandemic was closed at the end of 2022 due to inability to ensure staffing. Creating a new team staffed by a GIM attending and a nurse practitioner or physician assistant (PA) was also considered; however, there was no available hospital funding to support these uninsured services. In fact, our previously PA-run non-teaching team was closed in the fall of 2022. Thus, a hospitalist service, to be staffed by non-academic family physicians and internists, funded largely by the provincial insurer rather than the hospital, was determined to be the most feasible option. Aim The primary aim was to achieve and maintain a mean daily CTU census of ≤ 75 patients (approximately 25 patients per team) at one hospital site, representing a 30% relative decrease given a projected CTU census of 111.5 in 2023 with the closure of two non-teaching teams. The secondary aims were to characterize and evaluate physician wellness and trainee experience on CTU in relation to census, as well as to assess for possible adverse outcomes associated with the intervention. 
Context TOH is a tertiary care academic center with over 1300 beds at two campuses, the General and the Civic sites. Both campuses operate under the same leadership, have access to the same financial resourcing, and have similar patient acuity as well as proportions of medically inactive patients. Half of the GIM attendings and all internal medicine (IM) trainees work across both sites. Both campuses host three uncapped CTU teams, each comprising one attending physician and variable house staff (0–1 senior internal medicine residents and 0–4 junior first-year residents). There are also two ≤ 15-patient non-traditional GIM teams per site that are staffed by one attending and, often, a senior resident. The General campus was chosen as the intervention site given historically higher census numbers. Intervention The intervention was rolled out at the General Campus site only. The GIM service continued to operate 3 CTU teams and 2 non-traditional teams (collectively referred to as “GIM”). Additionally, a new hospitalist service line was added (“hospitalist”). In phase 1, two 15-patient hospitalist teams were established in January 2023. 
In phase 2, a third team opened, bringing the total number of patients cared for by the hospitalist teams to 45 as of April 2023. Patients admitted to the GIM and hospitalist teams were initially assessed by the internal medicine consult service, with unstable and more medically complex patients being admitted to the GIM service. Admissions were distributed in a continuous fashion and balanced across the teams. Admission algorithms (Supplement ) and new workflows (Supplement ) were introduced. An existing divisional administrative dashboard that includes census, admission numbers, length of stay, and mortality was monitored for potential adverse outcomes throughout the implementation period. Regular feedback was sought from divisional members regarding the logistics and workflow of the intervention, and issues were addressed as required. The approximate costs required to run the program over the course of the year are presented in Supplement . Outcomes Mean daily CTU census was the primary outcome to assess the aim of improving CTU service strain. The secondary process measures of mean daily and monthly admissions, as well as the percentage of non-acute patients who remained in hospital with ongoing discharge barriers as designated by alternative level of care (ALC) status, were reported for context when evaluating the intervention. Physician wellness scores as well as reported perceptions of the work environment and hospitalist program further assessed the impact of the intervention. Finally, in-hospital mortality rates, 30-day readmission rates, patient safety incidents, and mean length of stay were chosen as balancing measures based on adverse outcomes reported in the literature . Data sources and analysis Administrative data Administrative health data was obtained for the cohort of patients who were admitted to the GIM and hospitalist services at TOH between October 1, 2022 and December 31, 2023. 
Patient characteristics (age, sex, Charlson Comorbidity Index ), admission details (date, campus, service), acute length of stay (LOS), days to re-admission, and in-hospital death were obtained from the hospital administrative database, MDClone. Mean daily team census and monthly ALC proportions were obtained from TOH administrative data tracking system. Finally, the number of patient safety incidents was obtained from TOH patient safety learning system (SLS). Survey data Physician wellness scores were captured using the Mini-Z physician 10-item survey that was validated among internal medicine physicians to assess three wellness outcomes: burnout, stress, and job satisfaction . The voluntary survey was initially emailed to GIM attendings and IM residents who worked at the intervention site in March 2023. Survey administration in June and December 2023 included all GIM division attendings, IM residents, and hospitalist staff. Important passive qualitative feedback on CTU educational experiences of both attendings and learners, as well as perceptions of the hospitalist program, was captured in the final Mini-Z open-ended question. Prospective respondents were given two weeks to complete the survey, with two email reminders sent during that period. Analysis Data were aggregated by calendar quarters (Q1-Q4). Comparisons were made between the pre-implementation quarter (Q4 2022) and the intervention study period (Q1-Q4, 2023), as well as between the intervention and non-intervention sites (TOH Civic Campus), where the CTU teams are structured similarly. Attendings who worked at both sites and learners who worked at the General campus after the intervention was implemented were analyzed as intervention site respondents. Mean scores and associated 95% confidence intervals were reported for the overall total and subscale survey scores, and significant differences were defined by non-overlapping confidence intervals. 
Pearson’s correlation coefficient was reported for the correlation between burnout and CTU census. A Chi-squared test of independence was used to test differences in proportions, while a t-test was used to report differences in mean LOS over time. An inductive thematic analysis of the open-ended responses from the Mini-Z survey was undertaken . Two reviewers (JE, MGS) initially independently created codes from anonymized survey responses and met to create the final codebook. The reviewers also determined that data saturation was met. The codebook was used to code the anonymized open-ended survey responses in Excel in duplicate (MGS & JE), and any disagreements were discussed with a third reviewer (SR) until consensus was reached. The final coding reports were analyzed, and themes were identified. 
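Pearson's r, as used here for the burnout–census correlation, can be sketched in pure Python; the paired values below are illustrative placeholders, not the study's survey data:

```python
import math

# Pearson's correlation coefficient between two equal-length series:
# covariance of the deviations divided by the product of their standard deviations.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

census = [71.3, 76.1, 78.3, 78.6]    # hypothetical quarterly mean CTU census
burnout = [0.55, 0.62, 0.70, 0.72]   # hypothetical burnout proportions
r = pearson_r(census, burnout)
print(f"r = {r:.3f}")                # a value near +1: burnout rises with census
```

In practice a library routine (e.g., `scipy.stats.pearsonr`) would also supply a p-value; the hand-rolled version above only shows where the coefficient comes from.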
Authors JE, KW, DHS, and MGS are General Internal Medicine physicians who have personal experience with working in the study setting and know through their work how increased CTU patient numbers can impact physician wellness. SR is a researcher and has no experience working within the specific study setting. Census and administrative outcomes There was a total of 5092 admissions to internal medicine at TOH’s General campus from January to December 2023 (Supplement ). 
There was an initial 9.9% relative decrease in mean CTU census at the intervention site in Q1 (coinciding with phase 1) where the average census decreased from 84.5 to 76.1, followed by a further decrease in Q2 to 71.3, achieving a maximum decrease of 15.6% (phase 2) (Fig. ). CTU census then increased and plateaued in Q3 (78.3) and Q4 (78.6). In comparison, at the non-intervention site, the mean daily CTU census was 84.5 in Q4 of 2023, representing a 15.2% increase from a mean daily census of 73.4 the year prior (Q4 2022) (Supplement ). The mean number of monthly admissions to internal medicine increased by 14.3% from Q4 2022 to Q4 2023 (Table ). At the end of the study (Q4 2023), the mean daily census per CTU team was 26.2 patients with a mean of 3.0 admissions per day. The proportion of patients designated as ALC on the internal medicine services also increased over the year (Table ), with the relative increase being over six-fold greater for patients admitted to the GIM service with uncapped CTU teams compared to the hospitalist service. The proportion of 30-day readmissions ( p = 0.262), in-hospital deaths ( p = 0.854), and safety incidents associated with harms ( p = 0.622), as well as mean LOS ( p = 0.977) for all patients admitted to internal medicine were not different from pre-implementation (Q4 2022) to the end of the study period (Q4 2023) (Table ). Wellness survey outcomes The response rate for GIM attendings decreased with time and was 61.5% ( n = 16/26), 43.6% ( n = 17/39), and 41.9% ( n = 18/43) in March, June, and December. The average resident response rate was 24.0% ( n = 68/283) while the hospitalist response rate was 37.0% ( n = 20/54) across all surveys. There was an upward trend and an absolute increase in the proportion reporting job satisfaction of 11.0% among attendings and residents who worked at the intervention site over the survey period (Table ). 
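The relative census changes quoted above follow from standard percentage-change arithmetic; as a quick check, the inputs below are the mean census values stated in the text:

```python
# Percentage change between two census means; negative values are decreases.
def relative_change(before, after):
    return (after - before) / before * 100

print(round(relative_change(84.5, 76.1), 1))  # Q4 2022 -> Q1 2023: -9.9 (a 9.9% decrease)
print(round(relative_change(84.5, 71.3), 1))  # Q4 2022 -> Q2 2023: -15.6 (a 15.6% decrease)
```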
Work stress remained stable throughout the study period, with just over half of respondents reporting significant job-related stress (Table ). The proportion of GIM attendings and IM residents reporting burnout at the intervention site initially decreased after the implementation of the third hospitalist team in Q2, but then increased again at the end of the study period (Table ). Burnout was positively correlated with rising average daily CTU census numbers (Pearson correlation coefficient 0.906) (Fig. ). This trend was observed when comparing burnout rates during phase 1 and 2 of the intervention period at the General campus as well as when analyzed across sites. The proportion reporting burnout among GIM attendings and residents who worked on CTU at the intervention campus (65.2%, n = 15) versus those who did not (94.1%, n = 16) was significantly different ( p = 0.033). The mean total and subscale wellness scores on the final survey in December are significantly greater for the hospitalist group as compared to the GIM attendings and residents (Fig. ). Correspondingly, the burnout rate was only 9.1% ( n = 1) for the hospitalist group, but 53.8% ( n = 7) and 80.0% ( n = 8) for the GIM attendings and residents, respectively. GIM attendings (38.5%, n = 5) and residents (30.0%, n = 3) also reported low control over workload as compared to hospitalists (100.0%, n = 11). Qualitative assessment Of the GIM attendings, residents, and hospitalists who worked at the intervention site, 68 responses to the Mini-Z open-ended survey question were obtained over the three survey periods, representing a written response rate of 16.2% ( n = 68). The themes identified from qualitative analysis are presented below in detail (Supplement ). 
Clinical service demands High patient volumes, acuity, and complexity, coupled with the lack of control over workload when on service for the clinical teaching units, were consistently reported as stressors by GIM attendings and internal medicine residents alike. Suboptimal scheduling of residents, in conjunction with high patient loads, was also reported by both groups to be the main contributors to prolonged and unpredictable workhours. This heavy clinical burden was repeatedly described as unsustainable, overwhelming, and exhausting. One attending reported: “The issue with clinical care on Medicine teams is that there are too many too sick patients on teams…. The reality of an unpredictable workforce…is an undue and unacceptable stress.” (#15mGIM). Resident comments frequently reflected similar concerns. In addition, residents reported call burden and frequent urgent call coverage requests as other key stressors. One resident captured these sentiments well: “Ongoing high cognitive burden due to high caseloads of complex patients requiring involved care/multiple follow-ups throughout the day , busy calls/consult services.” (#48dIM). There were no hospitalist responses under this theme. Similarly, GIM attendings also reported that working on smaller capped non-traditional teams was less stressful and more enjoyable than working on CTU. Academic matters Attendings and residents responded that teaching and learning were impeded significantly by high clinical service demands. One attending wrote: ” It has been a long time since I was able to do proper bedside teaching on CTU the way the students and residents deserve.” (#16dGIM). Similarly, a resident opined: “Because of very service heavy i can count the number of times we had teaching im all CTU rotations on one hand. There isn’t even enough time to finish up seeing patients and doing the notes to have any sensible teaching…” (#26mIM). 
Moreover, attendings indicated that non-clinical work demands, such as learner evaluations, external teaching activities, and research, were underrecognized and undercompensated, which further reduced work satisfaction.

Patient safety/quality of care

Patient safety concerns were emphasized in association with high patient volumes and limited house staff on the clinical teaching units. GIM attendings and residents described the current environment on CTU using language such as “dangerous” and “unsafe”: “It becomes very unsafe when we’re expected to look after 25+ patients on our own with a couple med students and maybe 1…junior [resident] (this happens more often than not).” (#7mGIM). Although attendings emphasized patient safety and quality concerns more often than residents, residents also acknowledged these concerns: “The amount of patients we see as residents in a daily basis is…unsafe.” (#25mIM). GIM attendings and residents indicated that high patient-to-nursing ratios added to safety concerns. One resident wrote: “Nursing pressures in terms of unsafe patient ratios adds significant stress to residents… being paged to change the management plan based on the nurse’s ability complete tasks when they have too many sick patients to [care] for.” (#3mIM). Lastly, GIM attendings, hospitalists, and residents noted that excessive documentation requirements and information overload associated with the EMR detracted from patient care and potentially increased medical errors.

Human resources/administrative support

A significant source of frustration, particularly amongst GIM attendings but also among hospitalists and residents, was the system expectation for physicians to remain responsible for managing large numbers of medically inactive patients, who remained admitted due to complex social situations and disposition challenges. One GIM attending wrote: “Relieving GIM of the care of ALC patients.
They are time consuming in their own way due to social/discharge planning and allied health requirements…it does not call on GIM expertise whatsoever.” (#10mGIM). Similarly, one hospitalist wrote: “Many complex social issues…cause stress” (#5jH), while a resident requested: “Less ALC [alternate level care] patients and social admissions.” (#56dIM). In addition to a call for offloading ALC patients from medical teams, solutions presented by all three groups included provision of allied health support on the weekends to assist with patient flow, re-hiring of a hospital discharge planning coordinator and/or physician assistants to take over all disposition planning responsibilities, and geographically co-locating patients to limit the time required for the medical team to communicate with multiple allied health teams and nursing leaders.

Communication

A shared sentiment amongst respondents from all three groups was the strain of frequent messages received, particularly in the EMR chat, from nurses, allied health professionals, and clerks; these messages were described in numerous responses as disruptive, inefficient, non-urgent, and excessive. Proposed solutions were to limit EMR messages to only urgent issues and even potentially to remove completely the ability of nurses to contact physicians via the EMR. The other primary communication issue noted by the GIM attendings was the view that other subspecialty services were reluctant to assist with patient care and accept appropriate admissions at times of high patient volumes. One attending suggested: “…all subspecialties pulling their weight and admitting appropriate patients…would alleviate some frustration.” (#30dGIM). Similar opinions were shared across time, on all three surveys.

Assessment of hospitalist service

The comments solicited from GIM attendings and residents regarding the hospitalist program primarily related to its effectiveness at reducing patient volumes.
Both groups consistently reported that they felt the hospitalist program was beneficial. Comparing the site where the hospitalist service had been implemented with the site where it had not, one resident wrote: “Having done previous CTU blocks at the civic then doing one at the general with the hospitalist service running, I have definitely seen a huge change in workload (in a positive way) and a great amount of teaching done throughout the block which was not feasible at the civic due to the very high number of very active patients per CTU” (#46jIM), while another commented: “Large workload; to me it seems with the hospitalist program at the General it’s a bit better (i.e. better CTU numbers) while I often hear CTU numbers at the Civic going up to the high 30s and up to 40.” (#47dIM). However, there were many concerns that these benefits were temporary due to hospitalization trends as well as ongoing challenges with discharging non-medically active patients from the teaching teams. One GIM attending noted: “The hospitalist service is good…my only concern is that it feels like adding lanes on a highway to deal with traffic. It is only a short-term solution…this is what has happened on our CTU teams. They were at 18–22 in January which is perfect, but now they are back up to 30 with many social/non-medical pts, and it doesn’t feel any different compared to before.” (#13mGIM). The feedback from the hospitalist respondents regarding the program itself was generally positive; for example, one respondent wrote: “Great workplace environment, team comradery, and excellent leadership.” (#13dH). Overall, the hospitalist respondents’ criticisms were predominantly directed at hospital and health care system issues and not toward the program implementation.

There was a total of 5092 admissions to internal medicine at TOH’s General campus from January to December 2023 (Supplement ).
There was an initial 9.9% relative decrease in mean CTU census at the intervention site in Q1 (coinciding with phase 1), where the average census decreased from 84.5 to 76.1, followed by a further decrease in Q2 to 71.3, achieving a maximum decrease of 15.6% (phase 2) (Fig. ). CTU census then increased and plateaued in Q3 (78.3) and Q4 (78.6). In comparison, at the non-intervention site, the mean daily CTU census was 84.5 in Q4 of 2023, representing a 15.2% increase from a mean daily census of 73.4 the year prior (Q4 2022) (Supplement ). The mean number of monthly admissions to internal medicine increased by 14.3% from Q4 2022 to Q4 2023 (Table ). At the end of the study (Q4 2023), the mean daily census per CTU team was 26.2 patients, with a mean of 3.0 admissions per day. The proportion of patients designated as ALC on the internal medicine services also increased over the year (Table ), with the relative increase being over six-fold greater for patients admitted to the GIM service with uncapped CTU teams than for the hospitalist service. The proportion of 30-day readmissions (p = 0.262), in-hospital deaths (p = 0.854), and safety incidents associated with harms (p = 0.622), as well as mean LOS (p = 0.977), for all patients admitted to internal medicine were not different from pre-implementation (Q4 2022) to the end of the study period (Q4 2023) (Table ).

The response rate for GIM attendings decreased with time and was 61.5% (n = 16/26), 43.6% (n = 17/39), and 41.9% (n = 18/43) in March, June, and December, respectively. The average resident response rate was 24.0% (n = 68/283), while the hospitalist response rate was 37.0% (n = 20/54) across all surveys. There was an upward trend and an absolute increase of 11.0% in the proportion reporting job satisfaction among attendings and residents who worked at the intervention site over the survey period (Table ).
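The relative census changes quoted above follow from simple percentage arithmetic; a minimal check using the census figures stated in the text:

```python
# Check of the relative census changes reported above, using the mean daily
# CTU census figures stated in the text.

def relative_change_pct(before: float, after: float) -> float:
    """Relative change from `before` to `after`, as a percentage."""
    return (after - before) / before * 100.0

# Phase 1: 84.5 -> 76.1, reported as a 9.9% relative decrease
phase1 = relative_change_pct(84.5, 76.1)
# Phase 2: 84.5 -> 71.3, reported as the maximum decrease of 15.6%
phase2 = relative_change_pct(84.5, 71.3)
# Non-intervention site: 73.4 (Q4 2022) -> 84.5 (Q4 2023)
non_intervention = relative_change_pct(73.4, 84.5)

print(round(phase1, 1), round(phase2, 1))  # -9.9 -15.6
```

Note that the third figure rounds to 15.1% rather than the reported 15.2%, which likely reflects unrounded underlying census values.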
Our project aim of reducing mean daily CTU census to less than 75 patients was only briefly achieved in the second calendar quarter following implementation of the third hospitalist team.
Census numbers then increased above target, although they stabilized at approximately 7% below historical (Q4 2022) numbers for the second half of 2023. The rising census was most likely due to increasing admissions over time, a trend that has been observed nationally. Our data highlight the need for interventions targeting census to be responsive to projected growth in admissions over time. Possibly also contributing to the rising census was the increased proportion of ALC patients admitted to internal medicine, since these patients typically experience longer admissions due to discharge planning barriers. Strategies that address ALC numbers, such as an increase in subacute medical beds on transitional care units, are therefore also important. Finally, the closure of a non-teaching GIM team and a PA-run GIM non-traditional team in the months leading up to the intervention (accounting for ≤ 27 beds) may have blunted the impact of the addition of 45 beds (≥ 18 absolute beds added). In addition to not achieving our goal of a sustained reduction in census, we also did not observe an improvement in wellness during the intervention period. In fact, burnout increased during the intervention period. Interestingly, we observed a strong correlation between burnout and CTU census within and across both sites. The burnout rates in this study were slightly higher than those reported for internal medicine physicians and trainees in the United States, as well as the rates of resident burnout reported in other Canadian studies. Notably, these comparison studies were conducted prior to the COVID-19 pandemic, and it has been shown that physician burnout has generally increased since the pandemic. In the literature, drivers of physician burnout include excessive workloads, lack of control, inefficient work processes, and administrative burdens.
In our qualitative analysis, these factors were also consistently reported among GIM attendings and residents who worked on high-volume CTU teams. Moreover, compared with hospitalists, residents and GIM attendings had significantly lower overall wellness scores, higher reported rates of burnout, lower control over workload, and insufficient time for documentation. This is likely due to higher patient volumes on CTU teams, as well as the lack of workload control when team census is not capped. Reported job satisfaction increased during the study period. This indicates that there are other workplace factors beyond strain that are associated with satisfaction, such as value alignment with leadership. Indeed, in this study, the mean subscale scores corresponding to workplace support were significantly greater than the mean subscale scores corresponding to pace of work. Even so, aside from improved control of and reduction in workload, our study identified potential solutions to alleviate workplace stressors, including increased support for managing complex dispositions as well as non-active medical patients. Additionally, improved communication and collaboration with nursing and other physician colleagues, as well as reducing EMR burden, were identified in this study and others as solutions to improve workplace stressors. The ideal CTU census is not known and would be institution dependent; however, reducing clinical workload by implementing a census cap of 15–20 patients has been shown to improve the educational experience and reduce adverse patient outcomes. Through our qualitative analysis, we show that reducing census from 28.2 (Q4 ’22) to between 23.8 (Q2 ’23) and 26.2 (Q4 ’23) was associated with some perceived improvement of the CTU learning environment. However, given the low wellness scores, the ideal census is likely lower and may need to be supplemented with other interventions such as enhanced team rounds or geographic wards.
Nonetheless, in line with previous evidence, our qualitative analysis identified that, among the contextual factors influencing attending physicians’ and trainees’ perceptions of the CTU environment, caseload, patient turnover, medical acuity, social complexity, and house staff absences were of utmost importance. These factors should be targets of further interventions to improve physician wellness. This was a single-center study, and generalization of our findings to other institutions should be undertaken while carefully considering system differences. Regardless, healthcare delivery and training challenges associated with ever-evolving population demographics, disease complexity, and technology are not unique to our academic centre and have been reported previously. Volunteer as well as attrition biases may have been introduced if those who were more likely to respond to the surveys over time were experiencing different levels of burnout compared with those who did not respond; this concern would be greatest for the resident group, where response rates were low. Several constraints were imposed by the rapid rollout of the intervention to address critical service demands. First, this resulted in a lack of pre-implementation wellness data at the intervention site, as well as a delayed start in data collection at the non-intervention site, such that caution should be employed when comparing burnout rates between sites, despite the highly comparable features of the sites and the similar trends observed between burnout and census at both sites. It also limits our study’s ability to fully appreciate the impact of the intervention and to make conclusions about how wellness trended over time. Second, the study did not include education-specific measures, which, when combined with low response rates, particularly in the resident group, restricts discussion about how this intervention improves learner experiences.
Future studies should have a longer pre-implementation assessment phase and strive to capture education- and resident-specific measures, as well as include other important workload measures such as house staff ratios, patient turnover rates, and case complexity. The addition of a non-academic hospitalist service was received positively and reduced CTU census numbers, but the improvement in service strain was limited by rising admissions over time. Burnout rates, which correlated with census, remained high, and despite half of respondents reporting job satisfaction, overall wellness scores were low. Multifaceted approaches to wellness are needed, and further research is required to better understand the impact of interventions aimed at improving subacute care demands, increasing supports for managing complex discharges, enhancing communication with nursing and other physician colleagues, and reducing EMR burden. Below is the link to the electronic supplementary material. Supplementary Material 1
BIX01294 inhibits EGFR signaling in EGFR-mutant lung adenocarcinoma cells through a BCKDHA-mediated reduction in the EGFR level
Lung cancer is a leading cause of cancer death worldwide and is generally classified as small cell or non-small cell lung cancer (SCLC or NSCLC). Approximately 80–85% of lung cancers are NSCLCs. Despite significant diagnostic and therapeutic advances in recent decades, the overall 5-year survival rate of NSCLC is still only approximately 15%. Conventional chemotherapeutic drugs such as cis-diamminedichloroplatinum (II) (cisplatin) and paclitaxel (Taxol) are generally used for lung cancer therapy. However, primary or acquired resistance of NSCLCs to these conventional chemotherapeutic drugs is common, indicating an urgent need to identify new therapeutic targets to treat these devastating cancers. Epidermal growth factor receptor (EGFR), a transmembrane glycoprotein with an extracellular epidermal growth factor binding domain and an intracellular tyrosine kinase domain, belongs to the ErbB family of receptor tyrosine kinases. The interaction of EGFR with its ligand leads to its autophosphorylation via its intrinsic tyrosine kinase activity, triggering several signal transduction pathways. Constitutive or sustained activation of the downstream targets of EGFR is known to be associated with various cellular functions, including cell proliferation, motility, and cancer cell survival. Interestingly, approximately 10–35% of patients with lung adenocarcinoma harbor tumor-associated EGFR mutations. EGFR mutations are located in exons 18–21, which encode a portion of the EGFR kinase domain, and include exon 19 deletions (del19) and the L858R mutation in exon 21. These mutations cause an increase in kinase activity, leading to constitutive activation of signal transduction pathways, which in turn induces cell proliferation or blocks the apoptotic response, regardless of the presence of extracellular ligands.
Chemotherapeutic drugs targeting the kinase activity of EGFR, including first-generation EGFR tyrosine kinase inhibitors (TKIs) such as gefitinib and erlotinib, have been shown to be clinically successful in many NSCLC patients harboring an EGFR mutation. However, the T790M mutation leads to acquired resistance to first- and second-generation EGFR-TKIs – . Several third-generation EGFR-TKIs have recently been developed in an attempt to overcome T790M-mediated resistance, but although they proved to be effective initially, as found for first- and second-generation EGFR-TKIs, the occurrence of additional mutations such as C797S in EGFR or FGFR1 gene amplification induced acquired resistance to these new drugs . Hence, there is a strong impetus to identify new therapeutic approaches to overcome acquired resistance to EGFR-TKIs in lung cancer patients. BIX01294 (hereafter BIX) was the first small-molecule G9a inhibitor to be developed and exerts antitumor effects on various cancer cells. BIX activates apoptosis via a USP9X-mediated reduction in Mcl-1 expression in bladder cancer , induces autophagy-mediated cell death via accumulation of reactive oxygen species (ROS) in breast cancer , , and upregulates p53 expression in colon cancer . In addition, BIX promotes TRAIL-induced apoptosis via downregulation of survivin and upregulation of DR5 expression in renal cancer . Of note, BIX was found to suppress cell proliferation and trigger apoptotic cell death in lung cancer by downregulating Bcl-2 expression . Furthermore, BIX was shown in another report to reduce the viability of NSCLC cells through induction of autophagy . However, the precise mechanisms underlying this antitumor activity of BIX, particularly in relation to EGFR signaling, remain largely unknown. In our current study, we demonstrate for the first time that BIX has inhibitory effects on lung cancer metabolism.
We found from our analyses that BIX induces apoptotic cell death only in EGFR-mutant NSCLC cells by inhibiting the activity of Jumonji histone demethylases, particularly KDM3A, rather than through blockade of the G9a histone methyltransferase. Interestingly, BIX exposure also causes a significant reduction in the BCKDHA level, leading to reduced branched-chain amino acid (BCAA) metabolism-mediated TCA cycle fueling. We further found that suppression of BCAA metabolism also dramatically inhibited EGFR signaling, which in turn induced apoptosis. Significantly, BIX overcomes acquired resistance to EGFR-TKIs and is thus a possible new therapeutic approach for EGFR-mutant NSCLC.
Cell culture
EGFR-WT (H460 and A549) and EGFR-mutant (H1975; L858R + T790M) NSCLC cells were acquired from the American Type Culture Collection. The PC-9 (del19) cell line was a kind gift from Dr. Kazuto Nishio (National Cancer Center Hospital Tokyo, Japan) and has been previously characterized , . PC-9/GR (a gefitinib-resistant cell line) and PC-9/ER (an erlotinib-resistant cell line) cells were established as part of a previous study . All cells were maintained at 37 °C in humidified air with 5% CO2 and grown in RPMI 1640 medium (Thermo Scientific, Waltham, MA) supplemented with 10% fetal bovine serum (FBS), 100 U/mL penicillin, and 100 μg/mL streptomycin (Thermo Scientific). Cell cultures were routinely tested for mycoplasma contamination.
Reagents and antibodies
BIX01294 (8006) was purchased from Selleckchem (Houston, TX). UNC0642 (14604) was purchased from Cayman Chemical Company (Ann Arbor, MI), and JIB04 (4972) was obtained from Tocris Bioscience (Bristol, United Kingdom). Antibodies against AKT (9272), p-AKT (4060), PARP (9542), cleaved PARP (9541), caspase-3 (9662), cleaved caspase-3 (9661), ERK (9102), and p-JNK (4668) were purchased from Cell Signaling Technology (Beverly, MA).
Antibodies against β-actin (sc-47778), EGFR (sc-03), p-EGFR (sc-12351), p-ERK (sc-7383), BCKDHA (sc-271538), and JNK (sc-7345) were obtained from Santa Cruz Biotechnology (Dallas, TX). The antibody against KDM3A (12835-1-AP) was purchased from Proteintech (Rosemont, IL), and that against EHMT2 (07-551) was obtained from Sigma–Aldrich (St. Louis, MO).
Lentivirus-mediated shRNA delivery
The RNA Interference (RNAi) Consortium clone IDs for the shRNAs used in this study were as follows: shEHMT2-1 (TRCN0000115667), shEHMT2-2 (TRCN0000115669), shBCKDHA-1 (TRCN0000028398), shBCKDHA-2 (TRCN0000028456), shKDM3A-1 (TRCN0000329990), and shKDM3A-2 (TRCN0000329992).
Quantitative real-time PCR
Total RNA was isolated using TRIzol reagent (QIAGEN, Hilden, Germany), and cDNA was synthesized using 2 µg aliquots of these RNA preparations and MMLV HP reverse transcriptase (Epicentre, Madison, WI). Quantitative real-time PCR was performed with SYBR Green dye using an AriaMx Real-Time PCR system (Agilent Technologies, Santa Clara, CA). The relative amounts of cDNA were determined with the comparative Ct method using 18S ribosomal RNA sequences as a control. The primer sequences for BCKDHA amplification were as follows: forward, GATTTGGAATCGGAATTGCGG; reverse, CAGAGCGATAGCGATACTTGG.
Metabolomic analysis
Targeted liquid chromatography–tandem mass spectrometry (LC–MS/MS) metabolomic analysis was performed as previously described . Briefly, cells were grown to ~60% confluence in growth medium in 10 cm dishes. After 24 h, cells were washed several times with phosphate-buffered saline (PBS) and water, harvested using 1.4 mL of cold methanol/H2O (80/20, v/v), and lysed by vigorous vortexing. A 100 μL aliquot of a 5 μM internal standard was then added. Metabolites were recovered by liquid–liquid extraction from the aqueous phase after the addition of chloroform.
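The comparative Ct quantification described above reduces to the 2^(−ΔΔCt) rule: the target gene is first normalized to the reference (here 18S rRNA), then expressed relative to the control sample. A minimal Python sketch; the function name and Ct values are illustrative, not drawn from the study's data.

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Relative quantification by the comparative Ct (2^-ddCt) method.

    The target gene (e.g. BCKDHA) is normalized to a reference gene
    (e.g. 18S rRNA), and the treated sample is then expressed relative
    to the untreated control.
    """
    d_ct_treated = ct_target_treated - ct_ref_treated   # dCt of treated sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt of control sample
    dd_ct = d_ct_treated - d_ct_control                 # ddCt
    return 2 ** (-dd_ct)

# Illustrative example: the target Ct rises by 2 cycles after treatment
# while the reference is unchanged -> ~4-fold lower expression.
fold = relative_expression(26.0, 10.0, 24.0, 10.0)
print(fold)  # 0.25
```

Because Ct is logarithmic (one cycle ≈ a doubling), each unit of ΔΔCt corresponds to a two-fold change in relative expression.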
The aqueous phase was dried using vacuum centrifugation, and the sample was reconstituted with 50 μL of 50% methanol prior to LC–MS/MS analysis. The LC–MS/MS system was equipped with an Agilent 1290 HPLC instrument (Agilent Technologies, Santa Clara, CA), a QTRAP 5500 mass spectrometer (AB Sciex, Concord, Ontario, Canada), and a reversed-phase column (Synergi Fusion-RP 50 × 2 mm).
ECAR and OCR measurements
The extracellular acidification rate (ECAR) and oxygen consumption rate (OCR) were measured with an XF24 extracellular flux analyzer (Seahorse Bioscience, North Billerica, MA). Briefly, cells were plated in a 24-well Seahorse plate and cultured at 37 °C in 5% CO2. The medium was replaced the following day with unbuffered DMEM, and the cells were incubated at 37 °C without CO2 for 1 h. For OCR measurements, oligomycin, FCCP, and rotenone were added to final concentrations of 0.5 μg/mL, 1 μM, and 1 μM, respectively. For ECAR measurements, glucose, oligomycin, and 2-DG were added to final concentrations of 10 mM, 1 μM, and 20 mM, respectively.
Apoptosis quantitation
Apoptotic cell death was detected using an Annexin-V/FITC assay. Briefly, cells were harvested by trypsinization, washed with PBS, and resuspended in annexin-V binding buffer (10 mM HEPES (pH 7.4), 140 mM NaCl, 2.5 mM CaCl2) containing annexin-V FITC and propidium iodide. The stained cells were then quantified and analyzed in a flow cytometer (Beckman Coulter, Brea, CA).
Quantitation of intracellular ATP
Intracellular ATP concentrations were measured using an ATP Colorimetric/Fluorometric Assay Kit (BioVision Incorporated, Milpitas, CA) in accordance with the manufacturer’s instructions. Briefly, cells were lysed in 100 μL of ATP assay buffer, and 50 μL aliquots of the supernatants were collected and added to a 96-well plate. We then added 50 μL of ATP assay buffer containing the ATP probe, ATP converter, and developer to each well. The absorbance was then measured at 570 nm.
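From the OCR trace produced by the sequential oligomycin/FCCP/rotenone injections described above, the standard respiratory parameters are derived as differences between plateau OCR values. A minimal Python sketch with illustrative plateau values, not measurements from this study.

```python
def respiratory_parameters(basal, post_oligomycin, post_fccp, post_rotenone):
    """Derive standard mitochondrial parameters from a Seahorse OCR trace.

    basal            OCR before any injection
    post_oligomycin  OCR after ATP-synthase inhibition (oligomycin)
    post_fccp        OCR after uncoupling (FCCP)
    post_rotenone    OCR after complex-I inhibition (non-mitochondrial OCR)
    """
    return {
        "atp_linked": basal - post_oligomycin,       # respiration driving ATP synthesis
        "proton_leak": post_oligomycin - post_rotenone,
        "maximal": post_fccp - post_rotenone,        # maximal respiration
        "spare_capacity": post_fccp - basal,
        "non_mitochondrial": post_rotenone,
    }

# Illustrative OCR plateaus (pmol O2/min), typically averaged over the
# measurement cycles within each injection phase:
params = respiratory_parameters(basal=100, post_oligomycin=40,
                                post_fccp=160, post_rotenone=10)
print(params["atp_linked"], params["maximal"])  # 60 150
```

The "atp_linked" term is the quantity most relevant here, since the study ties BIX-induced loss of mitochondrial ATP production to the drop in the EGFR level.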
Xenograft experiments
Female severe combined immunodeficiency (SCID) mice were purchased from Joong Ah Bio. All experimental procedures were approved by the Institutional Animal Care and Use Committee of the Asan Institute for Life Sciences (protocols 2021-02-041 and 2021-02-021). A total of 1 × 10^6 cells suspended in 100 μL of Hank’s buffered saline solution were injected subcutaneously into the lower flank of each mouse (five mice per group). Beginning when the tumor volume reached 50–100 mm^3, the mice in each group were treated with BIX01294 (5 mg/kg or 10 mg/kg, intraperitoneally, 2 days a week) for 4 weeks. The length (L) and width (W) of each tumor were measured using calipers, and the tumor volume (TV) was calculated as TV = (L × W^2)/2.
Immunohistochemistry
Xenograft tumors were excised from the animals and washed with PBS. Tumor sections (4 µm) fixed with 4% paraformaldehyde (PFA) and embedded in paraffin were deparaffinized, rehydrated, and subjected to antigen retrieval. An antibody against total EGFR (sc-373746) was obtained from Santa Cruz Biotechnology (Dallas, TX). Slides were incubated with the primary antibody, washed with PBST (0.05% Tween 20 in PBS), and incubated with a biotinylated secondary antibody (Dako, Glostrup, Denmark). Slides were developed using a DAB detection kit (DAB; Dako) and counterstained with hematoxylin.
Chromatin immunoprecipitation (ChIP) assay
Chromatin immunoprecipitation was performed using a Magna ChIP G Kit (Merck Millipore, Burlington, MA) in accordance with the manufacturer’s directions. Briefly, cells (1 × 10^7 cells) were treated with 1% formaldehyde and sonicated. Soluble chromatin extracts containing DNA fragments were immunoprecipitated using an anti-H3K9me2 antibody (ab1220, Abcam), an anti-KDM3A antibody (12835-1-AP, Proteintech), or normal rabbit/mouse IgG as a negative control (sc-2027, sc-2025; Santa Cruz). The immunoprecipitated DNA was then analyzed by real-time PCR.
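The caliper-based tumor volume formula used in the xenograft experiments, TV = (L × W^2)/2, is the standard ellipsoid approximation. A minimal Python sketch; the measurements are illustrative, not data from this study.

```python
def tumor_volume(length_mm, width_mm):
    """Ellipsoid approximation for caliper measurements: TV = (L * W^2) / 2.

    By convention L is the longer axis, so the arguments are swapped
    if supplied in the opposite order.
    """
    if width_mm > length_mm:
        length_mm, width_mm = width_mm, length_mm
    return (length_mm * width_mm ** 2) / 2

# An illustrative 10 mm x 8 mm tumor:
print(tumor_volume(10.0, 8.0))  # 320.0 (mm^3)
```

Because the width enters squared, the formula is sensitive to which axis is taken as W, which is why the shorter axis is used for it.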
Primers specific for BCKDHA were used, as follows: sense, 5′-CGTCAGGCACATAAAGAGGC-3′; antisense, 5′-CACAGATCTAGCCAGTCCCC-3′.
Statistical analysis
Data are presented as the mean ± standard deviation. All comparisons were performed using an unpaired Student’s t test.
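The unpaired Student's t test used for the comparisons above can be sketched in pure Python (pooled-variance, equal-variance form). The sample values are illustrative, not data from this study; in practice the p-value would be obtained from a t distribution with the returned degrees of freedom.

```python
from statistics import mean, variance

def unpaired_t(sample_a, sample_b):
    """Two-sample Student's t statistic with pooled variance.

    Returns the t statistic and the degrees of freedom (na + nb - 2).
    statistics.variance is the sample (n-1) variance, as required here.
    """
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (mean(sample_a) - mean(sample_b)) / (pooled * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2

# Illustrative groups (e.g. tumor volumes in mm^3, n = 5 per group):
t_stat, df = unpaired_t([120, 150, 130, 140, 135], [90, 100, 95, 85, 105])
print(round(t_stat, 2), df)  # 6.53 8
```

A large |t| at the given degrees of freedom corresponds to a small p-value under the null hypothesis of equal group means.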
BIX induces apoptosis only in EGFR-mutant NSCLC cells via a reduction in the EGFR level
To evaluate the antitumor effects of BIX on NSCLC cell survival, we first examined its effects on cell death in these cells. As shown in Fig. , BIX had no significant apoptotic effect on EGFR-WT NSCLC cells (H460 and A549), whereas it caused a dramatic increase in apoptotic cell death in EGFR-mutant NSCLC cells (PC-9 and H1975).
Consistent with these results, treatment with BIX activated effector caspase-3 and poly (ADP-ribose) polymerase (PARP) in EGFR-mutant NSCLC cells but not in EGFR-WT NSCLC cells (Fig. ). Furthermore, apoptotic cell death induced by BIX was completely inhibited by treatment with benzyloxycarbonyl-Val-Ala-Asp-(OMe) fluoromethyl ketone (zVAD-fmk; Fig. ), indicating that BIX induces apoptotic cell death in EGFR-mutant NSCLC cells in a caspase-dependent manner. EGFR-mutant NSCLCs, which depend on EGFR for survival, rely more strongly on EGFR signaling than do their WT counterparts – . Given our finding that BIX led to robust apoptotic cell death only in EGFR-mutant NSCLCs, we speculated that it may also have an inhibitory effect on EGFR signaling. Interestingly, treatment with BIX did not significantly affect EGFR signaling in EGFR-WT NSCLCs, whereas it markedly decreased the EGFR level and inhibited the phosphorylation of the EGFR signaling components AKT and ERK in EGFR-mutant NSCLC cells (Fig. ). To further confirm the antitumor effects of BIX on the survival of EGFR-mutant NSCLC cells, we assessed the ability of these cells to grow in vivo as xenografts. As shown in Fig. , BIX treatment significantly diminished the growth of EGFR-mutant NSCLC cells (PC-9) compared with EGFR-WT NSCLC cells (H460) in vivo. In addition, the size of the xenograft tumors was significantly reduced only in mice implanted with EGFR-mutant NSCLC cells following BIX treatment (Fig. ). Immunohistochemical (IHC) analysis showed that BIX treatment markedly reduced the expression of EGFR in EGFR-mutant NSCLC tumors compared with EGFR-WT NSCLC tumors (Fig. ). Taken together, these data demonstrated that BIX triggers apoptotic cell death in EGFR-mutant NSCLC cells by inhibiting EGFR signaling.
Normal mitochondrial function is required for maintaining the EGFR level
We previously demonstrated that inhibition of mitochondrial ATP production robustly decreases the EGFR level and subsequently induces apoptotic cell death, indicating that mitochondrial ATP production is critical for maintaining the EGFR level . In our current experiments, we thus tested whether BIX affects mitochondrial function. As shown in Fig. , the ATP level was significantly reduced in the presence of BIX. Oxygen consumption was also found to be markedly decreased upon BIX treatment (Fig. ). We next investigated the impact of BIX on mitochondrial metabolism using LC–MS/MS metabolomics analysis to explore its direct effects on mitochondrial ATP production. BIX treatment led to significant reductions in the levels of tricarboxylic acid (TCA) cycle intermediates (Fig. ). To test whether the BIX-mediated reduction in the EGFR level was a result of insufficient mitochondrial ATP production, we attempted to reverse this reduction with α-ketoglutarate supplementation and indeed observed significant restoration of the EGFR level, which was decreased following BIX exposure (Fig. ). Hence, our results suggested that BIX inhibits mitochondrial metabolism, which in turn suppresses EGFR signaling.
BIX inhibits the supply of a carbon source for the TCA cycle through suppression of BCAA metabolism
Considering the findings of a previous study indicating that enhanced glycolysis is indispensable for maintaining the EGFR level in EGFR-mutant NSCLC cells by fueling the TCA cycle , we next tested whether BIX inhibits glucose metabolism. As shown in Fig. , the glycolytic metabolite levels were unchanged upon BIX treatment. Consistent with this result, BIX had no significant effect on the extracellular acidification rate (Fig. ), suggesting that it has no impact on glucose metabolism.
Given that BCAAs fuel the TCA cycle through the transfer of their respective amino groups , we next examined the ATP level upon knockdown of branched-chain α-keto acid dehydrogenase E1 subunit alpha (BCKDHA), the enzymatic catalyst for the second step of the BCAA catabolic pathway, to investigate the functional role of BCKDHA in TCA metabolism. We found that BCKDHA knockdown led to a significant decrease in the ATP level (Fig. ). The oxygen consumption rate also decreased upon BCKDHA knockdown (Fig. ). In addition, the levels of TCA cycle intermediates were significantly changed upon BCKDHA knockdown (Fig. ), suggesting that BCAA metabolism is an important source of carbon for the TCA cycle. We further found that BIX reduced BCKDHA mRNA and protein levels only in EGFR-mutant NSCLC cells, indicating that this small-molecule inhibitor affects BCAA metabolism in these NSCLC cells (Fig. ). In addition, BCKDHA knockdown dramatically decreased the EGFR level and inhibited AKT and ERK phosphorylation (Fig. ). Furthermore, the EGFR level was significantly increased in a dose-dependent manner upon treatment with (S)-CPP, an inhibitor of the branched-chain α-ketoacid dehydrogenase complex (BCKDC) kinase that blocks BCKDH activity (Fig. ). These data collectively demonstrated that BCAA metabolism, as a source of fuel for the TCA cycle, is critical for maintaining the EGFR level.
BIX decreases the BCKDHA level by inhibiting Jumonji histone demethylase activity but not G9a activity
We next explored the mechanisms by which BIX regulates the BCKDHA level. We first tested whether another selective inhibitor of the G9a histone methyltransferase, UNC0642, could reduce the BCKDHA level, because BIX is a well-known G9a inhibitor. Unexpectedly, both the BCKDHA and EGFR levels were unchanged by UNC0642 treatment (Fig. ). Consistent with this result, knockdown of the G9a histone methyltransferase, i.e., EHMT2, had no effect on the level of either BCKDHA or EGFR (Fig.
), indicating that BIX does not downregulate BCKDHA through its G9a histone methyltransferase inhibitory function. It has been reported that BIX selectively inhibits a family of histone H3 lysine 9 Jumonji demethylases , and we thus tested whether JIB04, a pan-selective Jumonji histone demethylase inhibitor, downregulates BCKDHA. As shown in Fig. , the BCKDHA mRNA level was significantly reduced in a dose-dependent manner upon JIB04 treatment only in EGFR-mutant NSCLC cells. Consistent with this result, JIB04 treatment reduced both the BCKDHA and EGFR protein levels in these NSCLC cells but did not reduce the protein level of BCKDHA in EGFR-WT NSCLC cells (Fig. ). Furthermore, knockdown of one of the Jumonji demethylases, KDM3A (lysine demethylase 3A), led to robust decreases in both the BCKDHA and EGFR levels (Fig. ). Given that JIB04 but not UNC0642 reduced both the BCKDHA and EGFR levels, we speculated that only JIB04 induces apoptotic cell death in EGFR-mutant NSCLC cells. Indeed, treatment with UNC0642 had no significant apoptotic effect on EGFR-mutant NSCLC cells, whereas JIB04 treatment resulted in a dramatic increase in apoptotic cell death in EGFR-mutant NSCLC cells (Fig. ). Moreover, PARP and caspase-3 were cleaved only upon JIB04 treatment (Fig. ). Taken together, these data demonstrated that BIX triggers apoptotic cell death by inhibiting Jumonji demethylase activity. For further confirmation of the KDM3A-mediated reduction in BCKDHA expression, we first compared the expression level of KDM3A between EGFR-mutant and EGFR-WT NSCLC cells. Compared with EGFR-WT NSCLC cells, EGFR-mutant NSCLC cells exhibited increased expression of KDM3A (Fig. ). We next performed a chromatin immunoprecipitation (ChIP) assay to assess histone H3 modification in the BCKDHA promoter and the binding of KDM3A to the BCKDHA gene promoter. As shown in Fig. , H3K9me2 was elevated upon BIX treatment, and BIX treatment inhibited KDM3A binding to the BCKDHA promoter.
Therefore, these results indicated that the BIX-mediated BCKDHA reduction results from its inhibition of KDM3A.
BIX overcomes acquired resistance to EGFR-TKIs
We previously reported that EGFR knockdown had no significant effect on the proliferation of cells with MET- or AXL-mediated EGFR-TKI resistance (HCC827/GR and HCC827/ER cells), whereas it markedly inhibited the proliferation of cells with T790M-mediated EGFR-TKI resistance (PC-9/GR and PC-9/ER cells) . Because BIX reduces the EGFR level and inhibits EGFR signaling, we suspected that it may affect PC-9/GR and PC-9/ER cell survival. As shown in Fig. , BIX treatment did not induce apoptotic cell death in HCC827/GR and HCC827/ER cells but did so in PC-9/GR and PC-9/ER cells. Consistent with these results, PARP and caspase-3 were cleaved only in PC-9/GR and PC-9/ER cells upon BIX treatment (Fig. ). We next tested whether the cell death caused by BIX was caspase-dependent. Treatment with zVAD-fmk significantly inhibited the apoptotic response (Fig. ) and the cleavage of PARP and caspase-3 (Fig. ) induced by BIX. These data thus demonstrated that BIX treatment can overcome acquired resistance to EGFR-TKIs.
BIX induces apoptosis in EGFR-TKI-resistant NSCLC cells by inhibiting epigenetic regulation of BCKDHA expression
We next investigated the mechanisms by which BIX overcomes acquired resistance to EGFR-TKIs. Consistent with our initial observation (Fig. ), treatment with BIX markedly decreased the EGFR level in a dose-dependent manner and inhibited the phosphorylation of the EGFR signaling components AKT and ERK in PC-9/GR and PC-9/ER cells (Fig. ). We then explored the involvement of BCKDHA in BIX-mediated apoptotic cell death in PC-9/GR and PC-9/ER cells. Treatment with BIX reduced the BCKDHA transcript level in a dose-dependent manner in PC-9/GR and PC-9/ER cells (Fig. ). BIX treatment also led to dose-dependent decreases in the protein levels of BCKDHA and EGFR (Fig. ).
Furthermore, and consistent with our prior observations (Fig. c, d), we found that inhibition of Jumonji histone demethylase activity with JIB04 resulted in a decrease in the mRNA level of BCKDHA (Fig. ) and reduced the protein levels of BCKDHA and EGFR in a dose-dependent manner in PC-9/GR and PC-9/ER cells (Fig. ). Hence, it appears that BIX overcomes acquired resistance to EGFR-TKIs via a Jumonji demethylase-mediated reduction in the EGFR level. Collectively, the findings of our study showed that BIX induces apoptotic cell death in EGFR-mutant NSCLC cells but not in their wild-type counterparts. We also found that BIX overcomes acquired resistance to EGFR-TKIs. BIX reduced the BCKDHA level by inhibiting Jumonji histone demethylase activity, leading to a reduction in the supply of a carbon source for the TCA cycle through suppression of BCAA metabolism. Inhibition of BCAA-derived mitochondrial ATP production led to a decrease in the EGFR level, which in turn induced apoptosis. Thus, BIX or Jumonji histone demethylase-mediated regulation of BCAA metabolism may provide effective future strategies for EGFR-mutant NSCLC therapy and for overcoming EGFR-TKI resistance in NSCLC patients (Fig. ).
EGFR-mutant NSCLCs, which depend on EGFR for survival, rely more strongly on EGFR signaling than do their WT counterparts – . Given our finding that BIX led to robust apoptotic cell death only in EGFR-mutant NSCLCs, we speculated that it may also have an inhibitory effect on EGFR signaling. Interestingly, treatment with BIX did not significantly affect EGFR signaling in EGFR-WT NSCLCs, whereas it markedly decreased the EGFR level and inhibited the phosphorylation of the EGFR signaling components AKT and ERK in EGFR-mutant NSCLC cells (Fig. ). To further confirm the antitumor effects of BIX on the survival of EGFR-mutant NSCLC cells, we assessed the ability of these cells to grow in vivo as xenografts. As shown in Fig. , BIX treatment significantly diminished the growth of EGFR-mutant NSCLC cells (PC9) compared with EGFR-WT NSCLC cells (H460) in vivo. In addition, the size of the xenograft tumors was significantly reduced only in mice implanted with EGFR-mutant NSCLC cells following BIX treatment (Fig. ). Immunohistochemical (IHC) analysis showed that BIX treatment obviously reduced the expression of EGFR in EGFR-mutant NSCLC tumors compared with EGFR-WT NSCLC tumors (Fig. ). Taken together, these data demonstrated that BIX triggers apoptotic cell death in EGFR-mutant NSCLC cells by inhibiting EGFR signaling. We previously demonstrated that inhibition of mitochondrial ATP production robustly decreases the EGFR level and subsequently induces apoptotic cell death, indicating that mitochondrial ATP production is critical for maintaining the EGFR level . In our current experiments, we thus tested whether BIX affects mitochondrial function. As shown in Fig. , the ATP level was significantly reduced in the presence of BIX. Oxygen consumption was also found to be markedly decreased upon BIX treatment (Fig. ). We next investigated the impact of BIX on mitochondrial metabolism using LC–MS/MS metabolomics analysis to explore its direct effects on mitochondrial ATP production. 
BIX treatment led to significant reductions in the levels of the tricarboxylic acid cycle (TCA) cycle intermediates (Fig. ). To test whether the BIX-mediated reduction in the EGFR level as a result of insufficient maintenance of mitochondrial ATP production, we attempted to reverse this reduction with α-ketoglutarate supplementation and indeed observed significant restoration of the EGFR level, which was decreased following BIX exposure (Fig. ). Hence, our results suggested that BIX inhibits mitochondrial metabolism, which in turn suppresses EGFR signaling. Considering the findings of a previous study indicating that enhanced glycolysis is indispensable for maintaining the EGFR level in EGFR-mutant NSCLC cells by fueling the TCA cycle , we next tested whether BIX inhibits glucose metabolism. As shown in Fig. , the glycolytic metabolite levels were unchanged upon BIX treatment. Consistent with this result, BIX had no significant effect on the extracellular acidification rate (Fig. ), suggesting that it has no impact on glucose metabolism. Given that BCAAs fuel the TCA cycle through the transfer of respective amino groups , we next examined the ATP level upon the knockdown of branched-chain α-keto acid dehydrogenase a (BCKDHA), the enzymatic catalyst for the second step of the BCAA catabolic pathway. This was done to investigate the functional role of BCKDHA in TCA metabolism. We found that BCKDHA knockdown led to a significant decrease in the ATP level (Fig. ). The oxygen consumption rate also decreased upon BCKDHA knockdown (Fig. ). In addition, the levels of TCA cycle intermediates were significantly changed upon BCKDHA knockdown (Fig. ), suggesting that BCAA metabolism is an important source of carbon for the TCA cycle. We further found that BIX reduced BCKDHA mRNA and protein levels only in EGFR-mutant NSCLC cells, indicating that this small molecule inhibitor affects BCAA metabolism in these NSCLC cells (Fig. ). 
In addition, BCKDHA knockdown dramatically decreased the EGFR level and inhibited AKT and ERK phosphorylation (Fig. ). Furthermore, the EGFR level was significantly increased in a dose-dependent manner upon treatment with (S)-CPP, an inhibitor of the branched-chain α-ketoacid dehydrogenase complex (BCKDC) kinase; because this kinase normally phosphorylates and inactivates BCKDH, its inhibition enhances BCKDH activity (Fig. ). These data collectively demonstrated that BCAA metabolism, as a source of fuel for the TCA cycle, is critical for maintaining the EGFR level. We next explored the mechanisms by which BIX regulates the BCKDHA level. Because BIX is a well-known G9a inhibitor, we first tested whether another selective inhibitor of the G9a histone methyltransferase, UNC0624, can reduce the BCKDHA level. Unexpectedly, both the BCKDHA and EGFR levels were unchanged by UNC0624 treatment (Fig. ). Consistent with this result, knockdown of the G9a histone methyltransferase, i.e., EHMT2, had no effect on the level of either BCKDHA or EGFR (Fig. ), indicating that BIX does not downregulate BCKDHA through its G9a histone methyltransferase inhibitory function. It has been reported that BIX selectively inhibits a family of histone H3 lysine 9 Jumonji demethylases , and we thus tested whether JIB04, a pan-selective Jumonji histone demethylase inhibitor, downregulates BCKDHA. As shown in Fig. , the BCKDHA mRNA level was significantly reduced in a dose-dependent manner upon JIB04 treatment only in EGFR-mutant NSCLC cells. Consistent with this result, JIB04 treatment reduced both the BCKDHA and EGFR protein levels in these NSCLC cells but did not reduce the protein level of BCKDHA in EGFR-WT NSCLC cells (Fig. ). Furthermore, knockdown of one of the Jumonji demethylases, KDM3A (lysine demethylase 3A), led to robust decreases in both the BCKDHA and EGFR levels (Fig. ). Given that JIB04 but not UNC0624 reduced both the BCKDHA and EGFR levels, we speculated that only JIB04 induces apoptotic cell death in EGFR-mutant NSCLC cells. 
Indeed, treatment with UNC0624 had no significant apoptotic effect on EGFR-mutant NSCLC cells, whereas JIB04 treatment resulted in a dramatic increase in apoptotic cell death in EGFR-mutant NSCLC cells (Fig. ). Moreover, PARP and caspase-3 were cleaved only upon JIB04 treatment (Fig. ). Taken together, these data demonstrated that BIX triggers apoptotic cell death by inhibiting Jumonji demethylase activity. To further confirm that the reduction in BCKDHA expression is mediated by KDM3A, we first compared the expression level of KDM3A between EGFR-mutant and EGFR-WT NSCLC cells. Compared with EGFR-WT NSCLC cells, EGFR-mutant NSCLC cells exhibited increased expression of KDM3A (Fig. ). We next performed a chromatin immunoprecipitation (ChIP) assay to assess histone H3 modification at the BCKDHA promoter and the binding of KDM3A to this promoter. As shown in Fig. , H3K9me2 at the BCKDHA promoter was elevated upon BIX treatment, which also inhibited KDM3A binding to the promoter. Therefore, these results indicated that the BIX-mediated BCKDHA reduction results from its inhibition of KDM3A. We previously reported that EGFR knockdown had no significant effect on the proliferation of cells with MET- or AXL-mediated EGFR-TKI resistance (HCC827/GR and HCC827/ER cells), whereas it markedly inhibited the proliferation of cells with T790M-mediated EGFR-TKI resistance (PC-9/GR and PC-9/ER cells) . Because BIX reduces the EGFR level and inhibits EGFR signaling, we suspected that it may affect PC-9/GR and PC-9/ER cell survival. As shown in Fig. , BIX treatment did not induce apoptotic cell death in HCC827/GR and HCC827/ER cells but did so in PC-9/GR and PC-9/ER cells. Consistent with these results, PARP and caspase-3 were cleaved only in PC-9/GR and PC-9/ER cells upon BIX treatment (Fig. ). We next tested whether the cell death caused by BIX was caspase-dependent. Treatment with zVAD-fmk significantly inhibited the apoptotic response (Fig. 
) and the cleavage of PARP and caspase-3 (Fig. ) induced by BIX. These data thus demonstrated that BIX treatment can overcome acquired resistance to EGFR-TKIs. We next investigated the mechanisms by which BIX overcomes acquired resistance to EGFR-TKIs. Consistent with our initial observation (Fig. ), treatment with BIX markedly decreased the EGFR level in a dose-dependent manner and inhibited the phosphorylation of the EGFR signaling components AKT and ERK in PC9/GR and PC9/ER cells (Fig. ). We then explored the involvement of BCKDHA in BIX-mediated apoptotic cell death in PC-9/GR and PC-9/ER cells. Treatment with BIX reduced the BCKDHA transcript level in a dose-dependent manner in PC9/GR and PC9/ER cells (Fig. ). BIX treatment also led to dose-dependent decreases in the protein levels of BCKDHA and EGFR (Fig. ). Furthermore, and consistent with our prior observations (Fig. c, d), we found that inhibition of Jumonji histone demethylase activity with JIB04 resulted in a decrease in the mRNA level of BCKDHA (Fig. ) and reduced the protein levels of BCKDHA and EGFR in a dose-dependent manner in PC9/GR and PC9/ER cells (Fig. ). Hence, it appears that BIX overcomes acquired resistance to EGFR-TKIs via a Jumonji demethylase-mediated reduction in the EGFR level. Collectively, the findings of our study showed that BIX induces apoptotic cell death in EGFR-mutant NSCLC cells but not in their wild-type counterparts. We also found that BIX overcomes acquired resistance to EGFR-TKIs. BIX reduced the BCKDHA level by inhibiting Jumonji histone demethylase activity, leading to a reduction in the supply of a carbon source for the TCA cycle through suppression of BCAA metabolism. Inhibition of BCAA-derived mitochondrial ATP production led to a decrease in the EGFR level, which in turn induced apoptosis. 
Thus, BIX or Jumonji histone demethylase-mediated regulation of BCAA metabolism may provide effective future strategies for EGFR-mutant NSCLC therapy and for overcoming EGFR-TKI resistance in NSCLC patients (Fig. ). BIX01294 has been reported to exert antitumor effects against a variety of cancers – , including lung cancer , , , but the precise mechanisms of these effects, particularly in NSCLCs, remain unclear. In our present study, we proposed a unique mechanism by which BIX01294 exerts antitumor effects, particularly on EGFR-mutant NSCLCs, by blocking BCAA metabolism-mediated maintenance of the EGFR level through inhibition of the activity of Jumonji histone demethylases, particularly KDM3A. BIX01294 is a specific inhibitor of the G9a histone methyltransferase , which plays important roles in DNA replication, damage repair, and gene expression by regulating DNA methylation . Moreover, given that G9a has been shown to be overexpressed in many tumor cells and is associated with the occurrence and development of tumors, it has become a promising antitumor target , and many small molecule inhibitors of G9a have been developed for evaluation as cancer therapeutics. BIX01294 was also developed as a small molecule G9a inhibitor, and most studies of this agent have reported that its antitumor effects are mediated through this function. However, BIX01294 has also been shown to selectively inhibit Jumonji histone demethylase activity . Importantly, we found in our present study that the BIX01294-mediated reduction in the BCKDHA level is due to its inhibition of Jumonji histone demethylases and not its suppression of G9a (Fig. ). Similar to G9a, a number of Jumonji C family members have been found to be overexpressed in many types of cancer . Of note, the Jumonji histone demethylase KDM3A is highly expressed in breast, prostate, colon, kidney, and liver cancers . KDM3A is also overexpressed in NSCLC and is essential for NSCLC growth . 
In addition, a recent study reported that KDM3A regulates the expression of EGFR through Kruppel-like factor 5 and SMAD family member 4 . Consistent with these earlier data, we observed here that both JIB04 treatment and KDM3A knockdown led to a significant reduction in the EGFR level and to inhibition of EGFR signaling. Thus, our data and those from other studies suggest that BIX01294-mediated KDM3A inhibition may serve as an attractive therapeutic strategy for EGFR-mutant NSCLCs. We previously reported that EGFR mutation-mediated upregulation of glycolysis is required for maintaining the EGFR level by fueling the TCA cycle and that inhibition of glucose-derived mitochondrial ATP production leads to a significant decrease in the EGFR level, which in turn results in the activation of apoptosis, suggesting that the maintenance of proper mitochondrial function is critical for the survival of EGFR-mutant NSCLCs by sustaining EGFR stability . We also observed that BIX01294 treatment impairs mitochondrial metabolism (Fig. ). Based on our previous study, we tested here whether the inhibition of mitochondrial metabolism by BIX01294 is due to blockade of glucose metabolism, but we found that BIX01294 has no significant effect on glucose metabolism (Fig. ). Cancer cells utilize a variety of carbon sources, including glucose, glutamine, other amino acids, and fatty acids, to replenish the TCA cycle . Indeed, NSCLC tumors display enhanced uptake of BCAAs, which provide substrates for the TCA cycle, and inhibition of BCAA catabolism significantly suppresses NSCLC tumor growth . In our current study, we observed that the BIX01294-mediated reduction in BCKDHA expression impaired mitochondrial metabolism and resulted in a significant decrease in the EGFR level, which in turn induced apoptotic cell death (Figs. and ). This indicates that BIX01294 causes downregulation of EGFR by decreasing the BCAA-mediated fuel source for the TCA cycle. 
In addition, we found in our present study that BIX01294 markedly decreased the EGFR level in EGFR-mutant NSCLCs but not in EGFR-WT NSCLCs. We believe that this selectivity arises because EGFR-mutant NSCLCs have higher basal levels of KDM3A than EGFR-WT NSCLCs (Fig. ). For this reason, we think that BIX01294-mediated KDM3A inhibition leads to a significant decrease in both the BCKDHA and EGFR levels only in EGFR-mutant NSCLCs. EGFR-mutant NSCLCs, which depend on EGFR for growth and survival, rely more strongly on EGFR signaling than their wild-type counterparts , . Given the importance of EGFR signaling for cell growth and survival, EGFR-TKIs have been developed and proven to be effective in patients diagnosed with EGFR-mutant NSCLC. However, acquired EGFR-TKI resistance is typical in these cases . Although second- and third-generation EGFR-TKIs have been developed to try to overcome this acquired drug resistance, acquired resistance to these additional agents develops in most patients. Our current study provides reliable evidence that BIX01294 treatment can overcome T790M-mediated resistance and that EGFR stability requires the maintenance of proper mitochondrial function via BCAA-mediated fueling of the TCA cycle.
The infrastructure of electrophysiology centers impacts the management of cardiac tamponade—Results from a national survey
d986e2f8-8673-45a0-aba6-3bf98d0d1e0b
10577558
Physiology[mh]
INTRODUCTION Catheter ablation has become an established treatment option for a variety of cardiac arrhythmias. Nevertheless, periprocedural complications may occur and potentially result in life‐threatening conditions. Cardiac tamponade is one of the most feared complications in interventional electrophysiology (EP) and the most common major complication during atrial fibrillation (AF) ablation. , Previous studies focused on risk factors such as age and comorbidities and on predictors such as technologies used, as well as on overall and procedure‐specific outcomes. , However, the number of ablation centers, especially low‐volume centers without onsite cardiac surgery, has increased, as has the number of more complex procedures. Thus, the management of cardiac tamponade is still not standardized, and various uncertainties remain, for example regarding epicardial puncture technique, reversal of heparin effects, the use of autotransfusion, the timing of involvement of cardiac surgery in severe cases, and subsequent monitoring and medication. This analysis sought to evaluate the institutional infrastructure and the impact of center volume and onsite cardiac surgery on the acute management of cardiac tamponade and subsequent therapy during electrophysiological procedures. The underlying survey interrogated the management of cardiac tamponade in German ablation centers via a standardized questionnaire including queries on precautions and peri‐ and postprocedural management of cardiac tamponade. METHODS The underlying physician‐based survey was conducted in May 2020 by sending out postal questionnaires to a total of 341 identified hospitals according to an official list containing all German centers performing EP procedures. The questionnaire, consisting of 46 questions, is provided in the Supporting Information. 
The postal questionnaires were sent to all identified EP centers; centers that did not answer within 3 months received up to two reminders. The results were obtained anonymously. All centers were asked to complete all questions; since the questionnaire was paper‐based, however, this could not be enforced. Ethics approval for the current anonymized survey was not necessary according to institutional ethical standards. The general results of this survey have been previously published. For this substudy, the results of the survey were analyzed for low‐volume (0–250 procedures per year), mid‐volume (250–500 procedures), and high‐volume (>500 procedures) centers, as well as for centers with and without onsite cardiac surgery, to evaluate the infrastructure of German EP centers and the impact of center volume and onsite cardiac surgery on the management of cardiac tamponade and subsequent therapy. The results are displayed as categorical data and described with absolute and relative frequencies. All calculations were performed with R version 3.6.0 (2019). RESULTS 3.1 Baseline data A total of 341 German ablation centers were identified, and 189/341 (55%) data sets from all responding centers were included in our analysis. Of 189 participating centers, 69 (36.5%) perform 0–250 EP procedures per year, 69 (36.5%) perform 250–500 procedures per year, and 51 (27%) perform >500 procedures per year. The survey revealed that in total 61/189 (32%) EP centers have onsite cardiac surgery, whereas 128/189 (68%) do not. It is worth noting that only 4/69 (6%) of the participating low‐volume centers and 22/69 (32%) of the mid‐volume centers reported having onsite cardiac surgery, while the majority of high‐volume centers (34/51, 67%) did. All centers without onsite cardiac surgery stated that they collaborate with external cardiac surgery institutions. 
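As an illustrative aside, the headline figures above are simple relative frequencies, as described in the Methods. The sketch below (in Python for brevity; the survey's own analysis used R) recomputes the reported percentages from the raw counts. The helper name `pct` is ours, not from the paper:

```python
def pct(part, total, ndigits=0):
    """Relative frequency of `part` in `total`, as a percentage
    rounded to `ndigits` decimal places (0 -> nearest integer)."""
    value = 100 * part / total
    return round(value, ndigits) if ndigits else round(value)

# Response rate: 189 of 341 contacted centers returned usable data sets.
print(pct(189, 341))    # 55

# Volume bands and onsite cardiac surgery among the 189 responders.
print(pct(69, 189, 1))  # 36.5  (low- and mid-volume bands alike)
print(pct(51, 189))     # 27    (high-volume)
print(pct(61, 189))     # 32    (onsite cardiac surgery)
print(pct(128, 189))    # 68    (no onsite cardiac surgery)
```

Every percentage in the Results can be reproduced this way from its numerator/denominator pair.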
3.2 Ablation spectrum This subanalysis revealed that, irrespective of center volume, the large majority of all participating centers perform diagnostic EP studies and ablation of supraventricular tachycardia (SVT), as well as atrial flutter and AF ablation. AF ablation is the most commonly performed procedure in low‐volume (41%), mid‐volume (41%), and high‐volume (46%) centers, as well as at centers with (42%) and without (43%) onsite cardiac surgery. However, ablation of ventricular tachycardia (VT), especially epicardial VT ablation, is mainly carried out at high‐volume centers and centers with onsite cardiac surgery. Left atrial appendage (LAA) closure is performed at 50/69 (72%) low‐volume centers, at 62/69 (90%) mid‐volume centers, and at 51/51 (100%) high‐volume centers. Also, 103/128 (81%) EP centers without onsite cardiac surgery reported performing LAA closure. Details on ablation spectra at low‐, mid‐, and high‐volume centers, as well as centers with or without onsite cardiac surgery, are given in Figure . 3.3 Infrastructure A total of 30/69 (43%) low‐volume, 53/69 (77%) mid‐volume, and 50/51 (98%) high‐volume centers have dedicated EP laboratories. Likewise, the number of centers with dedicated EP‐nursing teams rises with center volume (28% for low‐volume, 42% for mid‐volume, and 76% for high‐volume centers). Also, significantly more centers with onsite cardiac surgery (57/61, 93%) than without (76/128, 60%) reported having dedicated EP laboratories. Furthermore, 37/61 (60%) of centers with onsite cardiac surgery, but only 45/128 (35%) of participating centers without, have dedicated EP‐nursing teams. The technical infrastructure in terms of EP mapping and ablation platforms varies among low‐, mid‐, and high‐volume centers (Figure ). 
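The dedicated-laboratory comparison above (57/61 centers with vs. 76/128 centers without onsite cardiac surgery) is described as significant, but the survey reports only descriptive frequencies and does not name the test used. As a hedged illustration, and not the authors' actual analysis, a Pearson chi-square on the 2x2 table is one standard way such a difference between two groups of categorical counts could be assessed:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = 0.0
    for obs, i, j in ((a, 0, 0), (b, 0, 1), (c, 1, 0), (d, 1, 1)):
        expected = rows[i] * cols[j] / n  # expected count under independence
        stat += (obs - expected) ** 2 / expected
    return stat

# Dedicated EP lab (yes/no) by onsite cardiac surgery (with/without):
# with surgery: 57 of 61 centers; without: 76 of 128 (counts from the survey).
stat = chi2_2x2(57, 61 - 57, 76, 128 - 76)
print(stat > 3.841)  # exceeds the df = 1 critical value at alpha = 0.05
```

With these counts the statistic is roughly 23, well above the 5% critical value, consistent with the text's description of the difference as significant.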
Interestingly, radiofrequency (RF) current, with or without a three‐dimensional (3D) mapping system, is still the most commonly available and applied ablation strategy, irrespective of center volume. Centers without onsite cardiac surgery likewise do not use balloon‐based ablation tools more often than RF current with or without 3D mapping systems (Figure ). 3.4 Patient selection In most of the low‐volume (59%), mid‐volume (86%), and high‐volume (65%) centers, as well as centers with (74%) and without (69%) onsite cardiac surgery, the body mass index (BMI) does not serve as a general exclusion criterion for EP procedures. However, irrespective of center volume and onsite cardiac surgery, most institutions have BMI limits only for left atrial and/or ventricular procedures. Age as a general exclusion criterion for EP procedures was reported by only 2/69 (3%) low‐volume, 1/69 (1%) mid‐volume, and 0/51 (0%) high‐volume centers. None of the participating centers with onsite cardiac surgery and only 3/128 (2%) without onsite cardiac surgery reported age as a general exclusion criterion. Furthermore, only 17/69 (25%) of the participating low‐volume, 9/69 (13%) of all mid‐volume, and 6/51 (11%) of the high‐volume centers, as well as 25/128 (20%) of centers without and 7/61 (12%) of centers with onsite cardiac surgery, reported age limits only for left atrial and/or ventricular procedures. 3.5 Periprocedural safety A total of 30/69 (44%) low‐volume, 41/69 (59%) mid‐volume, and 38/51 (75%) high‐volume centers use fluoroscopy alone to guide transseptal puncture. In 65/128 (51%) centers without and 45/61 (74%) of centers with onsite cardiac surgery, transseptal puncture was performed without any additional imaging modality apart from fluoroscopy. 
An overview of additional imaging and monitoring modalities used for transseptal puncture by low‐, mid‐, and high‐volume EP centers, as well as at centers with or without onsite cardiac surgery, is provided in Figure . In 93/189 (49%) and 139/189 (74%) centers, AF and VT ablations, respectively, were performed under continuous invasive blood pressure measurement, with significant differences depending on center volume. In 36/69 (52%) of the participating low‐volume, 40/69 (58%) of the mid‐volume, but only 17/51 (33%) of the high‐volume EP centers, invasive blood pressure monitoring was used during AF ablation procedures. During VT ablation, only 27/69 (39%) of the participating low‐volume, but almost all mid‐volume (63/69, 91%) and high‐volume centers (49/51, 96%) perform invasive blood pressure monitoring. A majority of centers without onsite cardiac surgery perform invasive blood pressure monitoring during AF ablation (71/128, 55%), whereas EP centers with onsite cardiac surgery often refrain from doing so. During VT ablation, however, invasive blood pressure monitoring is used more frequently in centers with than in centers without onsite cardiac surgery (93% vs. 62%). The majority of participating EP centers (180/189, 95%) have echocardiography permanently available onsite in the EP laboratory. Furthermore, almost all centers (187/189, 98%) have a dedicated pericardiocentesis set prepared for emergencies in the cath lab. In these respects, there are no relevant differences to report for low‐, mid‐, and high‐volume centers or EP centers with and without onsite cardiac surgery. In general, 93/189 (49%) centers have regular training sessions with the EP team to prepare for emergency intervention in case of cardiac tamponade, although this is more likely to be the case at high‐volume centers (71%) than at mid‐volume (46%) and low‐volume centers (36%). 
Also, slightly more centers with onsite cardiac surgery (59%) have dedicated emergency training sessions with the EP team when compared to centers without onsite cardiac surgery (45%). 3.6 Acute management of cardiac tamponade Whenever pericardial tamponade occurs, 49/189 (26%) centers immediately contact an institutional resuscitation team (33% of all low‐volume, 17% of all mid‐volume, and 29% of all high‐volume centers). Another 25/189 (13%) centers always inform a cardiac surgeon in case of pericardial tamponade (10% of all low‐volume, 15% of all mid‐volume, and 16% of all high‐volume centers). Of note, twice as many centers with onsite cardiac surgery always inform a surgeon in case of pericardial tamponade when compared to those without onsite cardiac surgery (21% vs. 9%). Fluoroscopy as the primary and only imaging modality for pericardiocentesis is used in 53/189 (28%) of all participating institutions, and there are no differences to report for low‐, mid‐, and high‐volume centers or centers with and without onsite cardiac surgery. However, echocardiography is more often used for pericardiocentesis (in addition to fluoroscopy) at low‐volume centers than at mid‐ and high‐volume centers (84% vs. 63%), and at centers without than at those with onsite cardiac surgery (75% vs. 61%). When cardiac tamponade occurs, protamine is routinely administered in most low‐volume (58/69, 84%), mid‐volume (60/69, 87%), and high‐volume (45/51, 88%) EP centers, and in 56/61 (92%) of centers with and 107/128 (84%) without onsite cardiac surgery. Higher‐volume centers and centers with onsite cardiac surgery more often administer protamine once all blood is aspirated from the pericardial space, whereas low‐volume centers and centers without cardiac surgery administer protamine immediately when cardiac tamponade is diagnosed. 
Only 6/69 (9%) of all low‐volume, 6/69 (9%) of the mid‐volume, and 5/51 (10%) of the high‐volume centers routinely apply a specific antidote to novel oral anticoagulants as adjunct treatment of cardiac tamponade. In this regard, there is also no difference to report for EP centers with when compared to those without onsite cardiac surgery (10% vs. 9%). A total of 56/189 (30%) centers routinely administer clotting factors (prothrombin complex concentrate [PPSB], aPPSB, recombinant FVIIa); this is more than twice as common for low‐volume (35%) and mid‐volume (36%) as for high‐volume centers (14%). In this regard, there is no difference between centers with and without onsite cardiac surgery (26% vs. 29%). With regard to autotransfusion of aspirated blood, 9/69 (13%) of the participating low‐volume, 17/69 (25%) of mid‐volume, and 10/51 (20%) of high‐volume centers responded that they reinfuse blood only before protamine administration, while 5/69 (7%) of all low‐volume, 11/69 (16%) of the mid‐volume, and 15/51 (29%) of the high‐volume centers autotransfuse after protamine administration. However, most of the participating low‐volume centers (47/69, 68%) and almost half of mid‐volume centers (34/69, 49%) do not perform autotransfusion, while only a minority of all participating high‐volume centers (20/51, 39%) reported not performing autotransfusion at all. A majority of all participating centers without onsite cardiac surgery (74/128, 58%), but only 27/61 (44%) of centers with onsite cardiac surgery, reported not performing autotransfusion. Fifteen out of 189 (10%) centers reported other approaches. Three centers (2%) did not answer. Most of the low‐volume (45/69, 65%), mid‐volume (45/69, 65%), and high‐volume (28/51, 55%) EP centers, as well as EP centers with (33/61, 54%) and without (85/128, 66%) onsite cardiac surgery, chose surgical treatment if the bleeding did not stop after all conventional treatment options had been exhausted. 
Fifteen out of 69 (22%) of the low‐volume, 10/69 (15%) of the mid‐volume, and 9/51 (18%) of the high‐volume centers, as well as 12/61 (20%) of centers with and 22/128 (17%) centers without onsite cardiac surgery, chose surgical treatment after a certain amount of blood was aspirated and if bleeding continued. The remaining centers have a different approach, which was not further specified. 3.7 Monitoring and subsequent therapy Most of all participating centers monitor their patients in an intensive care unit (ICU) once pericardial tamponade is successfully treated, irrespective of center volume (low‐volume: 81%; mid‐volume: 80%; high‐volume: 78%) and onsite cardiac surgery (with onsite cardiac surgery: 77%; without onsite cardiac surgery: 81%). Thirteen out of 69 (19%) of the low‐volume, 26/69 (38%) of the mid‐volume, and 22/51 (43%) of the high‐volume centers, as well as 24/61 (49%) of the centers with and 36/128 (30%) of the centers without onsite cardiac surgery, stated that they routinely apply nonsteroidal anti‐inflammatory drugs (NSAIDs), colchicine, or cortisone after pericardial tamponade. Figure gives details on the center's strategies for oral anticoagulant (OAC) restart after pericardiocentesis. 
All centers without onsite cardiac surgery are stated to collaborate with external cardiac surgery institutions. Ablation spectrum This subanalysis revealed that, irrespective of center volume, the large majority of all participating centers perform diagnostic EP studies and ablation of supraventricular tachycardia (SVT), as well as atrial flutter and AF ablation. AF ablation is the most commonly performed procedure in low‐volume (41%), mid‐volume (41%), and high‐volume (46%) centers, as well as at centers with (42%) and without (43%) onsite cardiac surgery. However, ablation of ventricular tachycardia (VT), especially epicardial VT ablation, is mainly carried out at high‐volume centers and centers with onsite cardiac surgery. Left atrial appendage (LAA) closure is performed at 50/69 (72%) low‐volume centers, at 62/69 (90%) mid‐volume centers, and at 51/51 (100%) high‐volume centers. Also, 103/128 (81%) EP centers without onsite cardiac surgery reported to performing LAA closure. Details on ablation spectra at low‐, mid‐, and high‐volume centers, as well as centers with or without onsite cardiac surgery, are given in Figure . Infrastructure A total of 30/69 (43%) low‐volume, 53/69 (77%) mid‐volume, and 50/51 (98%) high‐volume centers have dedicated EP laboratories. Also, the number of centers with dedicated EP‐nursing teams rises with center volume (28% for low‐volume, 42% for mid‐volume, and 76% for high‐volume centers). Also, significantly more centers with onsite cardiac surgery (57/61, 93%) when compared to centers without onsite cardiac surgery (76/128, 60%) reported to have dedicated EP laboratories. Furthermore, 37/61 (60%) centers with onsite cardiac surgery, and only 45/128 (35%) of participating centers without onsite cardiac surgery have dedicated EP‐nursing teams. The technical infrastructure in view of EP mapping and ablation platforms varies between low, mid‐, and high‐volume centers (Figure ). 
Interestingly, radiofrequency (RF) current in combination with or without a three‐dimensional (3D) mapping system is still the most commonly available and applied ablation strategy, irrespective of center volume. Centers without onsite cardiac surgery also do not use balloon‐based ablation tools more often when compared to RF current with or without 3D mapping systems (Figure ). Patient selection In most of the low‐volume (59%), mid‐volume (86%), and high‐volume (65%) centers, as well as centers with (74%) and without (69%) onsite cardiac surgery, the body mass index (BMI) does not serve as a general exclusion criterion for EP procedures. However, irrespective of center volume and onsite cardiac surgery, most institutions have BMI limits only for left atrial and/or ventricular procedures. Age as a general exclusion criterion for EP procedures was reported by only 2/69 (3%) low‐volume, 1/69 (1%) mid‐volume, and 0/51 (0%) high‐volume centers. None of the participating centers with onsite cardiac surgery and only 3/128 (2%) without onsite cardiac surgery reported age as a general exclusion criterion. Furthermore, only 17/69 (25%) of the participating low‐volume, 9/69 (13%) of all mid‐volume, and 6/51 (11%) of the high‐volume centers, as well as 25/128 (20%) of centers without and 7/61 (12%) of centers with onsite cardiac surgery, reported age limits only for left atrial and/or ventricular procedures. Periprocedural safety A total of 30/69 (44%) low‐volume, 41/69 (59%) mid‐volume, and 38/51 (75%) high‐volume centers use fluoroscopy alone to guide transseptal puncture. In 65/128 (51%) centers without and 45/61 (74%) of centers with onsite cardiac surgery, transseptal puncture was performed without any additional imaging modality apart from fluoroscopy. An overview of additional imaging and monitoring modalities used for transseptal puncture by low‐, mid‐, and high‐volume EP centers, as well as at centers with or without onsite cardiac surgery, is provided in Figure . 
In 93/189 (49%) and 139/189 (74%) centers, AF and VT ablations were performed under continuous invasive blood pressure measurement with significant differences depending on center volume. In 36/69 (52%) of the participating low‐volume, 40/69 (58%) of the mid‐volume, but only 17/51 (33%) of the high‐volume EP centers invasive blood pressure management was demonstrated during AF ablation procedures. During VT ablation, only 27/69 (39%) of the participating low‐volume, but almost all mid‐volume (63/69, 91%) and high‐volume centers (49/51, 96%) perform invasive blood pressure monitoring. A majority of centers without onsite cardiac surgery perform invasive blood pressure management during AF ablation (71/128, 55%), whereas EP centers with onsite cardiac surgery often refrain from doing so. During VT ablation, however, invasive blood pressure monitoring is used more frequently in centers with when compared to centers without onsite cardiac surgery (93% vs. 62%). The majority of participating EP centers (180/189, 95%) have echocardiography permanently available onsite in the EP laboratory. Furthermore, almost all centers (187/189, 98%) have a dedicated pericardiocentesis set prepared for emergencies in the cath lab. In these regards, there are no relevant differences to report for low‐, mid‐, and high‐volume centers or EP centers with and without onsite cardiac surgery. In general, 93/189 (49%) centers have regular trainings with the EP team to prepare for emergency intervention in case of cardiac tamponade, although this is more likely to be the case at high‐volume centers (71%) when compared to mid‐volume (46%) and low‐volume centers (36%). Also, slightly more centers with onsite cardiac surgery (59%) have special emergency trainings with the EP team when compared to centers without onsite cardiac surgery (45%). 
Acute management of cardiac tamponade Whenever pericardial tamponade occurs, 49/189 (26%) centers immediately contact an institutional resuscitation team (33% of all low‐volume, 17% of all mid‐volume, and 29% of all high‐volume centers). Another 25/189 (13%) centers always inform a cardiac surgeon in case of pericardial tamponade (10% of all low‐volume, 15% of all mid‐volume, and 16% of all high‐volume centers). Of note, twice as many centers with onsite cardiac surgery do always inform a surgeon in case of pericardial tamponade when compared to those without onsite cardiac surgery (21% vs. 9%). Fluoroscopy as the primary and only imaging modality for pericardiocentesis is used in 53/189 (28%) of all participating institutions, and there are no differences to report for low‐, mid‐, and high‐volume centers or centers with and without onsite cardiac surgery. However, echocardiography is more often used for pericardiocentesis—in addition to fluoroscopy—at low‐volume centers when compared to mid‐ and high‐volume centers (84% vs. 63%), and at centers without when compared to those with onsite cardiac surgery (75% vs. 61%). When cardiac tamponade occurs, protamine is routinely administered in most of all low‐volume (58/69, 84%), mid‐volume (60/69, 87%), and high‐volume (45/51, 88%) EP centers, and 56/61 (92%) of centers with and 107/128 (84%) without onsite cardiac surgery. Higher volume centers and centers with onsite cardiac surgery more often administer protamine once all blood is aspirated from the pericardial space, whereas low‐volume centers and centers without cardiac surgery administer protamine immediately when cardiac tamponade is diagnosed. Only 6/69 (9%) of all low‐volume, 6/69 (9%) of the mid‐volume, and 5/51 (10%) of the high‐volume centers routinely apply a specific new oral anticoagulant antidote for adjunct treatment of cardiac tamponade. 
In this regard, there is also no difference between EP centers with and without onsite cardiac surgery (10% vs. 9%). A total of 56/189 (30%) centers routinely administer clotting factors (prothrombin complex concentrate [PPSB], aPPSB, recombinant FVIIa); this is more than twice as common at low‐volume (35%) and mid‐volume (36%) centers as at high‐volume centers (14%). In this regard, there is no difference between centers with and without onsite cardiac surgery (26% vs. 29%). With regard to autotransfusion of aspirated blood, 9/69 (13%) of the participating low‐volume, 17/69 (25%) of mid‐volume, and 10/51 (20%) of high‐volume centers responded that they reinfuse blood only before protamine administration, while 5/69 (7%) of all low‐volume, 11/69 (16%) of the mid‐volume, and 15/51 (29%) of the high‐volume centers autotransfuse after protamine administration. However, most of the participating low‐volume centers (47/69, 68%) and almost half of the mid‐volume centers (34/69, 49%) do not perform autotransfusion, whereas only a minority of the participating high‐volume centers (20/51, 39%) reported not performing autotransfusion at all. A majority of all participating centers without onsite cardiac surgery (74/128, 58%), but only 27/61 (44%) of centers with onsite cardiac surgery, denied performing autotransfusion. Fifteen out of 189 (10%) centers report other approaches. Three centers (2%) did not answer. Most of the low‐volume (45/69, 65%), mid‐volume (45/69, 65%), and high‐volume (28/51, 55%) EP centers, as well as EP centers with (33/61, 54%) and without (85/128, 66%) onsite cardiac surgery, chose surgical treatment if the bleeding did not stop after all conventional treatment options.
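The stratified shares of centers performing any autotransfusion, quoted later in the discussion (61% of high‐volume, 51% of mid‐volume, and 32% of low‐volume centers), follow from the counts above as the complement of the centers that do not perform autotransfusion at all. A small Python sketch makes the derivation explicit:

```python
# Derive the share of centers performing autotransfusion per volume stratum
# as the complement of those not performing it at all (counts from the text).
strata = {
    "low-volume": (69, 47),   # (total centers, centers not performing autotransfusion)
    "mid-volume": (69, 34),
    "high-volume": (51, 20),
}

for label, (total, none) in strata.items():
    performing = total - none
    pct = round(100 * performing / total)
    print(f"{label}: {performing}/{total} = {pct}% perform autotransfusion")
# low-volume: 22/69 = 32%, mid-volume: 35/69 = 51%, high-volume: 31/51 = 61%
```

The derived 32%, 51%, and 61% match the figures cited in the discussion of autotransfusion.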
Fifteen out of 69 (22%) of the low‐volume, 10/69 (15%) of the mid‐volume, and 9/51 (18%) of the high‐volume centers, as well as 12/61 (20%) of centers with and 22/128 (17%) of centers without onsite cardiac surgery, chose surgical treatment after a certain amount of blood had been aspirated and bleeding continued. The remaining centers have a different approach, which was not further specified.

Monitoring and subsequent therapy

Most participating centers monitor their patients in an intensive care unit (ICU) once pericardial tamponade is successfully treated, irrespective of center volume (low‐volume: 81%; mid‐volume: 80%; high‐volume: 78%) and onsite cardiac surgery (with onsite cardiac surgery: 77%; without onsite cardiac surgery: 81%). Thirteen out of 69 (19%) of the low‐volume, 26/69 (38%) of the mid‐volume, and 22/51 (43%) of the high‐volume centers, as well as 24/61 (49%) of the centers with and 36/128 (30%) of the centers without onsite cardiac surgery, stated that they routinely apply nonsteroidal anti‐inflammatory drugs (NSAIDs), colchicine, or cortisone after pericardial tamponade. Figure gives details on the centers' strategies for oral anticoagulant (OAC) restart after pericardiocentesis.

DISCUSSION

Evidence on the best management of pericardial tamponade during EP procedures is lacking. Nevertheless, most German EP centers have institutional standards, which were recently interrogated during a national survey on the management of cardiac tamponade in German EP centers via a standardized postal questionnaire including queries on infrastructure, precautions, periprocedural safety, acute management of cardiac tamponade, and subsequent therapy. This subanalysis reveals further interesting insights into the impact of a center's ablation volume and of onsite cardiac surgery on the ablation spectrum, patient selection, technical and personnel infrastructure, periprocedural safety conditions, acute management of cardiac tamponade, and subsequent therapy.
The main findings of the underlying analysis are as follows:

1. Whereas VT ablation, and especially epicardial VT ablation, is mostly performed at high‐volume centers and centers with onsite cardiac surgery, no differences were observed for the spectrum of other ablation procedures.
2. Irrespective of center volume and onsite cardiac surgery, neither BMI nor age was reported to be a general exclusion criterion for ablation procedures.
3. High‐volume centers and centers with onsite cardiac surgery more often have dedicated EP laboratories and dedicated EP‐nursing teams.
4. In case of cardiac tamponade, protamine is routinely administered, while autotransfusion in general is more often performed at high‐volume centers and centers with onsite cardiac surgery.
5. High‐volume centers and centers with onsite cardiac surgery more often routinely apply NSAIDs, colchicine, or cortisone after pericardial tamponade.

4.1 Ablation spectrum and platforms

This subanalysis demonstrates that center volume and onsite cardiac surgery do not impact the ablation spectrum in general. The only exception is that VT ablation, in particular with epicardial access, is mainly performed at high‐volume centers and centers with onsite cardiac surgery. The recent 2019 HRS/EHRA/APHRS/LAHRS consensus document on VT ablation recommends onsite surgical backup for the conduction of epicardial VT procedures. Epicardial access and ablation bear various risks, such as injury to adjacent organs like the liver, colon, or coronary arteries, and puncture or laceration of the right ventricle as well as of the aorta, which might necessitate urgent surgical repair. , In 2020, Fink et al. retrospectively analyzed 34 982 consecutive patients undergoing diagnostic EP studies and/or catheter ablation of arrhythmias to identify predictors of tamponades with a severe course.
In their study, the frequency of tamponade was mainly dependent on the type of procedure performed and was highest in patients undergoing VT ablation with epicardial access (9.4%). Furthermore, among others, endocardial VT ablation and procedures requiring an epicardial approach were found to be independent predictors of severe tamponade (e.g., requiring surgical repair, or associated with periprocedural death). In that analysis, surgical repair was necessary in >12% of patients with cardiac tamponade overall, in >10% of patients undergoing LA procedures, and in >21% of patients undergoing endocardial or epicardial VT ablation, which underlines the need for in‐house surgical backup during complex endocardial and epicardial VT ablation and supports the current strategy in German EP centers. The survey furthermore demonstrated that, irrespective of center volume and onsite cardiac surgery, AF ablation is the most commonly performed procedure (~40% of all procedures). In addition, within this survey, most centers reported RF current, with or without a 3D mapping system, to be the most widely applied ablation strategy. One might therefore conclude that even low‐volume centers and centers without onsite cardiac surgery more often perform RF‐based than balloon‐based AF ablation, although existing data demonstrate a reduced frequency of cardiac tamponade when balloon devices are used. This might be because RF‐based ablation is still the most established ablation strategy with the greatest wealth of experience. However, it remains unclear whether the reported AF ablation procedures were first‐time (de novo) procedures, which might have been suitable for balloon‐based ablation, or re‐do procedures, which are usually conducted using RF, as more complex ablation strategies might have been necessary in these cases.
4.2 Infrastructure

This subanalysis revealed that high‐volume centers and centers with onsite cardiac surgery more often have dedicated EP laboratories and dedicated EP‐nursing teams. In addition, these centers also more often reported performing regular trainings with the EP team to prepare for emergency intervention in case of cardiac tamponade. A potential benefit of a more specialized infrastructure and purposefully trained staff might be an improvement not only in routine processes but also in the management of complications. However, as groups were divided according to center volume, and therefore overall caseload, the impact of the individual operator's experience on clinical outcome cannot be judged conclusively.

4.3 Patient selection

One of the most important steps to prevent major complications during catheter ablation is patient selection, and higher BMI, age, and international normalized ratio levels are considered risk factors by physicians worldwide. Most of the low‐, mid‐, and high‐volume centers and centers with and without onsite cardiac surgery reported that BMI did not serve as a general exclusion criterion for any kind of EP procedure. Nevertheless, irrespective of center volume and onsite cardiac surgery, most institutions still have BMI limits for left atrial and/or ventricular procedures. Although the prevalence of obesity is increasing, data on periprocedural complication rates of catheter ablation for arrhythmias are scarce. Recently, Schenker et al. reported that obesity did not have a significant impact on the incidence of periprocedural complications after catheter ablation. They retrospectively analyzed 1000 consecutive patients undergoing catheter ablation of arrhythmias and found a major complication rate of 3.1% without a significant impact of BMI on the rate of major adverse events.
However, radiation exposure and procedure duration were shown to be increased in obese patients, and ablation outcomes might be worse than in patients with normal weight, although the improvement in quality of life might be consistent across all BMIs. Importantly, although several reports have shown that catheter ablation can be safely performed in the elderly, , , many German EP centers refrain from catheter ablation in patients aged >75 years, and nearly all centers that participated in this survey refuse to perform left atrial/ventricular procedures in octogenarians. This subanalysis demonstrates that this applies to low‐, mid‐, and high‐volume centers, as well as to centers with and without onsite cardiac surgery. Thus, although various data exist with regard to age and its association with peri‐interventional risk, more robust data are needed to enhance validity.

4.4 Autotransfusion

Autotransfusion might be of particular importance in massive bleeding caused by steam pop or perforation requiring continuous aspiration of blood over a longer period; it can prevent the need for allogeneic blood transfusion and its associated risks, such as allergies or severe infection, and might in rare situations enable completion of the ablation procedure. , , However, there is no evidence for the superiority of autotransfusion, and no general recommendation on whether, or at which timepoint, to perform autotransfusion during the management of cardiac tamponade. Furthermore, direct retransfusion of pericardial blood carries the risks of hemolysis or thromboembolism. Nevertheless, a majority of all participating high‐volume centers (61%), half of all mid‐volume centers (51%), but only a minority of low‐volume centers (32%), reported performing autotransfusion in case of cardiac tamponade during EP procedures.
A majority of all participating centers without onsite cardiac surgery (74/128, 58%), but only 27/61 (44%) of centers with onsite cardiac surgery, denied performing autotransfusion. Further randomized trials are needed to assess the benefit of autotransfusion in the management of cardiac tamponade.

LIMITATIONS

The underlying survey was conducted only in Germany, with a response rate of roughly 50% of the addressed EP centers. Of note, any survey is limited by recording perceptions rather than prospective raw data.
Nevertheless, the underlying analysis provides first insights into the differences in infrastructure and management of cardiac tamponade depending on center volume and onsite cardiac surgery. However, further data are needed to reveal a potential impact on clinical outcome.

CONCLUSION

Center volume and onsite cardiac surgery did not impact patient selection. However, institutional infrastructure, periprocedural safety precautions, and the acute management of cardiac tamponade are still inhomogeneous, especially with regard to the conduction of pericardiocentesis, the handling of hemostasis, and subsequent therapy.

Laura Rottner received travel grants from EPD Solutions/Philips (KODEX‐EPD). Andreas Metzner received speaker's honoraria and travel grants from Medtronic, Biosense Webster, Bayer, Boehringer Ingelheim, EPD Solutions/Philips (KODEX‐EPD), and Cardiofocus. Andreas Rillig received travel grants from Biosense, Medtronic, St. Jude Medical, Cardiofocus, EP Solutions, Ablamap, and EPD Solutions/Philips (KODEX‐EPD) and lecture and consultant fees from St. Jude Medical, Medtronic, Biosense, Cardiofocus, Novartis, and Boehringer Ingelheim. Bruno Reißmann received speaker's honoraria and travel grants from Medtronic. Jan‐Per Wenzel received funding from the German Foundation of Heart Research (F/29/19) unrelated to this project, and unrelated travel grants from Boston Scientific. Daniel Steven received speaker's honoraria and travel grants from Medtronic, Biosense Webster, Bayer, and Boston Scientific. Philipp Sommer received financial support for advisory board activities from Abbott, Biosense Webster, Boston Scientific, and Medtronic (no personal compensation).
Paulus Kirchhof received research support for basic, translational, and clinical research projects from the European Union, the British Heart Foundation, the Leducq Foundation, the Medical Research Council (UK), and the German Centre for Cardiovascular Research, as well as from several drug and device companies active in AF, and has received honoraria from several such companies in the past, but not in the last three years. He is listed as an inventor on two patents held by the University of Birmingham (Atrial Fibrillation Therapy WO 2015140571 and Markers for Atrial Fibrillation WO 2016012783).
The cell biology of fertilization: Gamete attachment and fusion
During sexual reproduction, the oocyte and sperm fuse to generate a new and unique embryo. The journey of a sperm to an egg ends in the ampulla of the female oviduct. Along the way, the sperm must overcome a number of physical and biochemical barriers. After undergoing the acrosome reaction and binding the ovum, the sperm penetrates through the cumulus oophorus cells and the zona pellucida (ZP) to reach the perivitelline space (PVS) and oocyte membrane. Upon fusion of the sperm and egg membranes, the sperm nucleus and organelles are incorporated into the egg cytoplasm. An understanding of the mechanisms of mammalian fertilization is crucial to treat infertility and develop new methods of birth control. Infertility affects 15% of couples globally, and in one third of these cases, the underlying cause is unknown ( ). Developments in assisted reproductive technologies have provided couples with new options to conceive but may have epigenetic side effects ( ). Furthermore, only 40% of couples manage to have a child despite 2 yr of treatment. Safety, efficacy, and acceptability of contraceptives are also critically important, but many current female contraceptive methods have side effects that limit long-term use ( ), while male contraceptives are limited to condoms or vasectomy ( ). A better understanding of the molecular players involved in fertilization is necessary to drive innovation in both assisted reproductive technologies and contraception. In this review, we will first briefly review the events that prepare the gametes for fertilization. We will then discuss how recent studies of genetically altered mice and structural biology efforts have shed light on the molecular mechanisms of sperm–egg attachment and fusion. We will also discuss the gaps in current knowledge and suggest new perspectives and future directions in the search for other protein factors involved at the gamete fusion synapse.
Fertilization requires proper gametogenesis (oogenesis in the female and spermatogenesis in the male), which produces haploid cells and introduces diversity. Primordial germ cells (PGCs) are the embryonic precursors to spermatocytes and ova. The cells produced by the first few divisions of the fertilized egg are totipotent and capable of differentiating into any cell type, including germ cells. PGCs originate within the primary ectoderm of the embryo and then migrate into the yolk sac. Between weeks 4 and 6, the PGCs migrate back into the posterior body wall of the embryo, where they stimulate cells of the adjacent coelomic epithelium and mesonephros to form primitive sex cords and induce the formation of the genital ridges and gonads. The sex (gonadal) cords surround the PGCs and give rise to the tissue that will nourish and regulate the development of the maturing sex cells (ovarian follicles in the female and Sertoli cells in the male).

Egg

Oogenesis is a complex differentiation process by which mature functional ova develop from germ cells ( ; ). In humans, oogenesis begins in the ovary at 6–8 wk of fetal development, when PGCs differentiate into oogonia. By the 12th week, several million oogonia enter prophase of the first meiotic division and become dormant until shortly before ovulation ( ). Due to their large and watery nuclei, these cells are referred to as germinal vesicles ( ). These primary oocytes become enclosed by follicle cells to form primordial follicles. The number of primordial follicles peaks at ∼7 million by the fifth month of fetal life, with ∼700,000 left at birth and 400,000 by puberty ( ). All of the egg cells that the ovaries will release are already present at birth. During each menstrual cycle, hormones from the hypothalamic–pituitary–gonadal axis restart the division of the primary oocytes in meiosis I and follicular development ( ).
Primary follicles develop into secondary follicles, in which each growing oocyte is surrounded by two or more layers of proliferating follicle cells. ZP glycoproteins are secreted by the oocyte of the primary follicle and possibly the follicular cells ( ). Although these glycoproteins form a physical barrier between the follicle cells and the oocyte, follicle cells and the oocyte remain connected through transzonal cytoplasmic projections from the follicle cells until fertilization ( ). A reciprocal dialog between the oocyte and its surrounding follicular cells coordinates the different phases of follicular development and the maintenance of meiotic arrest ( ). Oocyte-derived microvilli control female fertility by optimizing ovarian follicle selection in mice ( ). The epithelium of 5–12 primary follicles proliferates to form a multilayered capsule around the oocyte. A few of these growing follicles continue to enlarge in response to follicle-stimulating hormone (FSH; ). A single follicle becomes dominant, and the others degenerate by atresia ( ). Meiosis of the oocyte in the mature preovulatory follicle is blocked until a surge in levels of FSH and luteinizing hormone that occurs midway through the menstrual cycle. The membrane of the germinal vesicle nucleus breaks down, the chromosomes align in metaphase, and the oocyte expels its first polar body. The secondary oocyte then begins the second meiotic division, which is arrested at the meiotic metaphase II stage until ovulation ( ). Ovulation depends on the breakdown of the follicle wall and occurs ∼38 h after the increase in levels of FSH and luteinizing hormone ( ). The disruption of the follicle wall expels the oocyte, which is captured by the fimbriated mouth of the oviduct and moved into the ampulla. The oocyte retains its ability to be fertilized for ∼24 h and completes meiosis only if it is fertilized.
Sperm

In contrast to oogenesis, which is complete before birth, spermatogenesis is a continuous process that begins at puberty ( ). In humans, spermatogenesis takes 74 d to complete; thus, multiple spermatogenesis events occur simultaneously to allow for continual sperm production. Spermatogenesis occurs in the testis in a stepwise manner, beginning with diploid spermatogonia at the basal surface of seminiferous tubules and ending with mature elongated spermatozoa that are released in tubule lumens in a process called spermiation ( ; ). During spermatogenesis, mitosis results in gene amplification, meiosis results in genome reduction, and finally maturation occurs ( ). At this stage, sperm are not motile and are fertilization incompetent. Two additional sperm maturational processes are required outside the testis. First, sperm undergo a maturation process during epididymal transit ( ) involving posttranslational modifications of previously synthesized proteins and acquisition of proteins from the epididymal epithelium ( ; see text box). After ejaculation into the female reproductive tract, dilution triggers additional changes in sperm, collectively termed capacitation (see text box), that prepare the sperm for the acrosome reaction.

Epididymal maturation

Sperm exchange with the epididymal epithelium occurs by direct interaction with epithelial cells, by interaction with soluble proteins in the epididymal fluid, or via extracellular exosome-like vesicles released by epithelial cells called epididymosomes ( ). The purposes of this exchange are to redistribute sperm proteins and change the composition and lipid balance of the sperm membrane. These changes take place during the transit from the epididymis initial segment, through the caput and the corpus, to the cauda where sperm are stored ( ). Epididymal transit lasts 10–12 d in mammals, but storage is dependent on sexual activity.
Since fertilization is not immediate, the fertilizing capacities of spermatozoa are preserved by decapacitation factors that are active in the epididymis. An example of a decapacitation factor is SPINK3, which is secreted by the seminal vesicles; it impairs sperm membrane hyperpolarization and calcium influx through CatSper ( ). Epididymal plasma and sperm represent only a small fraction (5%) of semen in men ( ). Two thirds of the volume of semen comes from the seminal vesicles and the other third from the prostate. These secretions protect the sperm and prevent early maturation.

Sperm capacitation

More than 70 yr ago, Austin and Chang described capacitation as the changes required for sperm to fertilize oocytes in vivo ( ; ). Once sperm enter the female reproductive tract, they undergo capacitation. Capacitation results in hyperactivation of sperm movement and initiation of the acrosome reaction ( ; ). During capacitation, stabilizing or decapacitation factors that are adsorbed on the sperm plasma membrane are removed ( ). The agents that initiate removal of decapacitation factors are electrolytes, energy substrates, and proteins such as seminal plasma protein or albumin. Removal of decapacitation factors increases sperm plasma membrane fluidity, allowing an increase in the permeability to calcium, chloride, and bicarbonate ions ( ). Sperm motility depends on the membrane potential, intracellular pH, and balance of intracellular ions (reviewed in ). The most important ion for this function is Ca 2+ ( ). This secondary messenger is an important activator of signaling pathways that regulate sperm motility ( ). The activation of soluble adenylyl cyclases generates cyclic adenosine monophosphate, which in turn activates serine/threonine protein kinase A, inducing a cascade of protein phosphorylation that initiates sperm motility ( ). Protein phosphorylation, sperm hyperactivation, and the acrosome reaction are used in vitro to evaluate capacitation.
Capacitation can be induced in vitro by incubation in medium containing calcium, bicarbonate ions, and serum albumin ( ). Mammalian sperm capacitation occurs during sperm migration in the female tract. Mammalian males ejaculate millions of sperm cells into the female reproductive tract, but only a few hundred sperm at most reach the oocytes. This massive elimination process likely prevents polyspermy (reviewed in ). Selection of human sperm during the journey begins in the acidic environment of the vagina. In the cervix, only morphologically normal sperm can migrate. Some sperm immediately pass into the cervical mucus, whereas the remaining sperm becomes a part of the coagulum. The next selection occurs at the uterus–tubal junction, the connection between the uterus and the oviduct that represents a major obstacle for sperm migration ( ). Experiments in mice indicate that sperm motility alone is insufficient for sperm migration through the uterus–tubal junction ( ). Uterine contractions facilitate sperm transport as do molecular interactions. Several proteins, such as ADAM3 and other ADAM family members, are known to be involved in this step in mice ( ; ); most ADAM proteins have human orthologues. Spermatogenesis takes place in a species-specific cycle called the seminiferous epithelial cycle and is regulated in particular through the hypothalamic–pituitary–testicular axis. Indeed, at puberty, the testes (interstitial steroidogenic Leydig cells) secrete an increased amount of testosterone, which triggers growth of the testes, maturation of the seminiferous tubules, and the commencement of spermatogenesis. The Sertoli cells are the major somatic cells present in the seminiferous tubules and are considered to be the main regulators of spermatogenesis. 
They orchestrate spermatogenesis by supporting spermatogonial stem cells, determining the testis size, organizing meiotic and postmeiotic development and sperm output, supporting androgen production by maintaining the development and function of Leydig cells, and regulating other aspects of testis function like peritubular myoid cells, immune cells, and the vasculature, which participate in the maintenance of the spermatogonial stem cell niche. Oogenesis is a complex differentiation process by which mature functional ova develop from germ cells ( ; ). In humans, oogenesis begins in the ovary at 6–8 wk of fetal development, when PGCs differentiate into oogonia. By the 12th week, several million oogonia enter prophase of the first meiotic division and become dormant until shortly before ovulation ( ). Due to its large and watery appearance, the nucleus of these dormant oocytes is referred to as the germinal vesicle ( ). These primary oocytes become enclosed by follicle cells to form primordial follicles. The number of primordial follicles peaks at ∼7 million by the fifth month of fetal life, with ∼700,000 left at birth and 400,000 by puberty ( ). All of the egg cells that the ovaries will release are already present at birth. During each menstrual cycle, hormones from the hypothalamic–pituitary–gonadal axis restart the division of the primary oocytes in meiosis I and follicular development ( ). Primary follicles develop into secondary follicles, each containing a growing oocyte surrounded by two or more layers of proliferating follicle cells. ZP glycoproteins are secreted by the oocyte of the primary follicle and possibly the follicular cells ( ). Although these glycoproteins form a physical barrier between the follicle cells and the oocyte, the follicle cells and the oocyte remain connected through transzonal cytoplasmic projections from the follicle cells until fertilization ( ).
A reciprocal dialog between the oocyte and its surrounding follicular cells coordinates the different phases of follicular development and the maintenance of meiotic arrest ( ). Oocyte-derived microvilli control female fertility by optimizing ovarian follicle selection in mice ( ). The epithelium of 5–12 primary follicles proliferates to form a multilayered capsule around the oocyte. A few of these growing follicles continue to enlarge in response to follicle-stimulating hormone (FSH; ). A single follicle becomes dominant, and the others degenerate by atresia ( ). Meiosis of the oocyte in the mature preovulatory follicle is blocked until a surge in levels of FSH and luteinizing hormone that occurs midway through the menstrual cycle. The membrane of the germinal vesicle nucleus breaks down, the chromosomes align in metaphase, and the oocyte expels its first polar body. The secondary oocyte then begins the second meiotic division, which is arrested at the meiotic metaphase II stage until ovulation ( ). Ovulation depends on the breakdown of the follicle wall and occurs ∼38 h after the increase in levels of FSH and luteinizing hormone ( ). The disruption of the follicle wall expels the oocyte, which is captured by the fimbriated mouth of the oviduct and moved into the ampulla. The oocyte retains its ability to be fertilized for ∼24 h and completes meiosis only if it is fertilized. In contrast to oogenesis, which is complete before birth, spermatogenesis is a continuous process that begins at puberty ( ). In humans, spermatogenesis takes 74 d to complete; thus, multiple spermatogenesis events occur simultaneously to allow for continual sperm production. Spermatogenesis occurs in the testis in a stepwise manner, beginning with diploid spermatogonia at the basal surface of seminiferous tubules and ending with mature elongated spermatozoa that are released in tubule lumens in a process called spermiation ( ; ).
During spermatogenesis, mitosis results in gene amplification, meiosis results in genome reduction, and finally maturation occurs ( ). At this stage, sperm are not motile and are fertilization incompetent. Two additional sperm maturational processes are required outside the testis. First, sperm undergo a maturation process during epididymal transit ( ) involving posttranslational modifications of previously synthesized proteins and acquisition of proteins from the epididymal epithelium ( ; see text box). After ejaculation into the female reproductive tract, dilution triggers additional changes in sperm, collectively termed capacitation (see text box), that prepare the sperm for the acrosome reaction.
Epididymal maturation
Sperm exchange with the epididymal epithelium occurs by direct interaction with epithelial cells, by interaction with soluble proteins in the epididymal fluid or via extracellular exosome-like vesicles released by epithelial cells called epididymosomes ( ). The purposes of this exchange are to redistribute sperm proteins and change the composition and lipid balance of the sperm membrane. These changes take place during the transit from the epididymis initial segment, through the caput and the corpus, to the cauda where sperm are stored ( ). Epididymal transit lasts 10–12 d in mammals, but storage is dependent on sexual activity.
The acrosome is a secretory vesicle located on the anterior region of sperm that originates from the spermatid Golgi apparatus. An acrosomal granule is formed by the fusion of proacrosomal vesicles in the vicinity of the nucleus. The region increases in size and spreads over the anterior part of the nucleus.
The acrosome reaction is driven by SNARE complexes and results in the exocytosis of the contents of the acrosome upon fusion of the plasma membrane with the outer acrosomal membrane ( ; reviewed in ; ). The timing of the acrosome reaction is critical. Only sperm that have undergone this reaction are fertilization competent, but when a high proportion of sperm undergo the acrosome reaction prematurely, success of in vitro fertilization is low ( ). Several studies indicate that only a fraction of sperm is capable of undergoing spontaneous acrosome reaction. In human and mouse sperm samples, 15–20% of cells undergo spontaneous acrosome reaction ( ), whereas only 20–30% undergo progesterone-induced acrosome reaction ( ), suggesting physiological heterogeneity of the sperm population. In addition, Inoue et al. demonstrated that acrosome-reacted mouse spermatozoa recovered from the perivitelline space (PVS) can fertilize other eggs ( ). Based on in vitro data, it was thought that the acrosome reaction occurs when the sperm contacts the ZP, particularly the ZP3 protein ( ). Using transgenic mice that express fluorescent markers in the acrosome ( ) and the midpiece mitochondria ( ), real-time observation of acrosomal exocytosis was possible. These experiments showed that most mouse spermatozoa capable of fertilization had undergone the acrosome reaction before contact with the oocyte ZP ( ). Most spermatozoa begin to react in the isthmus of the oviduct before reaching the ampulla ( ; ). Contact with the ZP in vitro probably makes it possible to complete a partial acrosome reaction. The most important function of the acrosome reaction is to induce changes in the sperm membrane ( ). The relocation of IZUMO1 and SPACA6, proteins essential for sperm–egg fusion, after the acrosome reaction is an illustrative example of these changes ( ; ; ).
The presence of these proteins on the sperm membrane, in addition to the classic markers Pisum sativum agglutinin, peanut agglutinin, and CD46, can be used to detect the acrosome reaction ( ). The acrosome and its disruption are both crucial for effective fertilization, as low fertilization rates are observed upon intracytoplasmic sperm injection of acrosome-intact sperm ( ) or round spermatozoa lacking acrosomes ( ). The ZP is a physical barrier between the oocyte and the follicular cells that forms from glycoproteins secreted from the primary follicles ( ). The human ZP consists of four glycoproteins (hZP1–hZP4; ). Mice, which have been used for most of the ZP studies in mammals, express only three ZP glycoproteins (mZP1–mZP3; ). Analysis of mouse lines expressing human ZP proteins demonstrated that only hZP2 is important in human sperm–egg binding ( ). Experiments using purified native or recombinant human ZP proteins have shown that hZP1, hZP3, and hZP4 bind to capacitated human spermatozoa and induce the acrosome reaction ( ). ZP1 is required for the structural integrity of the ZP ( ). To better understand the roles of ZP glycoproteins, further studies, particularly on ZP protein glycosylation, are needed. The species-specific binding of the ZP to sperm is presumably related to these carbohydrate moieties ( ). The sialyl-Lewis(x) sequence is the major carbohydrate ligand for human sperm–egg binding ( ). The current hypothesis that hZP1, hZP3, and hZP4 bind to capacitated sperm and hZP2 binds to sperm with intact acrosomes will need to be revisited due to the recent demonstration that the acrosome reaction takes place before ZP contact. Regardless, the role of the ZP in preventing polyspermy is clear. Indeed, ZP hardening is due to ZP2 cleavage by ovastacin, a protease released into the PVS by cortical granules after the first sperm–egg fusion ( ).
After penetration of the ZP, the sperm enters the PVS and can attach and fuse with the egg plasma membrane. The development of genetic knockout animal models has proven critical in determining the importance of various sperm and egg proteins in sperm–egg attachment and fusion. Surprisingly, genetic knockout studies revealed that many factors originally thought to be important for fertilization were in fact not necessary (reviewed in , ). The proteins from sperm and egg that are essential for sperm–egg membrane interaction and fusion are listed in and are discussed individually from a structural and functional perspective in the sections below.
Sperm IZUMO1
In 2005, Inoue et al. discovered that homozygous Izumo1−/− mice are healthy and show normal mating behavior, but males are infertile. IZUMO1 is named after a shrine in Japan that honors the deity for marriage ( ). The spermatozoa of Izumo1−/− mice can undergo the acrosome reaction and penetrate the ZP but fail to fuse with oocytes. When the fusion step is bypassed using intracytoplasmic sperm injection, Izumo1−/− spermatozoa can fertilize oocytes, resulting in offspring; thus, IZUMO1 is only necessary at the adhesion/fusion stage of fertilization. An anti-IZUMO1 antibody, OBF13, completely abolishes gamete fusion by blocking IZUMO1 from binding to its receptor. There are four IZUMO family members ( ), but in mice, IZUMO1 is the only paralog that is essential to fertilization ( ). IZUMO1 is a type I transmembrane protein consisting of 350 residues that is expressed exclusively in sperm ( ; ; ). As sperm transit through the epididymis, IZUMO1 undergoes posttranslational modifications. In immature spermatozoa isolated from the proximal caput region of the epididymis, IZUMO1 is localized to both the acrosome and flagella of spermatozoa and is phosphorylated at two sites (S339 and S346; ).
In the cauda epididymis, IZUMO1 is found predominantly in the acrosome of spermatozoa and is phosphorylated at seven residues (S346, S352, S356, S366, T372, S374, and S375; ). Cell-based fluorescence studies show that after the acrosome reaction, IZUMO1 is relocated to the membrane surface in the equatorial segment of the acrosome ( ). Three crystal structures of the human and mouse IZUMO1 ectodomain were recently published ( ; ; ). In one structure, IZUMO1 is in an upright conformation; however, other crystallographic structures are angled at the hinge region in a “boomerang” shape, which is also observed in solution small-angle x-ray scattering studies ( ). The structural discrepancy is not unusual, because the crystal lattice can induce distortions. The crystal structures of the human and mouse IZUMO1 ectodomain show that the extracellular region is organized into two domains, an N-terminal four-helix bundle (4HB) and an Ig-superfamily (IgSF) domain ( ; ; ; ). The two domains are connected by a β-hairpin that serves as a flexible hinge. There are five disulfide bonds, one buried at the protein core and four others that are solvent exposed on the surface. Three disulfide bonds connect the N-terminal 4HB domain to the hinge region, and the fourth links the hinge region to the IgSF domain. Interestingly, IZUMO1 shows marked similarities to two protozoan Plasmodium sp. parasite proteins: TRAP, which plays a critical role in gliding motility and host invasion ( ), and SPECT1, which plays a role in host cell fusion and hepatocyte invasion ( ; see text box).
Perspectives: Similarity to Plasmodium host invasion proteins
The β-hinge region of IZUMO1 is highly similar to an extensible β-ribbon region in TRAP (root-mean-square deviation [RMSD] 1.4 Å; ). The TRAP β-ribbon has been proposed to undergo conformational changes upon binding to a host cell to mediate sporozoite gliding and host cell invasion.
The IZUMO1 4HB domain shows structural similarities to Plasmodium berghei SPECT1 (RMSD 3.3 Å; ; ). SPECT1 is required for cell traversal of sporozoites. Both SPECT1 and IZUMO1 adopt 4HBs with the same connectivity. In SPECT1, the 4HB is proposed to be a metastable structure that transitions from a solvent-accessible to a membrane-associated state. It has also been proposed that SPECT1 interacts with SPECT2, which has a membrane-attack complex/perforin domain, to form a pore. How the two proteins cooperatively mediate pore formation remains to be determined, but the similarity of IZUMO1 to proteins involved in parasite entry is intriguing.
Oocyte JUNO
In 2014, Bianchi et al. made the groundbreaking discovery of the oocyte receptor for IZUMO1 ( ). The group iteratively cloned and expressed the entire mouse oocyte cDNA library in mammalian cells and tested each clone for IZUMO1 binding using avidity-based extracellular interaction screening (AVEXIS; ). Folate receptor δ (or folate receptor 4), which was aptly renamed JUNO, after the Roman goddess of marriage and fertility, was the only protein that bound to IZUMO1. Mouse JUNO shares 58% sequence identity with human folate receptors FOLR-α and FOLR-β but does not bind to folate ( ; ). Juno−/− mice show normal development and mating behaviors, but females are infertile, and eggs from Juno−/− mice are unable to fuse with wild-type sperm ( ). Moreover, an anti-JUNO antibody incubated with human zona-free oocytes effectively blocks fertilization ( ). While JUNO is primarily expressed on the surface of oocytes, it is also expressed on CD4+CD25+ regulatory T cells, albeit at a much lower level ( ). JUNO is highly expressed in unfertilized eggs but upon fusion with sperm is rapidly shed from the cell surface into extracellular vesicles ( ). By the anaphase II stage, which takes place 30–40 min after fertilization, JUNO is barely detectable at the cell surface ( ).
The rapid removal of JUNO from the egg surface may help prevent the entry of more than one sperm into an oocyte. JUNO is a glycoprotein of 250 residues with a C-terminal glycosylphosphatidylinositol anchor. The crystal structures of human and mouse JUNO, both alone and in complex with human IZUMO1, were determined in 2016 ( ; ; ; ). The overall structure of human JUNO resembles structures of FOLR-α and FOLR-β with RMSDs of 1.1 Å and 1.0 Å, respectively. Like the folate receptors, JUNO has a compact, globular shape with five α helices, three 3₁₀ helices, and four short β strands stabilized by eight conserved disulfide bonds ( ). Despite its structural homology to folate receptors, five key residues in JUNO (A93, G121, Q122, R154, and G155) are not conserved compared with the folate-binding sites of FOLR-α and FOLR-β ( ). The aromatic and charged residues that in FOLR-α and FOLR-β anchor folate in the binding site through hydrogen bonds are replaced by alanine or glycine in JUNO, resulting in a larger cavity that cannot bind folate. Recombinant IZUMO1 binds to oocytes (and to nongamete human cells transfected with JUNO) but does not bind to oocytes that have been preincubated with an anti-JUNO antibody ( ). The cocrystal structure of IZUMO1 in complex with JUNO reveals a 1:1 stoichiometry with a binding interface of ∼910 Å² ( ). Biolayer interferometry, surface plasmon resonance, and isothermal titration calorimetry revealed that the complex of JUNO and IZUMO1 has a dissociation constant between 48 and 91 nM ( ; ). The tight binding affinity results from an additive effect of extensive van der Waals, hydrophobic, and aromatic interactions, as well as two salt bridges. IZUMO1 binds to JUNO primarily via the β-hairpin hinge, with four residues from the 4HB domain and two from the IgSF domain also contributing to the binding ( ). In JUNO, the binding site is an elongated surface formed by the flanking regions of helices α1–α3 and loops L1–L3.
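The reported dissociation constant can be put in intuitive terms with the standard single-site binding isotherm, fraction bound = [L]/(Kd + [L]). The short sketch below applies it to the 48–91 nM range quoted above; the 100 nM free-ligand concentration is an illustrative assumption, not a value from the text.

```python
def fraction_bound(ligand_nM: float, kd_nM: float) -> float:
    """Single-site equilibrium binding isotherm.

    Returns the fraction of receptor occupied, [L] / (Kd + [L]),
    assuming free ligand concentration ~ total ligand (ligand excess).
    """
    return ligand_nM / (kd_nM + ligand_nM)

# Illustrative only: at a hypothetical 100 nM free IZUMO1 concentration,
# the reported Kd range of 48-91 nM for the IZUMO1-JUNO complex implies
# that roughly half to two thirds of JUNO molecules are occupied.
for kd in (48.0, 91.0):
    print(f"Kd = {kd:g} nM -> fraction bound = {fraction_bound(100.0, kd):.2f}")
```

Note that occupancy falls off quickly below the Kd: at 10 nM free ligand, the same Kd range gives less than 20% occupancy, which is one way to read why tens-of-nanomolar affinity matters for a single-cell encounter.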
The IZUMO1–JUNO interaction is not strictly species specific, as there is cross-species interaction between human IZUMO1 and hamster JUNO (see text box).
Perspectives: Cross-species interactions
Fertilization is a species-specific event, as sperm typically cannot fertilize eggs from a different species. The ZP provides an effective barrier against cross-species fertilization, but beyond this glycoprotein layer, IZUMO1-JUNO recognition is promiscuous. Human sperm cannot penetrate the hamster ZP, but they can fuse with zona-free hamster eggs ( ). Indeed, zona-free hamster eggs have been used to assess human sperm quality in fertility treatments. Using the ELISA-based AVEXIS platform, human IZUMO1 was confirmed to bind to hamster JUNO in solution ( ). Like human IZUMO1, mouse and pig IZUMO1 also bind to hamster JUNO in solution ( ). The results are consistent with the ability of human, mouse, and pig sperm to fuse with zona-free hamster eggs ( ; ). Hamster and human JUNO are highly similar, with a sequence identity of 73%; however, eight residues at the IZUMO1–JUNO interface are not conserved. To understand the cross-species specificity, a homology model of hamster JUNO was generated based on the crystal structure of human JUNO. Despite key substitutions in hamster JUNO, the IZUMO1-binding site preserves the same structural architecture and physicochemical characteristics as human JUNO, with the exception of E45. In human JUNO, E45 forms a key salt bridge at the IZUMO1–JUNO interface. How an E45L substitution in hamster JUNO is able to maintain binding remains unclear. It was previously shown that E45 is critical for human JUNO recognition of IZUMO1, as E45A or E45K mutations severely reduced the interaction ( ). It may be possible that other interactions between hamster JUNO and human IZUMO1 compensate for the loss of the critical salt bridge.
A crystal structure of hamster JUNO in complex with human IZUMO1 would provide important insight into the molecular basis of cross-species specificity in IZUMO1–JUNO recognition. The binding sites on both IZUMO1 and JUNO have been verified by alanine-substitution experiments ( ). W62 and L81 in JUNO and W148 in IZUMO1 play critical roles at the interface, as substitution of these residues by alanine dramatically reduces binding affinity ( ). These residues are strictly conserved across mammalian species. To verify the biological relevance of the IZUMO1–JUNO interface, oocyte binding was tested against COS-7 cells expressing wild-type IZUMO1 or mutants carrying one or more substitutions at residues proposed to be important for JUNO binding. Mutating W148, K154, H157, I158, R160, or L163 in IZUMO1 significantly reduced oocyte binding. COS-7 cells that expressed IZUMO1 with multiple mutations at the JUNO-binding interface showed a complete lack of binding to oocytes ( ). These results confirmed the JUNO-binding residues identified in the crystal structures and biophysical studies ( ).
Oocyte CD9
The importance of CD9 in sperm–egg fusion was first described in 1999 ( ) and confirmed in 2000 ( ; ; ; ). CD9 is expressed on the plasma membrane of oocytes, and an anti-CD9 antibody inhibits sperm–egg fusion in a dose-dependent manner ( ). Interestingly, anti-CD9 antibodies do not block sperm from binding to oocytes but instead prevent the fusion of sperm and egg membranes ( ; ). These findings are consistent with mouse studies, which showed that CD9−/− mice develop normally and that male mice are fertile but female mice have dramatically reduced fertility ( ; ; ). When the sperm–egg fusion step is bypassed by injecting capacitated sperm into the cytoplasm of CD9−/− oocytes, the fertilized eggs show normal implantation efficiencies, and embryos develop normally. CD9 belongs to the tetraspanin superfamily and is 228 amino acids long.
It has four membrane-spanning domains (TM1–TM4) linked by a short extracellular loop (SEL) between TM1 and TM2 and a large extracellular loop (LEL) between TM3 and TM4 ( ). The transmembrane regions are highly conserved among tetraspanins, with sequence divergence only in the extracellular loops. The first structure of CD9 was recently determined to 2.7 Å resolution ( ). The four transmembrane helices tilt toward the cytoplasmic membrane interface to form a cone-shaped structure that creates a spacious cavity in the intramembranous region ( ). This is reminiscent of the CD81 structure; CD9 and CD81 have ∼60% sequence similarity and the same overall fold (RMSD of 1.9 Å; ; ). Previous studies have demonstrated the localization of tetraspanins in curved regions of cell membranes ( ; ). As CD9 clusters at the contact region of the egg and sperm membranes, the tight array of cone-shaped CD9 may increase the curvature of the oocyte membrane, effectively causing it to protrude. CD9-knockout oocytes produce short and sparse microvillus structures with a large radius of curvature of microvillar tips, which results in impaired fusion ability with spermatozoa ( ). The relative lengths of the SEL and LEL of tetraspanins control access to the intramembranous cavity. In silico analysis revealed that the LEL undergoes a conformational change between the open and closed states during binding partner recognition ( ). In the closed-state CD9 structure, the LEL is weakly associated with the SEL. In the open state, the LEL moves away from the SEL, thereby allowing access to the intramembranous cavity. The importance of the LEL in fertilization was demonstrated in a domain-swapping experiment, in which the LEL of a fertilization-incompetent tetraspanin CD53 was swapped with its equivalent section from CD9. The CD9 chimera carrying the CD53 LEL had dramatically reduced fertilization competency, whereas the CD53 chimera carrying the CD9 LEL was ∼50% competent ( ).
This suggests that additional regions such as the SEL and transmembrane domains are also important in fertilization. Alanine-substitution experiments on residues within the LEL have produced conflicting results. Mutation of the SFQ motif (residues 173–175) in the murine CD9 LEL suggests that these residues are essential for fertilization ( ). However, this murine LEL region is not conserved in human CD9, which has TFT at residues 175–177. Triple-alanine mutations of this region in both murine and human CD9 revealed that, contrary to previous findings, both mutants rescued fertilization in CD9−/− oocytes ( ). Further studies are required to probe the roles of specific CD9 LEL, SEL, and transmembrane residues in fertilization. Like other tetraspanins, CD9 can act as a scaffolding protein to bring together multiple protein partners to execute a biological function. For instance, CD9 associates with Igs, integrins, and other adhesion receptors and proteins (reviewed in ). Recently, interaction studies using human sperm and mouse oocytes revealed that IZUMO1 and JUNO colocalize with CD9 on the surface of the egg during sperm–egg attachment. Along with sperm IZUMO1, egg CD9 accumulates around the adhesion area, suggestive of a cis interaction with egg JUNO ( ; ). In the same context, by measuring the force necessary to break contact between one sperm and an egg, it was suggested that CD9 induces the clustering of sperm receptors on the oocyte membrane, generating fusion-competent sites ( ). Single-particle cryoelectron microscopy studies of CD9 in complex with EWI-2 provide insights into how CD9 engages with its targets. EWI-2 belongs to the IgSF, with four to eight predicted IgSF domains and a single-pass transmembrane anchor. EWI-2 is a major binding partner to both CD9 and CD81 ( ; ; ). An anti-IgSF8 antibody had moderate inhibitory effects on sperm–egg binding, suggesting that mouse EWI-2 participates in gamete interactions ( ).
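Sequence comparisons like those quoted in this section (58% identity between mouse JUNO and the human folate receptors, ∼60% similarity between CD9 and CD81, 73% identity between hamster and human JUNO) are computed from pairwise alignments. A minimal sketch of percent identity over an already-aligned pair is shown below; the sequences are invented for illustration, and real pipelines differ in how gap columns are counted.

```python
def percent_identity(aln_a: str, aln_b: str) -> float:
    """Percent identity between two equal-length aligned sequences.

    Columns containing a gap ('-') in either sequence are excluded from
    the denominator; this is one common convention among several.
    """
    if len(aln_a) != len(aln_b):
        raise ValueError("aligned sequences must have equal length")
    pairs = [(x, y) for x, y in zip(aln_a, aln_b) if x != "-" and y != "-"]
    matches = sum(1 for x, y in pairs if x == y)
    return 100.0 * matches / len(pairs)

# Toy aligned fragments (invented, not real JUNO or CD9 sequences):
# 8 gap-free columns, 7 matches -> 87.5% identity.
print(percent_identity("MKTW-ELCA", "MKSWAELCA"))
```

Similarity figures (as opposed to identity) additionally count conservative substitutions scored by a matrix such as BLOSUM62, which this sketch does not attempt.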
Cryoelectron microscopy revealed that a 2:2 heterotetrameric arrangement of the extracellular domains of two EWI-2 molecules forms a tight dimer and that the EWI-2 transmembrane helix is sandwiched by two CD9 molecules. The transmembrane helix of EWI-2 interacts with TM3 and TM4 of CD9 via hydrophobic residues ( ). The nonspecific nature of the transmembrane hydrophobic interactions in the CD9–EWI-2 complex may explain the promiscuous nature of tetraspanins. In 2005, Inoue et al. discovered that homozygous Izumo1 −/− mice are healthy and show normal mating behavior, but males are infertile. IZUMO1 is named after a shrine in Japan that honors the deity for marriage ( ). The spermatozoa of Izumo1 −/− mice can undergo acrosomal reaction and penetrate the ZP but fail to fuse with oocytes. When the fusion step is bypassed using intracytoplasmic sperm injection, Izumo1 −/− spermatozoa can fertilize oocytes, resulting in offspring; thus, IZUMO1 is only necessary at the adhesion/fusion stage of fertilization. An anti-IZUMO1 antibody, OBF13, completely abolishes gamete fusion by blocking IZUMO1 from binding to its receptor. There are four IZUMO family members ( ), but in mice, IZUMO1 is the only paralog that is essential to fertilization ( ). IZUMO1 is a type I transmembrane protein consisting of 350 residues that is expressed exclusively in sperm ( ; ; ). As sperm transit through the epididymis, IZUMO1 undergoes posttranslational modifications. In immature spermatozoa isolated from the proximal caput region of the epididymis, IZUMO1 is localized to both the acrosome and flagella of spermatozoa and is phosphorylated at two sites (S339 and S346; ). In the cauda epididymis, IZUMO1 is found predominantly in the acrosome of spermatozoa and is phosphorylated at seven residues (S346, S352, S356, S366, T372, S374, and S375; ). 
Cell-based fluorescence studies show that after the acrosomal reaction, IZUMO1 is relocated to the membrane surface in the equatorial segment of the acrosome ( ). Three crystal structures of the human and mouse IZUMO1 ectodomain were recently published ( ; ; ). In one structure, IZUMO1 is in an upright conformation; however, other crystallographic structures are angled at the hinge region in a “boomerang” shape, which is also observed in solution small-angle x-ray scattering studies ( ). The structural discrepancy is not unusual, because the crystal lattice can induce distortions. The crystal structures of the human and mouse IZUMO1 ectodomain show that the extracellular region is organized into two domains, an N-terminal four-helix bundle (4HB) and an Ig-superfamily (IgSF) domain ( ; ; ; ). The two domains are connected by a β-hairpin that serves as a flexible hinge. There are five disulfide bonds, one buried at the protein core and four others that are solvent exposed on the surface. Three disulfide bonds connect the N-terminal 4HB domain to the hinge region, and the fourth links the hinge region to the IgSF domain. Interestingly, IZUMO1 shows marked similarities to two protozoan Plasmodium sp. parasite proteins: TRAP, which plays a critical role in gliding motility and host invasion ( ), and SPECT1, which plays a role in host cell fusion and hepatocyte invasion ( ; see text box).

Perspectives: Similarity to Plasmodium host invasion proteins

The β-hinge region of IZUMO1 is highly similar to an extensible β-ribbon region in TRAP (root-mean-square deviation [RMSD] 1.4 Å; ). The TRAP β-ribbon has been proposed to undergo conformational changes upon binding to a host cell to mediate sporozoite gliding and host cell invasion. The IZUMO1 4HB domain shows structural similarities to Plasmodium berghei SPECT1 (RMSD 3.3 Å; ; ). SPECT1 is required for cell traversal of sporozoites. Both SPECT1 and IZUMO1 adopt 4HBs with the same connectivity.
In SPECT1, the 4HB is proposed to be a metastable structure that transitions from a solvent-accessible to a membrane-associated state. It has also been proposed that SPECT1 interacts with SPECT2, which has a membrane-attack complex/perforin domain, to form a pore. How the two proteins cooperatively mediate pore formation remains to be determined, but the similarity of IZUMO1 to proteins involved in parasite entry is intriguing. In 2014, Bianchi et al. made the groundbreaking discovery of the oocyte receptor for IZUMO1 ( ). The group iteratively cloned and expressed the entire mouse oocyte cDNA library in mammalian cells and tested each clone for IZUMO1 binding using avidity-based extracellular interaction screening (AVEXIS; ). Folate receptor δ (or folate receptor 4), which was aptly renamed JUNO, after the Roman goddess of marriage and fertility, was the only protein that bound to IZUMO1. Mouse JUNO shares 58% sequence identity with human folate receptors FOLR-α and FOLR-β but does not bind to folate ( ; ).
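Sequence-identity figures like the 58% quoted above depend on a counting convention: matches are usually tallied over the aligned, non-gap columns of a pairwise alignment. A minimal illustrative sketch (the short sequences are invented, not the real JUNO/FOLR alignment):

```python
# Percent identity from an existing pairwise alignment (gaps as '-').
# The sequences below are made-up examples for illustration only.

def percent_identity(aln_a: str, aln_b: str) -> float:
    """Matches / aligned positions, skipping columns where either row is a gap."""
    if len(aln_a) != len(aln_b):
        raise ValueError("aligned sequences must be the same length")
    cols = [(a, b) for a, b in zip(aln_a, aln_b) if a != "-" and b != "-"]
    if not cols:
        return 0.0
    return 100.0 * sum(a == b for a, b in cols) / len(cols)

print(percent_identity("MKTLCD-EF", "MKALCDGEF"))  # 87.5 (7 matches / 8 aligned columns)
```

Counting over the full alignment length instead of non-gap columns gives a lower number, which is one reason published identity percentages for the same protein pair can differ slightly.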
Juno −/− mice show normal development and mating behaviors, but females are infertile, and eggs from Juno −/− mice are unable to fuse with wild-type sperm ( ). Moreover, an anti-JUNO antibody incubated with human zona-free oocytes effectively blocks fertilization ( ). While JUNO is primarily expressed on the surface of oocytes, it is also expressed on CD4+CD25+ regulatory T cells, albeit at a much lower level ( ). JUNO is highly expressed in unfertilized eggs but upon fusion with sperm is rapidly shed from the cell surface into extracellular vesicles ( ). By the anaphase II stage, which takes place 30–40 min after fertilization, JUNO is barely detectable at the cell surface ( ). The rapid removal of JUNO from the egg surface may help prevent the entry of more than one sperm into an oocyte. JUNO is a glycoprotein of 250 residues with a C-terminal glycosylphosphatidylinositol anchor. The crystal structures of human and mouse JUNO, both alone and in complex with human IZUMO1, were determined in 2016 ( ; ; ; ). The overall structure of human JUNO resembles structures of FOLR-α and FOLR-β with RMSDs of 1.1 Å and 1.0 Å, respectively. Like the folate receptors, JUNO has a compact, globular shape with five α helices, three 3₁₀ helices, and four short β strands stabilized by eight conserved disulfide bonds ( ). Despite its structural homology to folate receptors, five key residues in JUNO (A93, G121, Q122, R154, and G155) are not conserved compared with the folate-binding sites of FOLR-α and FOLR-β ( ). The aromatic and charged residues that in FOLR-α and FOLR-β anchor folate in the binding site through hydrogen bonds are replaced by alanine or glycine in JUNO, resulting in a larger cavity that cannot bind folate. Recombinant IZUMO1 binds to oocytes (and to nongamete human cells transfected with JUNO) but does not bind to oocytes that have been preincubated with an anti-JUNO antibody ( ).
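RMSD values like those cited here (1.1 Å and 1.0 Å against FOLR-α and FOLR-β) are computed over matched Cα atoms after an optimal rigid-body superposition, commonly via the Kabsch algorithm. A minimal NumPy sketch on toy coordinates (not real atomic models):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two matched N x 3 coordinate sets after optimal rigid
    superposition (Kabsch algorithm)."""
    P = np.asarray(P, float) - np.mean(P, axis=0)  # remove translation
    Q = np.asarray(Q, float) - np.mean(Q, axis=0)
    H = P.T @ Q                                    # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # optimal rotation
    diff = P @ R.T - Q
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Toy check: a rigidly rotated and translated copy superposes to RMSD ~ 0.
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
theta = 0.7
R0 = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0.0,            0.0,           1]])
Q = P @ R0.T + np.array([5.0, -2.0, 3.0])
print(kabsch_rmsd(P, Q))  # effectively zero
```

The sign correction on the smallest singular vector keeps the fit from using a reflection, which is not a physical rigid motion.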
The cocrystal structure of IZUMO1 in complex with JUNO reveals a 1:1 stoichiometry with a binding interface of ∼910 Å² ( ). Biolayer interferometry, surface plasmon resonance, and isothermal titration calorimetry revealed that the complex of JUNO and IZUMO1 has a dissociation constant between 48 and 91 nM ( ; ). The tight binding affinity results from an additive effect of extensive van der Waals, hydrophobic, and aromatic interactions, as well as two salt bridges. IZUMO1 binds to JUNO primarily via the β-hairpin hinge, with four residues from the 4HB domain and two from the IgSF domain also contributing to the binding ( ). In JUNO, the binding site is an elongated surface formed by the flanking regions of helices α1–α3 and loops L1–L3. The IZUMO1–JUNO interaction is not strictly species specific, as there is cross-species interaction between human IZUMO1 and hamster JUNO (see text box).

Perspectives: Cross-species interactions

Fertilization is a species-specific event, as sperm typically cannot fertilize eggs from a different species. The ZP provides an effective barrier against cross-species fertilization, but beyond this glycoprotein layer, IZUMO1–JUNO recognition is promiscuous. Human sperm cannot penetrate the hamster ZP, but they can fuse with zona-free hamster eggs ( ). Indeed, zona-free hamster eggs have been used to assess human sperm quality in fertility treatments. Using the ELISA-based AVEXIS platform, human IZUMO1 was confirmed to bind to hamster JUNO in solution ( ). Like human IZUMO1, mouse and pig IZUMO1 also bind to hamster JUNO in solution ( ). The results are consistent with the ability of human, mouse, and pig sperm to fuse with zona-free hamster eggs ( ; ). Hamster and human JUNO are highly similar, with a sequence identity of 73%; however, eight residues at the IZUMO1–JUNO interface are not conserved. To understand the cross-species specificity, a homology model of hamster JUNO was generated based on the crystal structure of human JUNO.
Despite key substitutions in hamster JUNO, the IZUMO1-binding site preserves the same structural architecture and physicochemical characteristics as human JUNO, with the exception of E45. In human JUNO, E45 forms a key salt bridge at the IZUMO1–JUNO interface. How an E45L substitution in hamster JUNO is able to maintain binding remains unclear. It was previously shown that E45 is critical for human JUNO recognition of IZUMO1, as E45A or E45K mutations severely reduced the interaction ( ). It may be possible that other interactions between hamster JUNO and human IZUMO1 compensate for the loss of the critical salt bridge. A crystal structure of hamster JUNO in complex with human IZUMO1 would provide important insight into the molecular basis of cross-species specificity in IZUMO1–JUNO recognition. The binding sites on both IZUMO1 and JUNO have been verified by alanine-substitution experiments ( ). W62 and L81 in JUNO and W148 in IZUMO1 play critical roles at the interface, as substitution of these residues by alanine dramatically reduces binding affinity ( ). These residues are strictly conserved across mammalian species. To verify the biological relevance of the IZUMO1–JUNO interface, the binding of oocytes to COS-7 cells expressing wild-type IZUMO1 or to mutants with one or more mutations to residues proposed to be important in JUNO binding was tested. Mutating W148, K154, H157, I158, R160, or L163 in IZUMO1 significantly reduced oocyte binding. COS-7 cells that expressed IZUMO1 with multiple mutations at the JUNO-binding interface showed a complete lack of binding to oocytes ( ). These results confirmed the JUNO-binding residues identified in the crystal structures and biophysical studies ( ).
The importance of CD9 in sperm–egg fusion was first described in 1999 ( ) and confirmed in 2000 ( ; ; ; ). CD9 is expressed on the plasma membrane of oocytes, and an anti-CD9 antibody inhibits sperm–egg fusion in a dose-dependent manner ( ). Interestingly, anti-CD9 antibodies do not block sperm from binding to oocytes but instead prevent the fusion of sperm and egg membranes ( ; ).
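The dissociation constants quoted earlier for IZUMO1–JUNO (48–91 nM) have a simple physical reading: for a 1:1 equilibrium, the fraction of receptor occupied at free-ligand concentration [L] is θ = [L]/(Kd + [L]), so [L] = Kd corresponds to half-occupancy. A short sketch with an illustrative mid-range Kd (the 70 nM value is a placeholder, not a measurement from the cited studies):

```python
# Fractional occupancy for a simple 1:1 binding equilibrium,
# theta = [L] / (Kd + [L]). The 70 nM Kd is an illustrative value inside
# the reported 48-91 nM IZUMO1-JUNO range, not an experimental number.

def fraction_bound(ligand_nM: float, kd_nM: float) -> float:
    return ligand_nM / (kd_nM + ligand_nM)

KD = 70.0  # nM, illustrative
print(fraction_bound(KD, KD))      # 0.5: at [L] = Kd, half the receptor is occupied
for ligand in (7.0, 700.0):        # tenfold below / above the Kd
    print(round(fraction_bound(ligand, KD), 3))  # prints 0.091 then 0.909
```

Tenfold below or above the Kd gives roughly 9% and 91% occupancy, which is why nanomolar affinities count as tight for transient cell-surface contacts.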
These findings are consistent with mouse studies, which showed that CD9 −/− mice develop normally and that male mice are fertile but female mice have dramatically reduced fertility ( ; ; ). When the sperm–egg fusion step is bypassed by injecting capacitated sperm into the cytoplasm of CD9 −/− oocytes, the fertilized eggs show normal implantation efficiencies, and embryos develop normally. CD9 belongs to the tetraspanin superfamily and is 228 amino acids long. It has four membrane-spanning domains (TM1–TM4) linked by a short extracellular loop (SEL) between TM1 and TM2 and a large extracellular loop (LEL) between TM3 and TM4 ( ). The transmembrane regions are highly conserved among tetraspanins, with sequence divergence only in the extracellular loops. The first structure of CD9 was recently determined to 2.7 Å resolution ( ). The four transmembrane helices tilt toward the cytoplasmic membrane interface to form a cone-shaped structure that creates a spacious cavity in the intramembranous region ( ). This is reminiscent of the CD81 structure; CD9 and CD81 have ∼60% sequence similarity and the same overall fold (RMSD of 1.9 Å; ; ). Previous studies have demonstrated the localization of tetraspanins in curved regions of cell membranes ( ; ). As CD9 clusters at the contact region of the egg and sperm membranes, the tight array of cone-shaped CD9 may increase the curvature of the oocyte membrane, effectively causing it to protrude. CD9-knockout oocytes produce short and sparse microvillus structures with a large radius of curvature of microvillar tips, which results in impaired fusion ability with spermatozoa ( ). The relative lengths of the SEL and LEL of tetraspanins control access to the intramembranous cavity. In silico analysis revealed that the LEL undergoes a conformational change between the open and closed states during binding partner recognition ( ). In the closed-state CD9 structure, the LEL is weakly associated with the SEL.
In the open state, the LEL moves away from the SEL, thereby allowing access to the intramembranous cavity. The importance of the LEL in fertilization was demonstrated in a domain-swapping experiment, in which the LEL of a fertilization-incompetent tetraspanin CD53 was swapped with its equivalent section from CD9. The CD9–CD53 LEL chimera had dramatically reduced fertilization competency, whereas the chimera with the LEL from CD9, CD53–CD9 LEL, was ∼50% competent ( ). This suggests that additional regions such as the SEL and transmembrane domains are also important in fertilization. Alanine-substitution experiments on residues within the LEL have produced conflicting results. Mutation of the 173 SFQ 175 motif in the murine CD9 LEL suggests that these residues are essential for fertilization ( ). However, the murine 173 SFQ 175 LEL region is not conserved in human CD9 ( 175 TFT 177 ). Triple-alanine mutations of this region in both murine and human CD9 revealed that, contrary to previous findings, both mutants rescued fertilization in CD9 −/− oocytes ( ). Further studies are required to probe the roles of specific CD9 LEL, SEL, and transmembrane residues in fertilization. Like other tetraspanins, CD9 can act as a scaffolding protein to bring together multiple protein partners to execute a biological function. For instance, CD9 associates with Igs, integrins, and other adhesion receptors and proteins (reviewed in ). Recently, interaction studies using human sperm and mouse oocytes revealed that IZUMO1 and JUNO colocalize with CD9 on the surface of the egg during sperm–egg attachment. Along with sperm IZUMO1, egg CD9 accumulates around the adhesion area, suggestive of a cis interaction with egg JUNO ( ; ). In the same context, by measuring the force necessary to break contact between one sperm and an egg, it was suggested that CD9 induces the clustering of sperm receptors on the oocyte membrane, generating fusion-competent sites ( ).
Single-particle cryoelectron microscopy studies of CD9 in complex with EWI-2 provide insights into how CD9 engages with its targets. EWI-2 belongs to the IgSF, with four to eight predicted IgSF domains and a single-pass transmembrane anchor. EWI-2 is a major binding partner of both CD9 and CD81 ( ; ; ). An anti-IgSF8 antibody had moderate inhibitory effects on sperm–egg binding, suggesting that mouse EWI-2 participates in gamete interactions ( ). Cryoelectron microscopy revealed that a 2:2 heterotetrameric arrangement of the extracellular domains of two EWI-2 molecules forms a tight dimer and that the EWI-2 transmembrane helix is sandwiched by two CD9 molecules. The transmembrane helix of EWI-2 interacts with TM3 and TM4 of CD9 via hydrophobic residues ( ). The nonspecific nature of the transmembrane hydrophobic interactions in the CD9–EWI-2 complex may explain the promiscuous nature of tetraspanins. The use of CRISPR-Cas9 technology has led to the recent identification of six new factors essential for mammalian fertilization: SPACA6, TMEM95, SOF1, FIMP, DCST1, and DCST2.

Sperm SPACA6

In 2014, Lorenzetti et al. characterized a mutant mouse line that had a deletion removing Spaca6 ( ). Male homozygous knockout mice were infertile with a phenotype that closely resembles that of Izumo1-deficient mice. Subsequent studies by two other groups confirmed that Spaca6 deletion in male mice results in infertility, although mating behavior is normal and sperm are motile and morphologically normal ( ; ). Fertility could be restored by a transgene ( ). In a human zona-free in vitro fertilization assay, an anti-SPACA6 antibody reduced fertilization rates by threefold ( ). Recovery of oocytes from female mice that were mated with Spaca6 −/− male mice revealed that the spermatozoa were trapped in the PVS. This indicates that knockout spermatozoa migrate through the female genital tract to the oocyte and penetrate the ZP but fail to fuse with the oocyte membrane.
When Spaca6 −/− sperm was injected into the cytoplasm of oocytes to bypass the membrane fusion step, fertilization was successful, and the fertilized eggs showed normal embryonic development, suggesting that SPACA6 does not play a critical role downstream of sperm–egg fusion. SPACA6 is primarily expressed in testis, with low levels of expression in the epididymis, seminal vesicle, and ovary ( ; ). Orthologues of SPACA6 have been annotated in bull, hamster, human, mouse, rat, and zebrafish ( ). In fresh spermatozoa, SPACA6 is not detected on the plasma membrane; rather, it is localized underneath the membrane of the sperm head. After the acrosomal reaction, SPACA6 relocates to the equatorial segment of the sperm head, with reduced levels detected in the midpiece, and disappears completely from the neck region ( ). Immunofluorescence staining revealed that the localization of IZUMO1 is unaffected in Spaca6 −/− sperm before and after the acrosomal reaction ( ). To verify this result, Spaca6 −/− male mice were mated with female mice, and oocytes were extracted and immunostained with an anti-IZUMO1 antibody, revealing that IZUMO1 distribution in Spaca6 −/− spermatozoa was identical to that in wild-type spermatozoa ( ). This confirmed that SPACA6 does not affect IZUMO1 localization. Both IZUMO1 and SPACA6 belong to the IgSF and are expressed in sperm and localized to the equatorial segment upon the acrosomal reaction ( ). The phenotypes of knockout mutants are highly similar as well. The similarities also extend into their domain organization. Both proteins have an N-terminal signal peptide, followed by a helical domain, a single IgSF domain, a single N-linked glycosylation site, a monotopic transmembrane helix, and a short cytoplasmic tail ( ). Despite these similarities, the proteins are not redundant, as both cell-based and mouse studies show that one cannot compensate for the lack of the other ( ).
Moreover, SPACA6 does not accumulate at the interface with IZUMO1, and SPACA6-expressing COS-7 or HEK293T cells do not bind to the surface of the oocyte ( ; ). No interaction was detected between SPACA6 and IZUMO1 by coimmunoprecipitation from testis extracts ( ). How SPACA6 interacts with other sperm and oocyte proteins to mediate sperm–egg adhesion and fusion remains to be determined.

Sperm TMEM95

A genome-wide analysis designed to uncover genetic associations with infertility in bulls revealed the essential role of TMEM95 in fertility ( ). A nonsense mutation that introduces a premature stop codon in Tmem95 diminishes male fertility, although it does not significantly affect sperm morphology or motility ( ). TMEM95 is conserved in primary sequence among bull, hamster, mouse, rat, and human ( ; ). In bulls, TMEM95 is expressed in spermatozoa and is localized on the acrosome, on the equatorial segment, and on the connecting piece ( ). Bull spermatozoa with a knockout mutation in Tmem95 are unable to fuse with oocytes, suggesting that TMEM95 is required for sperm–oocyte fusion ( ). RT-PCR analysis revealed that in mice Tmem95 is expressed exclusively in testis. Expression begins on day 21 postpartum when spermiogenesis begins. Tmem95 −/− mice that carry a 1,919-bp deletion in the Tmem95 locus have normal mating behavior but males are infertile ( ). The Tmem95 −/− spermatozoa have normal morphology and motility and bind to oocytes; however, the mutant sperm have impaired ability to fuse with oocytes and accumulate in the PVS. Expression of a Tmem95 transgene in Tmem95 −/− male mice restored fertility ( ). The fertility of Tmem95 −/− female mice is unaffected ( ). Consistent with the study by Noda et al., Lamas-Toranzo et al. also found that Tmem95 −/− male mice were infertile ( ). In silico analysis suggested that TMEM95 shares organizational similarities with IZUMO1 ( ).
Like IZUMO1, it is a type I single-pass transmembrane protein with a signal peptide at the N terminus, a helix-rich N-terminal region, and a transmembrane helix. TMEM95 has an additional leucine-rich cytoplasmic domain compared with IZUMO1. Examination of the localization of IZUMO1 in acrosome-reacted wild-type and Tmem95 −/− sperm revealed no difference in IZUMO1 translocation ( ). Moreover, TMEM95 disappears after the acrosomal reaction ( ). Since IZUMO1 relocates to the equatorial segment only after the acrosomal reaction, this suggests that TMEM95 and IZUMO1 function independently. Experiments using the AVEXIS platform showed that TMEM95 does not bind to JUNO or IZUMO1 ( ). In contrast, coimmunoprecipitation studies using HEK293T cells coexpressing IZUMO1 and TMEM95 suggested that IZUMO1 does bind to TMEM95 ( ). Further studies are required to verify whether or not IZUMO1 and TMEM95 interact.

Sperm SOF1

Another new molecular player important for male fertility identified by Noda and collaborators using CRISPR-Cas9–mediated gene knockout was a gene called 1700034O15Rik (also known as Llcfc1 ; ). The gene was aptly renamed SOF1 (sperm–oocyte fusion required 1). SOF1 is widely conserved in mammals and is highly expressed in the testis. SOF1 is predicted to be a 147-residue secreted protein with conserved LLLL and CFN(L/S)AS motifs. These motifs are observed in the DUF4717 family of proteins that have an unknown function but are exclusively found in eukaryotes. SOF1 reportedly undergoes posttranslational modifications during sperm maturation, and it was detected as a protein singlet in testicular germ cells but a doublet in acrosome-intact spermatozoa ( ). The sterility of Sof1 −/− male mice is likely due to defective membrane fusion ( ).
The morphology and motility of Sof1 −/− spermatozoa are similar to wild type; however, when used for in vitro fertilization with cumulus-intact oocytes, Sof1 −/− spermatozoa do not fuse or fertilize oocytes and accumulate in the PVS. This inability to fuse was also observed using zona-free oocytes, but sperm continued to bind to the oocyte membrane, suggesting a role in either sperm–egg fusion or in the control of sperm–egg adhesion properties. Expression levels and localization of IZUMO1 are not affected in Sof1 −/− spermatozoa before or after the acrosome reaction. Thus, sterility in Sof1 −/− male mice is not due to a disruption of IZUMO1.

Sperm FIMP

CRISPR-Cas9–mediated deletion of Fimp (also known in mice as 4930451I11Rik ) results in failure of sperm–egg fusion in mice ( ). Similar to SOF1, FIMP is also a small protein of 132 amino acids that is highly expressed in the testis. The expression of this testis-specific gene is first observed 20 d after birth. The protein is detected in two distinct isoforms: membrane anchored and secreted. Only the transmembrane form appears to be critical for sperm–oocyte fusion in mice. Fimp −/− mice have normal testicular and sperm morphologies, and Fimp −/− spermatozoa penetrate the ZP but fail to fuse with oocytes ( ). In an in vitro fertilization assay using zona-free oocytes, the ability of Fimp −/− sperm to fuse is severely reduced. IZUMO1 localization and expression levels in Fimp −/− mice are similar to the wild type. FIMP localizes to the equatorial segment membrane, but the FIMP-mCherry signal disappeared in 40% of the acrosome-reacted sperm ( ). In contrast to IZUMO1, it does not appear that FIMP is involved in the initial attachment step, as FIMP-expressing cells do not bind oocytes. The precise role of FIMP in sperm–egg fusion remains unclear.
Sperm DCST1/DCST2 As identified by gene disruption and complementation experiments, the evolutionarily conserved factors dendrocyte expressed seven transmembrane protein (DC-STAMP) domain-containing 1 and 2 (DCST1/DCST2) are required for gamete fusion ( ). Individual or double gene deletion results in male sterility with the same phenotype as that of Izumo1 −/− or Spaca6 −/− knockouts. Although their molecular mechanism of action is still unknown, DCST1 and DCST2 function might be intrinsically related to SPACA6. Surprisingly, while the rescued double transgenic males had normal fertility, SPACA6 was not detected. The protein stability of SPACA6 may be differently regulated by DCST1/DCST2 and IZUMO1 ( ). In 2014, Lorenzetti et al. characterized a mutant mouse line that had a deletion removing Spaca6 ( ). Male homozygous knockout mice were infertile with a phenotype that closely resembles that of Izumo1 -deficient mice. Subsequent studies by two other groups confirmed that Spaca6 deletion in male mice results in infertility, although mating behavior is normal and sperm are motile and morphologically normal ( ; ). Fertility could be restored by a transgene ( ). In a human zona-free in vitro fertilization assay, an anti-SPACA6 antibody reduced fertilization rates by threefold ( ). Recovery of oocytes from female mice that were mated with Spaca6 −/− male mice revealed that the spermatozoa were trapped in the PVS. This indicates that knockout spermatozoa migrate through the female genital tract to the oocyte and penetrate the ZP but fail to fuse with the oocyte membrane. When Spaca6 −/− sperm was injected into the cytoplasm of oocytes to bypass the membrane fusion step, fertilization was successful, and the fertilized eggs showed normal embryonic development, suggesting that SPACA6 does not play a critical role downstream of sperm–egg fusion. SPACA6 is primarily expressed in testis, with low levels of expression in the epididymis, seminal vesicle, and ovary ( ; ). 
Orthologues of SPACA6 have been annotated in bull, hamster, human, mouse, rat, and zebrafish ( ). In fresh spermatozoa, SPACA6 is not detected on the plasma membrane; rather, it is localized underneath the membrane of the sperm head. After the acrosomal reaction, SPACA6 relocates to the equatorial segment of the sperm head, with reduced levels detected in the midpiece, and completely diminishes from the neck region ( ). Immunofluorescence staining revealed that the localization of IZUMO1 is unaffected in Spaca6 −/− sperm before and after the acrosomal reaction ( ). To verify this result, Spaca6 −/− male mice were mated with female mice and oocytes were extracted and immunostained with an anti-IZUMO1 antibody revealing that IZUMO1 distribution in Spaca6 −/− spermatozoa was identical to that in wild-type spermatozoa ( ). This confirmed that SPACA6 does not affect IZUMO1 localization. Both IZUMO1 and SPACA6 belong to the IgSF and are expressed in sperm and localized to the equatorial segment upon the acrosomal reaction ( ). The phenotypes of knockout mutants are highly similar as well. The similarities also extend into their domain organization. Both proteins have an N-terminal signal peptide, followed by a helical domain, a single IgSF domain, a single N- linked glycosylation site, a monotopic transmembrane helix, and a short cytoplasmic tail ( ). Despite these similarities, the proteins are not redundant, as both cell-based and mouse studies show that one cannot compensate for the lack of the other ( ). Moreover, SPACA6 does not accumulate at the interface with IZUMO1, and SPACA6-expressing COS-7 or HEK293T cells do not bind to the surface of the oocyte ( ; ). No interaction was detected between SPACA6 and IZUMO1 by coimmunoprecipitation from testis extracts ( ). How SPACA6 interacts with other sperm and oocyte proteins to mediate sperm–egg adhesion and fusion remains to be determined. 
A genome-wide analysis designed to reveal genetic associations with infertility in bulls revealed the essential role of TMEM95 in fertility ( ). A nonsense mutation that introduces a premature stop codon in Tmem95 diminishes male fertility, although it does not significantly affect sperm morphology or motility ( ). TMEM95 is conserved in primary sequence among bull, hamster, mouse, rat, and humans ( ; ). In bulls, TMEM95 is expressed in spermatozoa and is localized on the acrosome, on the equatorial segment and on the connecting piece ( ). Bull spermatozoa with a knockout mutation in Tmem95 are unable to fuse with oocytes, suggesting that TMEM95 is required for sperm–oocyte fusion ( ). RT-PCR analysis revealed that in mice Tmem95 is expressed exclusively in testis. Expression begins on day 21 postpartum when spermiogenesis begins. Tmem95 −/− mice that carry a 1,919-bp deletion in the Tmem95 locus have normal mating behavior but males are infertile ( ). The Tmem95 −/− spermatozoa have normal morphology and motility and bind to oocytes; however, the mutant sperm have impaired ability to fuse with oocytes and accumulate in the PVS. Expression of a Tmem95 transgene in Tmem95 −/− male mice restored fertility ( ). The fertility of Tmem95 −/− female mice is unaffected ( ). Consistent with the study by Noda et al., Lamas-Toranzo et al. also found that Tmem95 −/− male mice were infertile ( ). In silico analysis suggested that TMEM95 shares organizational similarities with IZUMO1 ( ). Like IZUMO1, it is a type I single-pass transmembrane protein with a signal peptide at the N terminus, a helix-rich N-terminal region, and a transmembrane helix. TMEM95 has an additional leucine-rich cytoplasmic domain compared with IZUMO1. Examination of the localization of IZUMO1 in acrosome-reacted wild-type and Tmem95 −/− sperm revealed no difference in IZUMO1 translocation ( ). Moreover, TMEM95 disappears after the acrosomal reaction ( ). 
Since IZUMO1 relocates to the equatorial segment only after the acrosomal reaction, this suggests that TMEM95 and IZUMO1 function independently. Experiments using the AVEXIS platform showed that TMEM95 does not bind to JUNO or IZUMO1 ( ). In contrast, coimmunoprecipitation studies using HEK293T cells coexpressing IZUMO1 and TMEM95 suggested that IZUMO1 does bind to TMEM95 ( ). Further studies are required to verify whether or not IZUMO1 and TMEM95 interact. Another new molecular player important for male fertility identified by Noda and collaborators using CRISPR-Cas9–mediated gene knockout was a gene called 1700034O15Rik (also known as Llcfc1 ; ). The gene was aptly renamed SOF1 (sperm–oocyte fusion required 1). SOF1 is widely conserved in mammals and is highly expressed in the testis. SOF1 is predicted to be a 147-residue secreted protein with conserved LLLL and CFN(L/S)AS motifs. These motifs are observed in the DUF4717 family of proteins that have an unknown function but are exclusively found in eukaryotes. SOF1 reportedly undergoes posttranslational modifications during sperm maturation, and it was detected as a protein singlet in testicular germ cells but a doublet in acrosome-intact spermatozoa ( ). The sterility of Sof1 −/− male mice is likely due to defective membrane fusion ( ). The morphology and motility of Sof1 −/− spermatozoa are similar to wild type; however, when used for in vitro fertilization with cumulus-intact oocytes, Sof1 −/− spermatozoa do not fuse or fertilize oocytes and accumulate in the PVS. This inability to fuse was also observed using zona-free oocytes, but sperm continued to bind to the oocyte membrane, suggesting a role in either sperm–egg fusion or in the control of sperm–egg adhesion properties. Expression levels and localization of IZUMO1 are not affected in Sof1 −/− spermatozoa before or after the acrosome reaction. Thus, sterility in Sof1 −/− male mice is not due to a disruption of IZUMO1. 
CRISPR-Cas9–mediated deletion of Fimp (also known in mice as 4930451I11Rik ) results in failure of sperm–egg fusion in mice ( ). Similar to SOF1, FIMP is also a small protein of 132 amino acids that is highly expressed in the testis. The expression of this testis-specific gene is first observed 20 d after birth. The protein is detected in two distinct isoforms: membrane anchored and secreted. Only the transmembrane form appears to be critical for sperm–oocyte fusion in mice. Fimp −/− mice have normal testicular and sperm morphologies, and Fimp −/− spermatozoa penetrate the ZP but fail to fuse with oocytes ( ). In an in vitro fertilization assay using zona-free oocytes, the ability of Fimp −/− sperm to fuse is severely reduced. IZUMO1 localization and expression levels in Fimp −/− mice are similar to the wild type. FIMP localizes to the equatorial segment membrane, but the FIMP-mCherry signal disappeared in 40% of the acrosome-reacted sperm ( ). In contrast to IZUMO1, it does not appear that FIMP is involved in the initial attachment step, as FIMP-expressing cells do not bind oocytes. The precise role of FIMP in sperm–egg fusion remains unclear. As identified by gene disruption and complementation experiments, the evolutionarily conserved factors dendrocyte expressed seven transmembrane protein (DC-STAMP) domain-containing 1 and 2 (DCST1/DCST2) are required for gamete fusion ( ). Individual or double gene deletion results in male sterility with the same phenotype as that of Izumo1 −/− or Spaca6 −/− knockouts. Although their molecular mechanism of action is still unknown, DCST1 and DCST2 function might be intrinsically related to SPACA6. Surprisingly, while the rescued double transgenic males had normal fertility, SPACA6 was not detected. The protein stability of SPACA6 may be differently regulated by DCST1/DCST2 and IZUMO1 ( ). When the acrosome-reacted spermatozoon reaches the PVS, it is primed to interact and fuse with the egg.
The molecular mechanism of mammalian sperm–egg attachment and fusion requires a complex sequence of events that for the most part remain a mystery ( ). However, new insights into some of the steps have now been obtained through structural, biophysical, and genetic analyses of the proteins essential to the adhesion and fusion process.
Initial attachment
At the molecular level, the first step in the attachment of the sperm to the egg involves binding of IZUMO1, which is localized on the equatorial segment of acrosome-reacted sperm ( ), to its counterpart receptor JUNO on the oocyte membrane ( ). JUNO is monomeric when it binds to IZUMO1 ( ; ). Comparison of the crystal structures of IZUMO1 with and without JUNO suggests that there is a binding-induced conformational change in IZUMO1 whereby the 4HB domain moves ∼20° to adopt an upright conformation ( ). IZUMO1 binding to JUNO drives the accumulation and local membrane organization of CD9. The accumulation of CD9 at the sperm–egg interface causes the egg membrane to protrude toward the sperm membrane ( ).
IZUMO1 multimerization
After the initial IZUMO1–JUNO attachment, Inoue et al. suggested that the IZUMO1–JUNO complex undergoes a multimerization event that is critical for sperm–egg fusion ( ; ). Bimolecular fluorescence complementation and photon-counting histogram analyses were used on a cultured cell–oocyte system to reveal that IZUMO1 forms a multimer at the cell–oocyte interface, but not on the rest of the cell surface. After IZUMO1 oligomerization, JUNO is not detected at the cell surface and presumably is shed ( ). Inoue et al. suggested that the trigger for IZUMO1 multimerization involves a protein disulfide isomerase (PDI). Localization studies revealed the presence of PDIs on the sperm surface ( ; ). PDIs are responsible for proper folding of extracellular or membrane proteins during the maturation process in the endoplasmic reticulum.
Western blot and proteomic analysis detected at least four PDI members on the sperm surface: PDI, ERp57, ERp72, and P5. Interestingly, PDI inhibitors reduce sperm–egg fusion in vitro in a dose-dependent manner ( ), and a membrane-impermeable thiol-reactive reagent significantly reduces cell–oocyte binding ( ). To identify those PDIs that function in gamete fusion, sperm were preincubated with antibodies that specifically blocked each PDI member, and the ability of the spermatozoon to fuse with the oocyte was assessed ( ). This revealed that ERp57 is critical for gamete fusion. On the IZUMO1 ectodomain, 10 cysteines form five disulfide bonds. Four of the five disulfide bonds are located on the surface and are solvent accessible. The N-terminal helical domain of IZUMO1 was proposed to collapse or become buried at the oligomeric interface ( ). ERp57 and/or other PDIs may catalyze a thiol-disulfide exchange during this conformational rearrangement.
Fusogen recruitment
After IZUMO1 multimerization, the next step is thought to involve the recruitment of the bona fide human sperm–egg fusogen. Dimerized IZUMO1 was suggested to directly recruit a tightly binding, as-yet-unidentified oocyte receptor ( ; ). The identity of the gamete fusion complex remains unknown. SPACA6 was proposed to interact with IZUMO1 to mediate the binding of an oocyte receptor ( ). However, coimmunoprecipitation studies of testis extracts did not show an interaction between SPACA6 and IZUMO1, whereas coimmunoprecipitation analysis using HEK293T cells showed interactions between IZUMO1 and SPACA6, FIMP, TMEM95, and SOF1 ( ). Expression of all five proteins on HEK293T cells did not lead to fusion with zona-free oocytes, suggesting that the recruitment of a yet-to-be-discovered fusogen is required ( ). Intriguingly, IZUMO1 and SPACA6 both contain an IgSF domain. While IgSF domains are known to facilitate protein–protein interactions, their role and importance in sperm–egg fusion are unknown (see text box).
A complete understanding of the interplay of these proteins and the composition of the human gamete fusion machinery will require additional biochemical and functional experimentation.
Perspectives: Role of the IgSF in sperm–egg fusion
Many of the essential protein players involved in sperm–egg attachment and fusion contain IgSF domains ( ). The IgSFs belong to a large superfamily of proteins that have diverged in sequence and function; ∼500 nonimmunological proteins (nonantibody and non–T cell receptor) with IgSF domains are encoded in the human genome. The IgSF domain is ∼110 residues in size and is defined by two β sheets packed face to face ( ; ). The IgSF fold displays a common core composed of four anti-parallel β strands sandwiched by a second set of three to five β strands. Based on the number of strands and relative locations, several distinct subtypes have been defined. Most common are the variable (V) and constant (C) Ig domains, while a third type (I) is an intermediate structure between the V and C types. Human IZUMO1, SPACA6, EWI-2, and EWI-F (CD9P-1) all contain at least one IgSF domain. The importance of IgSF domains in sperm–egg fusion is not limited to humans. In lower eukaryotes, the HAP2/GCS1 sperm–egg fusogen contains a C-type IgSF domain. Moreover, Caenorhabditis elegans encodes a transmembrane protein with an IgSF domain, termed SPE45, that is required for gamete fusion ( ; ). Interestingly, the IgSF domains of SPE45 and IZUMO1 share a common function, despite only 8.7% sequence identity ( , ). This was demonstrated by the finding that a chimeric SPE45 that contains the murine IZUMO1 IgSF domain had ∼77% of the activity of wild-type SPE45 ( ). The IgSF domain may act as a scaffold to recruit binding partners in cis and/or in trans. Various organisms have IgSF proteins that act during gamete interactions, indicating the widespread utility of IgSF domains in fertilization. The importance of the IgSF domains in human IZUMO1 and C.
elegans SPE45 raises the question of whether there are special features unique to these IgSF domains. The crystal structure of human IZUMO1 revealed a novel Ig domain fold with a 2+5 β-sheet arrangement. The IZUMO1 IgSF domain is the only known member to adopt a 2+5 arrangement, thus suggesting a new IgSF subtype. It is not clear whether the IgSF domain in SPE45 adopts a similar structure to the IZUMO1 IgSF domain. The precise roles played by these IgSF domains in IZUMO1, SPACA6, EWI-2, and SPE45 are currently unknown. However, the IgSF domains in other cell surface proteins often form homo- or heterodimers. Structures of IgSF oligomers reveal that all regions of the domain surface can be used for interaction with other molecules. Noncovalent association of IgSF-type domains usually occurs through the exposed faces of the β sheets. Continued studies will undoubtedly reveal important functions of IgSF proteins in gamete fusion.
Fusion pore formation
The merger of the egg and sperm membranes is an energetically unfavorable process and must require modulation of the membrane architecture in order to form a fusion pore ( ). The formation of a fusion pore typically proceeds through one of two mechanisms, either via a hemifusion intermediate or via direct fusion ( ). In the case of hemifusion, the fusion of the two membranes occurs through the sequential mergers of each pair of bilayer leaflets. First, the outer membrane leaflets contact and mix to form the hemifusion stalk intermediate. This is followed by mixing of the inner leaflets to form the fusion pore. Enveloped viral-cell fusion proceeds through a hemifusion intermediate that is catalyzed by a viral fusion glycoprotein as previously discussed ( ; ; ; ). The viral fusogens all contain distinctive hydrophobic fusion peptides that are inserted into the host target membrane when triggered.
In direct fusion, proteins on both membranes arrange into complexes at the site of fusion and bind in trans to bring the two membranes together. This forms a continuous connection between the two protein-lined pores to allow for content mixing. In yeast vacuolar fusion, two proteolipid hexamers formed in trans by the V0 subunit of vacuolar H+-ATPase establish a bridging channel/pore between the two membranes ( ; ). The pore is subsequently opened through a Ca2+-triggered conformational change that expands the proteolipid hexameric complex. It is thought that sperm–egg fusion also proceeds via a membrane hemifusion intermediate, similar to viral-cell fusion. This is at least true in lower eukaryotic cells, in which the sperm–egg fusogen HAP2/GCS1 has a similar overall structure to the class II viral fusogens ( ), such as tick-borne encephalitis virus E glycoprotein. No evidence for HAP2/GCS1 orthologues has been found in vertebrates or mammals; thus, in an early vertebrate ancestor, a new fusogen likely replaced HAP2/GCS1 ( ). The structures of CD9, IZUMO1, and JUNO lack characteristics common to viral fusogens such as the prototypical hydrophobic fusion peptide ( ). Moreover, no readily identifiable fusion peptides were detected upon sequence analysis of SOF1, DCST1/DCST2, TMEM95, FIMP, or SPACA6. Furthermore, cell fusion assay experiments show that the sperm proteins alone or together are not able to trigger cell–cell fusion ( ; ; ; ). Thus, additional factors remain to be identified that are essential for fusion pore formation.
While significant advances have been made to fully understand the molecular mechanism of fertilization, many questions are still outstanding. The identification of proteins involved in the sperm–egg fusion process remains the holy grail in reproductive biology.
Understanding the interplay of all the partners involved has the potential to impact multiple areas of biology. Identifying the full complement of proteins involved in sperm–egg attachment and fusion will allow the mapping of genotype–phenotype correlations and improve diagnostic tests for people suffering from infertility. Understanding the mechanisms of sperm–egg fusion will also reveal ways to improve assisted reproductive technologies for humans and animals. High and predictable fertility rates for cows, pigs, chickens, and sheep are essential for efficient food animal production. Finally, although generally safe and effective, current hormone-based contraceptives may lead to adverse side effects that discourage many from long-term use. The safety and acceptability of contraceptives are particularly important for women, since they bear the greatest burden of contraceptive side effects. It is important to innovate and develop new alternative contraceptives that better meet the reproductive needs and desires of women and couples. Molecules that disrupt sperm–egg protein–protein interactions by binding to the sperm or egg protein side of the axis should result in a potent contraceptive. These reasons underscore why understanding the mechanisms of sexual fertilization is one of the most crucial biological questions.
Towards Cohesive National Surveys in Pakistan: A Comparative Study of DHS and PSLM
0defcbcd-1184-4fbf-abe2-77dde4d89012
11936229
Community Health Services[mh]
Effective evidence-based policymaking depends on the availability of high-quality data, which can illuminate social contexts, identify appropriate populations to target with interventions, and enable timely evaluation of policies and programs. Conversely, unreliable data, marked by inaccuracies, incompleteness, or the absence of essential administrative records such as vital statistics and public service outputs, limits the understanding of social issues and the impact of interventions . Unreliable data undermines policymakers’ ability to gauge the effectiveness of their initiatives, potentially leading to inefficient or futile public sector investments. In many low- and middle-income countries (LMICs), where program data are unreliable, household surveys are needed to bridge gaps in program data [ – ], to provide insights into social dynamics, and to track policy and program impacts over time [ – ]. As a developing country, Pakistan relies on a series of surveys that inform various domains, from health and social issues to economics and more ( https://www.pbs.gov.pk/ ). Of these, the leading ones are the Pakistan Demographic and Health Survey (PDHS) and the Pakistan Social and Living Standards Measurement (PSLM). The PDHS aligns with international Demographic and Health Surveys (DHS) indicators ( https://dhsprogram.com ), while the PSLM is based on the World Bank-supported Living Standards Measurement Study ( https://www.worldbank.org/en/programs/lsms/overview ). The PDHS includes separate questionnaires for households, ever-married women, ever-married men, biomarkers, and the community. It provides extensive detail on health indicators and is conducted sporadically at the provincial level, with intervals of at least five years . It is sampled at the provincial level and includes around 16,000 married women of reproductive age and 4,000 married men.
On the other hand, the PSLM covers a wider range of indicators beyond health and is conducted concomitantly with the Household Integrated Economic Survey (HIES), which captures income and expenditures from the same households. The PSLM alternates between provincial- and district-level surveys in biennial cycles . The provincial PSLM includes around 25,000 respondents, who are asked in depth about health and social indicators. The sample for the district version is around 80,000, but with fewer questions per domain and fewer domains overall. Demographic and Health Surveys and Living Standards Measurement surveys are also conducted in many LMICs. The similarities in sampling and questions between the two surveys make them complementary for various health, social, and economic indicators. However, the extent of this complementarity and the key differences have never been examined. Another potential option would be to use the relative strengths of these surveys to better inform about social indicators at the national and provincial levels and allow modeling to infer some indicators at the district level. As a first step, we compare the PDHS and PSLM provincial surveys in Pakistan, using the immunization, family planning, and women’s decision-making modules as examples. This comparison could serve as a model for these surveys conducted in other LMICs.
2.1. Structure of the surveys
PDHS and PSLM (provincial and district) cover a diverse range of modules in their surveys ( ). PDHS primarily focuses on health and demographic indicators, while PSLM covers a broader range of socio-economic indicators, including health, demographics, expenses, income, education, consumption, and wealth . The PDHS and the provincial version of the PSLM ask more in-depth questions of households (11,856 and 24,809 households, respectively), while the PSLM district version asks fewer questions of a much larger sample of households (176,790).
All three surveys cover basic demographic, education, and employment questions, along with child immunization, diarrhea among children, household characteristics (for estimating wealth index), pre and post-natal care, basic health service questions, and basic knowledge about tuberculosis (TB), Hepatitis B & C. PDHS uniquely captures child nutrition, weight, and height in their biomarker questionnaire, along with questions on domestic violence and gender roles. In contrast, PSLM surveys uniquely capture comprehensive income data, information on communication technology (ICT), and the food insecurity experience scale (FIES). The provincial PSLM survey covers household expenditure and income in depth to create a proper balance sheet, while the district version only includes a short section on income. PDHS (2017–18) survey covers four provinces, two regions (AJK and GB), (erstwhile) FATA, and Islamabad. Meanwhile, the PSLM provincial-level survey includes four provinces of Pakistan, with FATA being part of KP. To standardize the analysis, we restricted the regions to four provinces (Punjab, Sindh, KP, and Balochistan) and included FATA in KP in PDHS to make it consistent with PSLM. Surveys were compared in three stages: (1) briefly comparing modules covered by both surveys, (2) reporting questions addressed on family planning and child immunization in both surveys, and (3) comparing estimates of key indicators for family planning and child immunization. This study utilized modules on ‘Reproduction’, ‘Contraception’, and ‘Child Immunization’ from PDHS and ‘Family Planning’, ‘Women in Decision Making’, and ‘Immunization’ from PSLM. However, the PSLM district-level survey does not include the ‘Family Planning’ section; hence, we focused on the ‘Immunization’ section for differential analysis. The respondents for both surveys were Married Women of Reproductive Age (MWRAs) aged 15–49 years. 
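The regional standardization described above (keeping the four provinces and folding FATA into KP for PDHS) can be sketched as a simple recode; the labels and records below are hypothetical stand-ins, not the actual survey variable codes.

```python
# Hypothetical sketch of harmonizing region labels across the two surveys.
# Labels are illustrative, not the real PDHS/PSLM region codes.
PROVINCES = {"Punjab", "Sindh", "KP", "Balochistan"}

def harmonize_region(region):
    """Map a raw PDHS region label onto the common four-province scheme.

    FATA is folded into KP (as done for PDHS in the text); regions outside
    the four provinces (e.g., AJK, GB, Islamabad) are dropped (None).
    """
    if region == "FATA":
        return "KP"
    return region if region in PROVINCES else None

records = ["Punjab", "FATA", "AJK", "Sindh", "Islamabad", "KP"]
kept = [r for r in (harmonize_region(x) for x in records) if r is not None]
print(kept)  # ['Punjab', 'KP', 'Sindh', 'KP']
```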
The sampling strategy, provided by the Pakistan Bureau of Statistics, used the same methodology and enumeration blocks. illustrates the rudiments of the two surveys based on the two modules. Using the described survey modules, two new variables, ‘knowledge of family planning’ and ‘contraceptive method used,’ were created because the questions related to these variables were generic rather than specific to the contraceptive method used. There were minor differences between PDHS and PSLM in the categories of ‘source of the method used’ and ‘method type of family planning’. The categories were combined to address this difference in sources ( ). A similar exercise was performed for the contraceptive methods. The analysis gauged the estimates for current users of modern and traditional methods. The children’s sample in the immunization module was restricted to ages 12–35 months. Each vaccine, namely BCG (for childhood TB), Pentavalent (Diphtheria, Tetanus, Pertussis, Hib pneumonia and meningitis, Hepatitis B), Polio (Polio-0, Polio-1, Polio-2, Polio-3, and the Inactivated Polio Vaccine (IPV)), and Measles-1, was assessed at 12–23 months, except for Measles-2 (assessed at 24–35 months). PDHS records immunization data in two categories, ‘yes, on record’ and ‘yes, on recall’, for all vaccines, whereas PSLM includes an additional category, ‘yes, through campaign’, for the Polio and Measles vaccines. The primary objective of the analysis was to obtain the percentage of children immunized in both surveys without deviating from the actual number of vaccinated children; therefore, we did not drop or exclude the ‘yes, through campaign’ category. 2.2. Method of analysis For the comparison analysis between PDHS and PSLM, we calculated the proportions of indicators using area weights in each survey to reduce bias in estimates. These weights also incorporate auxiliary data on known population characteristics, thereby reducing sampling errors . 
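The method of analysis described in this section (area-weighted proportions compared across surveys with a two-tailed proportions test) can be sketched as follows. This is a minimal illustration, not the authors' actual code, and the example inputs are hypothetical figures chosen to resemble the sample sizes reported in the text:

```python
import math
from statistics import NormalDist


def weighted_proportion(flags, weights):
    """Area-weight-adjusted proportion of a binary (0/1) indicator."""
    return sum(f * w for f, w in zip(flags, weights)) / sum(weights)


def two_proportion_ztest(p1, n1, p2, n2):
    """Two-tailed z-test of H0: p1 - p2 = 0, using a pooled estimate."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Hypothetical example: 33% of 11,856 PDHS respondents vs. 34% of
# 24,809 PSLM respondents currently using a family planning method.
z, p = two_proportion_ztest(0.33, 11_856, 0.34, 24_809)
```

With samples this large, even a one-percentage-point gap approaches conventional significance, which illustrates why the paper finds small differences to be statistically significant.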
We estimated a simple difference between the indicators, followed by a two-tailed proportions test . The null hypothesis stated that the difference between weight-adjusted estimates from PDHS and PSLM was equivalent to zero for each indicator of family planning and child immunization. The generic form of the test is given below: H0: p(PDHS) − p(PSLM) = 0; Hα: p(PDHS) − p(PSLM) ≠ 0. As we are using secondary data for our analysis, institutional review board (ethics committee) approval was not required. These anonymized datasets are available on online portals. The PDHS contains more detailed questions on family planning, including the brand names of contraceptives, side effects, sterilization details, timeline of the last method used, and advice by healthcare workers on method use. 
In contrast, PSLM includes questions about satisfaction levels with the current family planning method ( ). Both PDHS and PSLM provincial-level surveys feature similar, in-depth questions in the child immunization module, whereas the PSLM district-level survey only asks whether the child has been immunized with the recommended vaccines. The subsequent subsections present the results for family planning and child immunization from the PDHS and PSLM datasets. 3.1. Family planning There is high concordance between the results of the family planning modules from PDHS and provincial PSLM (the only version that asks about FP). For example, 99% of Married Women of Reproductive Age (MWRA) in PDHS and 98% in PSLM have heard of at least one family planning method, and 33% and 34% of MWRAs, respectively, were currently using a family planning method, with 26% and 29% using traditional methods. Similarly, PDHS shows that 46% use modern methods compared to 47% in the PSLM, with a similar method mix. Differences are < 4% for the ‘source of methods’ and < 6% for ‘who makes the decision to use FP’, though statistically significant due to large sample sizes ( ). The PDHS includes more family planning questions, about side effects, problems faced while using contraception, the number of times the respondent used a method to avoid or delay pregnancy, or ‘what year was the sterilization performed’. PSLM does not have those questions. 3.2. Child Immunization Differences between PDHS and PSLM provincial-level surveys are more pronounced for immunization ( ). For example, differences are < 1% for BCG, which is given within a week after birth and is most likely to be remembered. The differences expand with the increasing age of the child, as measured by the differences with each successive dose of the pentavalent and pneumococcal vaccines, starting from around 1% for the first dose to just over 9% for the third dose. 
The differences are higher for measles and over 18% for IPV, which is given sporadically. Polio vaccine recall is distributed non-systematically, as the vaccine is given monthly during campaigns, making it difficult to distinguish between regular and campaign doses. Overall, these differences lead to a greater divergence between PDHS and PSLM for immunization, at 24%, roughly an order of magnitude higher than the 2% seen for family planning. The divergence increases as the child grows older. Compared to the differences between the PDHS and provincial PSLM, differences between the provincial and district PSLM are minor. Most vaccine percentage differences are below 1%, without any statistically significant difference. However, there is a statistically significant drop in polio vaccine coverage in the district PSLM 2019–20. The difference between basic immunization and all age-appropriate vaccinations (both including and excluding polio vaccines) is higher and increases from the 2018–19 (provincial) to the 2019–20 (district) PSLM. Our analysis indicates that while the differences between rates in PSLM and PDHS have closed for family planning and early immunization, the differences in immunization rates diverge after the third month of the child’s age. While each survey serves a different niche, our comparison analysis allows the identification of data gaps that can help make these two distinct but complementary surveys more compatible and, in turn, more usable. 
There was a 20-percentage-point discrepancy between PDHS 2012 and the concurrent PSLM for contraceptive use. This issue was discussed with the survey team at the Pakistan Bureau of Statistics. Their internal review suggested that the discrepancy may have arisen because the PSLM interviewee could be anyone in the household, not necessarily the MWRA, as is the case with PDHS. Persons other than the MWRA in the household may give different responses, resulting in misleading results . This has since been rectified and is reflected in the narrowing of the gap in family planning indicators between the two surveys in 2017–18. The same is true for early-age immunization, but estimates begin to diverge after the third month of the child’s life, perhaps reflecting recall bias of the mother . Family planning is simpler for a woman to recall, since she is the primary actor in FP usage and use is effectively a one-off event, as only the last status before the survey is included. However, immunization is more complex in that several vaccines are administered, separately and concomitantly, over the first 24 months of a child’s life. This is further complicated if there is more than one child of vaccination age in the household. Additionally, routine polio doses and monthly supplemental campaigns are identical and indistinguishable for parents. This creates several data points for a mother to remember, adding to errors in recall . So many vaccinations may simply be too many for most mothers to remember. As we found, recall is fully concordant between PDHS and PSLM for BCG (given at birth) and the first dose of the Pentavalent vaccine (given at 6 weeks). Thereafter, divergence increases with the increasing age of the child and is highest for the polio vaccine, which continues to be given monthly until a child turns five years old . 
The quality and internal consistency of the PSLM surveys are validated by the relative similarity in the results of the provincial and district surveys conducted one year apart ( ). Global evidence suggests that having a written record as a memory aid helps improve recall [ – ]. The situation in Pakistan has markedly improved since 2006, when only 10% of households had an immunization card; in the PDHS 2017–18 survey, 63% of MWRAs had a vaccination card present at the time of the survey . Interestingly, only 24% of mothers reported having one in PSLM 2018–19, suggesting a need to emphasize asking for one during training for PSLM surveys . It would also be prudent to draw lessons from the in-depth questioning done for immunization in the PDHS and replicate it in the PSLM, as was done for family planning. Internationally, researchers compare Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS) to assess health indicators, often identifying discrepancies in postnatal care estimates. MICS provides a more detailed postnatal care module, distinguishing immediate and later care, whereas DHS uses a blended module that does not systematically differentiate these phases . Similarly, discrepancies in immunization estimates may arise because PDHS includes more detailed and specific immunization questions, potentially aiding respondent recall and improving data accuracy. An additional factor that may contribute to the difference between the vaccination rates in the two surveys is how the response categories for this question are defined in each survey. In PSLM, at both the provincial and district levels, one response category is “Yes, through the polio campaign”, while PDHS does not have this category. shows that removing the “Yes, through the polio campaign” option from the “all age-appropriate vaccinations” indicator narrows the difference from 23.5% to 20.2%. 
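The category adjustment described above can be sketched as a simple recode. The string labels follow the response categories quoted in the text, but the function itself is illustrative, not the surveys' actual coding scheme:

```python
def immunized(response, include_campaign=True):
    """Recode a survey response category into a binary immunization flag.

    Setting include_campaign=False reproduces the sensitivity check of
    dropping the campaign-only category from the coverage estimate.
    """
    yes = {"yes, on record", "yes, on recall"}
    if include_campaign:
        yes.add("yes, through the polio campaign")
    return int(response.strip().lower() in yes)
```

Applying such a recode with and without the campaign category is what allows the coverage gap between the two surveys to be decomposed, as in the 23.5% versus 20.2% comparison above.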
Thus, the adjustment may make only a minor contribution to the difference between the two surveys regarding immunization. Similarly, removing the ‘yes, through the campaign’ category slightly reduces the difference for the Polio and Measles indicators. The one-year difference in the timing of the surveys may also account for some of the observed differences in vaccine administration estimates. PDHS took place in 2017–18, while the PSLM provincial survey took place in 2018–19. For example, in response to a measles epidemic during 2017–2018, a nationwide Supplementary Immunization Activity (SIA) was conducted in October 2018, during which 37.1 million children aged under 5 were vaccinated. An independent post-SIA coverage survey estimated SIA coverage at 93.3% . The SIA may have also contributed to the difference between the Measles-Containing Vaccine 1 (MCV1) coverage rates, as it occurred between the two surveys. PDHS 2017–18 reported 73.5% MCV1 coverage, whereas PSLM 2018–19 reported 79.1% coverage. Similarly, there is a difference in Measles-Containing Vaccine 2 (MCV2) coverage, with PDHS 2017–18 reporting 66.75% coverage and PSLM 2018–19 reporting 79.7% coverage. Our paper describes a high-level analysis with some suggestions for using available surveys to inform questions of national policy. We show complementarities and similarities in the data collected and in quality, as well as some critical differences. The PSLM is conducted biennially, has a larger sample size, and includes comprehensive economic details, including income, expenditure, transactions, and assets. It is conducted regularly by the Pakistan Bureau of Statistics with government funds. The PDHS is more in-depth about health, with fewer other subjects and a smaller sample size, and is conducted ad hoc by the federal health ministry (the Ministry of National Health Services, Regulations and Coordination, through its National Institute of Population Studies), with mostly donor funds. 
There is a need to rationalize how the two surveys are conducted and how their data are used. A key ask has always been the ability to inform about indicators at the district level. Only the district PSLM survey does so, but only for a few indicators. It may be possible to add some indicators to it and then use data from either the PDHS or the provincial PSLM, together with estimation techniques, to impute indicators such as family planning use, which is not measured by the district PSLM survey. An alternative may be to incorporate all PDHS questions into the provincial version of the PSLM, perhaps at a lower frequency, for example around every 6 years. Understanding these similarities and limitations allows policy makers to develop confidence in these surveys and incorporate their evidence into their decisions. S1 File Human subjects research checklist. (DOCX) S1 Data The table below shows the comparison of questions asked in both the PDHS and PSLM surveys. (DOCX)
Computed tomography-based navigation versus accelerometer-based portable navigation in total hip arthroplasty for dysplastic hip osteoarthritis
b2a694ff-ce4a-45e5-aac0-a0b84899575a
11872758
Surgical Procedures, Operative[mh]
Approximately 80% of hip osteoarthritis (HOA) cases in Japan are due to acetabular dysplasia . In total hip arthroplasty (THA) for dysplastic hip osteoarthritis (DHOA), accurate cup placement is more challenging than in primary osteoarthritis due to complex acetabular morphology . This accuracy is crucial to avoid complications such as dislocation, impingement, and polyethylene wear . Dislocation is a leading cause of revision and has become a significant challenge for hip surgeons as implant longevity improves. Various computer-assisted surgery (CAS) systems have been developed to enhance cup placement accuracy . Computed tomography (CT)-based navigation is one such CAS system. Currently, surface matching is the primary registration method, allowing confirmation of accuracy by checking landmark positions post-registration. Previous studies indicate that CT-based navigational THA (CTN-THA) outperforms conventional THA in cup placement position and angle . However, CTN-THA often results in longer operative times, which can increase blood loss, readmissions, reoperations, and wound infection rates . Portable navigation (PN) has been widely used in recent years owing to its low cost and ease of use. Currently available PNs can be registered using two main methods: lateral-position registration, which is based on the direction of the body axis, and supine-position registration, which uses the functional pelvic plane (FPP) as a reference, based on both anterior superior iliac spines and the gravity axis . Registration in the lateral position is generally considered inaccurate because it is affected by the position of the pelvis at the time of lateral fixation . The ‘flip technique’ has been reported as a more accurate method of placing the cup in the lateral position in THA, in which registration is performed in the supine position and the patient is then repositioned to the lateral position for surgery . 
However, there are no reports comparing the accuracy of cup placement between PN-THA using the ‘flip technique’ and CTN-THA. The study aims to address the following clinical questions: (1) Is the cup placement accuracy of PN-THA using the ‘flip technique’ for DHOA equivalent to that of CTN-THA? (2) Is the cup placement position of PN-THA using the ‘flip technique’ for DHOA equivalent to that of CTN-THA? (3) Does PN-THA shorten the operative time compared with CTN-THA? Patients The study was approved by our institution’s ethics committee and performed in line with the Declaration of Helsinki. All patients provided informed consent for participation and publication. Participants underwent THA with CTN or PN for DHOA (Crowe types I and II) at our institution from April 2020 to April 2023. This study included 45 patients in the CTN-THA group and 69 in the PN-THA group. Ten patients with missing data were excluded, five from each group, resulting in 104 patients. Propensity score matching adjusted for differences in patient backgrounds. Scores were calculated using logistic regression with age, sex, BMI, and Crowe classification as variables. The scores were matched 1:1, resulting in 32 cases for each group (Fig. ). After matching, no significant differences were noted in age, sex, BMI, or Crowe classification (Table ). Surgical procedure Preoperative planning utilized 3D templating software (ZedHip, Lexi, Tokyo, Japan). ZedHip is a 3D preoperative templating software that uses CT images. It allows for the creation of multi-planar reconstruction images based on CT data and enables 3D templating of the stem and cup. The stem that best fit the femoral shape was selected on the 3D template. Cups from the same manufacturer as the stems were positioned at the center of the anatomical hip. If the cup Center–Edge angle was greater than 0°, THA with a cementless cup was planned; if less than 0°, reconstruction with a cemented cup and bulk bone graft was planned. 
The navigation system was chosen from the same manufacturer as the implant. The proportion of cemented stems in each group did not differ significantly (Table ). All procedures were performed using the posterior approach in the lateral decubitus position by experienced hip surgeons under the supervision of a single surgeon. The target angle for cup placement was determined based on the FPP in the supine position, with a target of 40° for inclination and 20° for anteversion according to Murray’s definition . The use of screws for cup fixation was at the surgeon’s discretion based on intraoperative findings. Postoperatively, full weight bearing was allowed, and gait training was conducted. One week after surgery, CT confirmed the postoperative placement angle. Navigation system CT-based navigation CT-based hip navigation system (Stryker, Mahwah, New Jersey, USA) was used for CTN. The pelvis was segmented, and coordinates were defined using several reference points. Before skin incision, two 4.0 mm diameter pins were inserted percutaneously into the iliac crest to which a pelvic tracker was attached (Fig. A). Intraoperatively, point-pair matching was initially attempted by registering the four representative points determined preoperatively. Surface matching was performed by registering > 30 points on the pelvic surface (Fig. B). The accuracy of the registration was verified by palpating representative landmarks using a pointer. Intraoperative navigation was utilized to verify the intended position and angle of the cup during reaming and cup placement (Fig. C, ) . Portable navigation Naviswiss (Naviswiss AG, Brugg, Switzerland) is a system that uses an accelerometer with a small tag and navigation unit for PN. The navigation unit features an infrared stereo camera to measure the position and orientation of the tag, along with an inertial measurement unit with an accelerometer and gyroscope for spatial orientation. 
The ‘flip technique’ can be used with this system for THA in a lateral decubitus position. Registration is performed with the patient supine, followed by repositioning to the lateral position, keeping the pins and tags clean. Preoperatively, two 3.0 mm diameter pins are inserted percutaneously into the iliac crest, and a small tag (P-tag) is attached. The anterior superior iliac spines (ASISs) are palpated using a caliper with another small tag (M-tag) (Fig. E). Both tags are captured by an infrared camera to define the FPP based on the ASISs and the weight axis direction. During reaming and cup placement, the P-tag is attached to the pelvis (Fig. F), and the M-tag to the reamer or cup holder (Fig. G). A stereo camera reads these tags to verify the cup placement angle (Fig. H) . Postoperative evaluations Postoperative evaluation used 3D templating software (ZedHip). After importing CT images, the cup placement angle and position were measured by superimposing a template of the same size onto the actual cup . The angle of cup placement was assessed for radiographic inclination (RI) and radiographic anteversion (RA) with reference to the FPP in the supine position. The accuracy error was calculated as the difference between the target angles (RI 40° and RA 20°) and the actual angle. Similarly, the navigation error was determined as the difference between the final navigation display value and the actual placement angle. Both values are expressed as absolute errors. To assess cup placement, a three-dimensional coordinate system was established with reference to the anterior pelvic plane (APP). The origin was defined as the midpoint between the ASISs. The X-axis represents a line through the bilateral ASISs. The Y-axis is parallel to the APP and passes through the origin. The Z-axis is perpendicular to the APP through the origin (Fig. ). The cup’s center of rotation was measured before and after surgery using this coordinate system. 
The X-axis error indicates internal/external direction error, the Y-axis error indicates vertical direction error, and the Z-axis error indicates anteroposterior direction error. The cup center position recorded in ZedHip at preoperative planning was measured using the coordinate system (Fig. ). Postoperative CT data were imported into ZedHip to measure the actual cup position. A template of the same size was superimposed on the actual placement position to evaluate the cup position along the X-, Y-, and Z-axes. Absolute errors between the actual cup position and the preoperative planned position on the X-, Y-, and Z-axes were evaluated . Surgical outcomes We recorded the operative time and intraoperative and postoperative complications (dislocation, intraoperative fracture, surgical site infection, and nerve palsy). Additionally, clinical outcomes were evaluated using the Japanese Orthopedic Association (JOA) score at both preoperative and 1-year postoperative time points. Statistical Analysis Based on previous reports, if a 3° navigation error was considered significant, 28 cases in each group would be required for a power of 0.8, with a significant difference of 0.05 . Statistical analysis was conducted for each item using the t -test for continuous variables and Fisher’s exact probability test for nominal variables. P < 0.05 was considered significant. EZR version 1.65 (Saitama Medical Center, Jichi Medical University, Saitama, Japan) was used for statistical analysis. The study was approved by our institution’s ethics committee and performed in line with the Declaration of Helsinki. All patients provided informed consent for participation and publication. Participants underwent THA with CTN or PN for DHOA (Crowe types I and II) at our institution from April 2020 to April 2023. This study included 45 patients in the CTN-THA group and 69 in the PN-THA group. Ten patients with missing data were excluded, five from each group, resulting in 104 patients. 
Propensity score matching adjusted for differences in patient backgrounds. Scores were calculated using logistic regression with age, sex, BMI, and Crowe classification as variables. The scores were matched 1:1, resulting in 32 cases for each group (Fig. ). After matching, no significant differences were noted in age, sex, BMI, or Crowe classification (Table ). Preoperative planning utilized 3D templating software (ZedHip, Lexi, Tokyo, Japan). ZedHip is a 3D preoperative templating software that uses CT images. It allows for the creation of multi-planar reconstruction images based on CT data and enables 3D templating of the stem and cup. The stem that best fit the femoral shape was selected on the 3D template. Cups from the same manufacturer as the stems were positioned at the center of the anatomical hip. If the cup Center–Edge angle was greater than 0°, THA with a cementless cup was planned; if less than 0°, reconstruction with a cemented cup and bulk bone graft was planned. The navigation system was chosen from the same manufacturer as the implant. The proportion of cemented stems in each group did not differ significantly (Table ). All procedures were performed using the posterior approach in the lateral decubitus position by experienced hip surgeons under the supervision of a single surgeon. The target angle for cup placement was determined based on the FPP in the supine position, with a target of 40° for inclination and 20° for anteversion according to Murray’s definition . The use of screws for cup fixation was at the surgeon’s discretion based on intraoperative findings. Postoperatively, full weight bearing was allowed, and gait training was conducted. One week after surgery, CT confirmed the postoperative placement angle. CT-based navigation CT-based hip navigation system (Stryker, Mahwah, New Jersey, USA) was used for CTN. The pelvis was segmented, and coordinates were defined using several reference points. 
Before skin incision, two 4.0 mm diameter pins were inserted percutaneously into the iliac crest to which a pelvic tracker was attached (Fig. A). Intraoperatively, point-pair matching was initially attempted by registering the four representative points determined preoperatively. Surface matching was performed by registering > 30 points on the pelvic surface (Fig. B). The accuracy of the registration was verified by palpating representative landmarks using a pointer. Intraoperative navigation was utilized to verify the intended position and angle of the cup during reaming and cup placement (Fig. C, ) . Portable navigation Naviswiss (Naviswiss AG, Brugg, Switzerland) is a system that uses an accelerometer with a small tag and navigation unit for PN. The navigation unit features an infrared stereo camera to measure the position and orientation of the tag, along with an inertial measurement unit with an accelerometer and gyroscope for spatial orientation. The ‘flip technique’ can be used with this system for THA in a lateral decubitus position. Registration is performed with the patient supine, followed by repositioning to the lateral position, keeping the pins and tags clean. Preoperatively, two 3.0 mm diameter pins are inserted percutaneously into the iliac crest, and a small tag (P-tag) is attached. The anterior superior iliac spines (ASISs) are palpated using a caliper with another small tag (M-tag) (Fig. E). Both tags are captured by an infrared camera to define the FPP based on the ASISs and the weight axis direction. During reaming and cup placement, the P-tag is attached to the pelvis (Fig. F), and the M-tag to the reamer or cup holder (Fig. G). A stereo camera reads these tags to verify the cup placement angle (Fig. H) .
Postoperative evaluation used 3D templating software (ZedHip). After importing CT images, the cup placement angle and position were measured by superimposing a template of the same size onto the actual cup .
The angle of cup placement was assessed for radiographic inclination (RI) and radiographic anteversion (RA) with reference to the FPP in the supine position. The accuracy error was calculated as the difference between the target angles (RI 40° and RA 20°) and the actual angle. Similarly, the navigation error was determined as the difference between the final navigation display value and the actual placement angle. Both values are expressed as absolute errors. To assess cup placement, a three-dimensional coordinate system was established with reference to the anterior pelvic plane (APP). The origin was defined as the midpoint between the ASISs. The X-axis represents a line through the bilateral ASISs. The Y-axis is parallel to the APP and passes through the origin. The Z-axis is perpendicular to the APP through the origin (Fig. ). The cup’s center of rotation was measured before and after surgery using this coordinate system.
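The accuracy, navigation, and position errors described above are all simple absolute differences. A minimal sketch, with function names and example values of my own invention:

```python
def absolute_error(measured, reference):
    """Absolute error between a measured and a reference value (deg or mm)."""
    return abs(measured - reference)

def position_error(actual_xyz, planned_xyz):
    """Per-axis absolute cup-centre error along the X-, Y-, and Z-axes (mm)."""
    return tuple(abs(a - p) for a, p in zip(actual_xyz, planned_xyz))

# Accuracy error: actual angle vs. target (RI 40 deg, RA 20 deg).
print(absolute_error(39.8, 40.0))
# Navigation error: final navigation display value vs. actual angle.
print(absolute_error(41.5, 39.8))
# Cup-centre error in the APP coordinate system (hypothetical coordinates).
print(position_error((31.2, -5.0, 48.1), (30.0, -3.5, 50.0)))
```

Because every error is taken as an absolute value, the reported means cannot distinguish systematic over- from under-correction; they measure scatter around the plan.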
No significant differences were found in cup size, number of screws, or preoperative JOA scores between the groups (Table ). For the CTN-THA group, cup placement angles were RI 39.8 ± 3.4° and RA 17.8 ± 3.5°; for the PN-THA group, RI 40.3 ± 3.1° and RA 18.3 ± 3.0°. The accuracy error for CTN-THA was 2.8 ± 2.0° in inclination and 3.4 ± 2.1° in anteversion; for PN-THA, it was 2.5 ± 1.8° in inclination and 2.6 ± 2.2° in anteversion. Navigation error for CTN-THA was 2.2 ± 2.0° in inclination and 2.1 ± 1.6° in anteversion, while for PN-THA, it was 2.6 ± 2.2° in inclination and 3.8 ± 3.3° in anteversion. The CTN-THA group had significantly smaller navigation errors than the PN-THA group ( p < 0.01). The number of cases with navigation errors greater than 5° and 10° did not differ between groups (Fig. ). The error between planned and actual cup placement was 2.85 ± 1.79 mm on the X-axis, 2.47 ± 2.16 mm on the Y-axis, and 2.13 ± 2.17 mm on the Z-axis for CTN-THA. In the PN-THA group, the errors were 4.08 ± 3.23 mm on the X-axis, 3.62 ± 2.68 mm on the Y-axis, and 4.27 ± 3.02 mm on the Z-axis, with significantly smaller Z-axis (anteroposterior direction) errors in the CTN-THA group ( p < 0.01; Table ). Operative time was significantly longer in the CTN-THA group (115 ± 41 min) compared to the PN-THA group (87 ± 19 min; p < 0.01). Intraoperative blood loss was similar (CTN-THA: 458 ± 276 mL, PN-THA: 384 ± 249 mL).
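As an illustration of the group comparison, the anteversion navigation errors reported above (CTN-THA 2.1 ± 1.6°, PN-THA 3.8 ± 3.3°, 32 hips each) can be compared from summary statistics alone. This is a Student's t-test sketch from the published means and SDs, not the authors' actual computation:

```python
from scipy.stats import ttest_ind_from_stats

# Anteversion navigation error: mean, SD, n per group, taken from the text.
t, p = ttest_ind_from_stats(mean1=2.1, std1=1.6, nobs1=32,
                            mean2=3.8, std2=3.3, nobs2=32)
print(round(t, 2), p < 0.05)
```

The statistic comes out around -2.6 with p below 0.05, consistent with the direction of the reported significance; the exact p depends on whether equal variances are assumed, which the paper does not specify.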
No significant differences were found in complications or JOA scores one year post-surgery (Table ). This study compared the accuracy of cup placement for DHOA between THA using PN with the ‘flip technique’ and THA using CTN. The results indicated that CTN-THA had a smaller navigation error in anteversion than PN-THA. In addition, CTN-THA showed smaller anteroposterior errors in cup positioning than PN-THA. The operative time was significantly shorter for PN-THA than for CTN-THA. Generally, THA for DHOA is more challenging than for primary HOA because of problems such as acetabular and femoral deformity, femoral head subluxation, and severe leg length differences. Specifically, placing the cup at the correct angle is difficult because of inadequate cup coverage due to a lack of bone stock and the presence of a double floor compared with primary HOA . CTN-THA has previously been reported to have good accuracy in cases with such complex hip morphology and in obese patients . On the other hand, it has been reported that the cup angle becomes inaccurate due to pelvic tilt during registration in the lateral position in PN-THA . To achieve more accurate cup placement in the lateral position in PN-THA, a registration method using the ‘flip technique’ was developed . However, conventional imageless navigation with registration by palpation of both ASISs and the pubic symphysis has been shown to be less accurate in obese patients due to subcutaneous fat . In our comparison, CTN-THA showed superior accuracy to PN-THA for both the angle and position of the cup. The difference in accuracy between the two navigation systems can be attributed to the registration method. CTN performs registration by directly tapping the bone, and the accuracy of the registration can be confirmed by palpating representative landmarks. On the other hand, as with conventional imageless navigation, the thickness of subcutaneous fat may have affected the accuracy of registration for PN.
Compared to primary HOA, a greater proportion of DHOA cases involve anatomical abnormalities of the acetabulum. Placement of the cup in the anatomical hip center is difficult because the location of the original acetabulum is hard to identify accurately intraoperatively . In conventional THA, the difference between preoperative planning and actual placement of the cup using a three-dimensional coordinate system is reported to be approximately 3–4 mm . CTN-THA is known to achieve a more accurate cup position than conventional THA, with an error of approximately 1–2 mm . However, there are no reports on cup placement position in PN-THA. In our results, the error of PN-THA was larger than that of CTN-THA in the anteroposterior direction. While PN-THA can provide intraoperative information on cup placement angles, it cannot provide information on cup placement position. In contrast, CTN-THA provides information on the positions of the reamer and cup. These differences suggest that the accuracy of cup placement position with CTN-THA is superior to that of PN-THA. Studies have shown that CTN-THA may prolong the operative time compared with conventional THA . However, reports on the impact of PN on operative time are conflicting . Our results showed that PN-THA required a significantly shorter operative time than CTN-THA. If the ‘flip technique’ is used, registration should be performed in the supine position before the skin incision . Our results may have been affected by the fact that registration was performed before the start of surgery, as well as by the simplicity of PN. PN has an advantage in terms of operative time, because longer operative times can lead to increased blood loss and risk of infection . This study has several limitations. First, the small number of cases may limit the detection of differences in clinical outcomes and complications, despite the power analysis performed for placement angles.
Second, the groups used different types of cups, potentially affecting placement position and angle due to differences in fixation. Third, it is uncertain whether the FPP is the same before and after THA. However, pelvic tilt after THA has been suggested to progress over time, with minimal changes observed at 1 year postoperatively . Because this study evaluated CT at 1 week postoperatively, the supine FPP before and after surgery can be considered almost equivalent. Nonetheless, this study is the first to evaluate cup placement accuracy in PN-THA and CTN-THA for DHOA, which is more challenging than primary HOA. The findings will provide useful information for many hip surgeons. In conclusion, the use of the ‘flip technique’ with the FPP as the reference plane in THA for DHOA resulted in significantly shorter operative time compared to CTN-THA. However, CTN-THA demonstrated greater accuracy than PN-THA for cup placement angles and positions, indicating the superiority of CTN.
Preparation and Reliability and Validity Test of the Questionnaire on the Maintenance of Intravenous Catheter in
Introduction A central venous catheter (CVC) is a large-bore catheter placed through the skin using sterile technique in certain clinical situations. In adult patients, the three placement sites for CVCs are the internal jugular vein, femoral vein, and subclavian vein, with the catheter reaching the right atrium via the puncture site (Bleichmder ). CVCs are widely used in the clinical treatment of critically ill patients and are an important route for monitoring the condition of critically ill patients, infusing fluids and blood, providing total parenteral nutrition and administering life-saving drugs (Lafuente Cabrero et al. ). However, because these patients are generally in severe condition, frequently undergo invasive procedures, and have low immune function, deep venous catheterization is associated with an increasing number of complications, including catheter-related bloodstream infections, catheter occlusion, slippage, pneumothorax and air embolism (Wang, Sun, and Wang ). Of these, improper maintenance is the main cause of complications. Nurses monitor CVCs to prevent complications such as infection, pneumothorax, hematoma, bleeding or extravasation, so that corrective measures can be taken in a timely manner to improve medical care (Sun et al. ). The knowledge, attitude, and practices (KAP) of CVC care among intensive care unit (ICU) nurses have a significant impact on the occurrence of CVC complications. To date, no specific questionnaire has been found to assess the KAP of ICU nurses regarding CVC maintenance, making it difficult to standardise and improve CVC maintenance practice among ICU nurses. Therefore, this study developed an ICU Nurse Central Venous Catheter Maintenance Knowledge-Attitude-Practice Questionnaire to provide a reliable theoretical basis for evaluating ICU nurses' knowledge, attitudes and behaviours towards CVC maintenance.
Materials and Methods 2.1 Research Object 2.1.1 Consulting Experts by Mail Inclusion criteria for experts: (1) medical or psychological experts with intermediate technical titles or above, engaged in critical care, clinical medicine or nursing management for more than 10 years; (2) bachelor degree or above; (3) interested and willing to participate in the study. The study selected 19 experts from east, west and central China. 2.1.2 Pre-Survey Subjects In July 2020, 20 ICU nurses from REDACTED were selected by convenience sampling to conduct a pre-survey of the questionnaire. The eligibility criteria were: (1) a nurse practicing qualification certificate issued by the Ministry of Health within the valid registration period; (2) more than 1 year of front-line nursing work in the ICU; (3) able to correctly understand the questionnaire content; (4) informed consent and voluntary participation in this survey. Exclusion criteria: trainee, rotation or practice nurses, and nurses away from work due to marriage, illness, childbirth, etc. 2.1.3 Formal Survey Subjects were selected by the convenience sampling method. From July to September 2020, an on-site questionnaire survey was conducted among ICU nurses in REDACTED. The inclusion and exclusion criteria were the same as for the pre-survey. 2.2 Research Methods 2.2.1 Preliminary Questionnaire Construction A multidisciplinary team consisting of two critical care clinical experts, one critical care research expert, one intravenous therapy expert, one psychological measurement expert, one statistician and three graduate students was established. Based on the theory of knowledge, belief and action (Li and Liu ) and referring to domestic and foreign research and related guidelines (Gorski et al. ; Chinese Nursing Association intravenous Infusion Therapy Professional Committee ; Estrada-Orozco et al.
; National Health and Family Planning Commission ) as well as expert group discussions, the initial questionnaire comprised 14 items in the knowledge dimension, 8 items in the attitude dimension, and 26 items in the behaviour dimension. Using the Delphi method, two rounds of expert correspondence were conducted from July to September 2020. Based on the concentration and coordination of expert opinions and other indicators, combined with expert suggestions, the initial items were screened and modified to form a pre-test version of the questionnaire. The questionnaire contains 55 items in total: 16 items in the knowledge dimension, 9 items in the attitude dimension and 30 items in the behaviour dimension. Five points are scored for all correct answers to knowledge items, 1 point for all correct answers to multiple-choice items, and 0 points for all wrong answers. All attitude and behaviour items are scored on a five-point Likert scale (Likert ), with the responses strongly agree, agree, not necessarily, disagree and strongly disagree recorded as 5, 4, 3, 2 and 1 points, respectively. 2.2.2 Pre-Survey The pre-test version of the questionnaire was used to survey 20 nurses who met the inclusion criteria. The questionnaires were collected on site, with an effective recovery rate of 100%. After the preliminary survey, no nurses raised new demands and the questionnaire content was not modified, so the initial version of the questionnaire, consisting of three dimensions and 55 items, was taken forward for clinical testing. 2.2.3 Formal Investigation An on-site questionnaire survey was conducted with 360 nurses who met the inclusion criteria, using a self-prepared general information questionnaire and the ICU nurse CVC maintenance questionnaire (clinical test version); 334 valid questionnaires were collected.
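The five-point Likert scoring for attitude and behaviour items can be sketched in a few lines; the function name and the dimension-sum example are mine, not part of the questionnaire:

```python
# Five-point Likert scoring described in the text:
# strongly agree = 5 ... strongly disagree = 1.
LIKERT = {"strongly agree": 5, "agree": 4, "not necessarily": 3,
          "disagree": 2, "strongly disagree": 1}

def dimension_score(responses):
    """Sum the Likert scores over one dimension's responses."""
    return sum(LIKERT[r] for r in responses)

print(dimension_score(["strongly agree", "agree", "not necessarily"]))  # -> 12
```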
2.2.4 Item Analysis Items were screened based on data from the formal survey. Two methods were adopted in this study: (1) Critical ratio (CR), also known as the extreme value method: the total scores of all subjects are sorted from largest to smallest; scores in the top 27% form the high group, and scores in the bottom 27% form the low group. An independent-samples t-test is then conducted; if the CR value of the item is > 3 or the difference is statistically significant ( p < 0.05), the item is retained; otherwise, it is deleted (Xiaoyong ). (2) Item-total correlation analysis: the correlation coefficient between each item and the total score was calculated; if the correlation coefficient was not significant ( p > 0.05) or r < 0.4, the item was deleted (Xiaoyong ). Based on the item analysis results, the preliminary version of the ICU nurse CVC maintenance questionnaire was formed. 2.2.5 Reliability and Validity Analysis The content validity of the questionnaire was assessed through expert correspondence. Structural validity was tested by exploratory factor analysis. Reliability was tested by internal consistency reliability and test-retest reliability. 2.3 Quality Control After unified training, the investigators introduced the research purpose and precautions to the nurses with standardised instructions. After the questionnaires were completed, the investigators checked them on the spot and promptly corrected and supplemented any errors and omissions. 2.4 Ethical Principles This study followed the principles of voluntariness and confidentiality to avoid any harm to the participants. The study was reviewed and approved by the Ethics Committee of REDACTED (Approval number: REDACTED). 2.5 Statistical Methods Excel 2013 was used for data entry and sorting. SPSS 26.0 and Amos 24.0 were used for statistical analysis of the data.
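The two screening rules above (critical ratio on 27% extreme groups, and item-total Pearson correlation) can be sketched as follows. The group-size rounding and the toy data are my own for illustration; the study used SPSS, not this code:

```python
from scipy.stats import ttest_ind, pearsonr

def item_screen(item, total, frac=0.27):
    """True if the item survives both the CR rule and the item-total rule."""
    n_cut = max(2, round(len(total) * frac))
    order = sorted(range(len(total)), key=lambda i: total[i])
    low = [item[i] for i in order[:n_cut]]        # bottom 27% by total score
    high = [item[i] for i in order[-n_cut:]]      # top 27% by total score
    t, p_cr = ttest_ind(high, low)                # critical ratio test
    r, p_r = pearsonr(item, total)                # item-total correlation
    return (abs(t) > 3 or p_cr < 0.05) and (p_r < 0.05 and r >= 0.4)

total = list(range(10, 20))                       # total scores, 10 subjects
good = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]             # tracks the total score
noise = [5, 1, 4, 2, 3, 3, 2, 4, 1, 5]            # unrelated to the total
print(item_screen(good, total))    # -> True (retained)
print(item_screen(noise, total))   # -> False (deleted)
```

Note that correlating an item with a total that includes the item itself inflates r slightly; some authors use a corrected item-total correlation instead, but the text describes the uncorrected form.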
Measurement data were described as mean ± standard deviation, and count data as frequency and percentage; p < 0.05 was considered statistically significant. Item-total correlation and the critical ratio method were used to screen and analyse the questionnaire items (Xiaoyong ). Validity analysis included content validity and structural validity. Content validity was assessed by the expert evaluation method. Item-level content validity index (I-CVI): for each item, the number of experts giving a rating of 3 or 4 divided by the total number of experts participating in the evaluation. Scale-level content validity index (S-CVI): the total number of ratings of 3 or 4 divided by the total number of ratings. Structural validity was examined by exploratory and confirmatory factor analysis. KMO ≥ 0.8 with p < 0.05 indicated that the data were suitable for exploratory factor analysis. Principal component analysis was used to extract common factors with eigenvalues > 1, varimax orthogonal rotation was applied, and items with factor loadings < 0.4 were deleted (Zhang and Dong ). Confirmatory factor analysis was used to test the fit of each dimension and item of the questionnaire. Cronbach's α and split-half reliability were used to evaluate the reliability of the questionnaire (Wu ).
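The content-validity indices and reliability coefficients described above can be sketched in plain Python. The small rating and response matrices are invented for illustration, and the split-half sketch assumes an odd-even item split with the Spearman-Brown correction, since the text does not state the splitting rule:

```python
def i_cvi(ratings):
    """Item-level CVI: share of experts rating the item 3 or 4."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def s_cvi(matrix):
    """Scale-level CVI: share of all ratings (every expert x item) that are 3 or 4."""
    flat = [r for row in matrix for r in row]
    return sum(r >= 3 for r in flat) / len(flat)

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of columns, one per item."""
    k, n = len(items), len(items[0])
    def var(xs):                      # sample variance, ddof = 1
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(c) for c in items) / var(totals))

def split_half(items):
    """Odd-even split-half reliability with Spearman-Brown correction."""
    n = len(items[0])
    h1 = [sum(items[j][i] for j in range(0, len(items), 2)) for i in range(n)]
    h2 = [sum(items[j][i] for j in range(1, len(items), 2)) for i in range(n)]
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = (sum((a - m1) ** 2 for a in h1) * sum((b - m2) ** 2 for b in h2)) ** 0.5
    r = num / den                     # Pearson r between the two halves
    return 2 * r / (1 + r)            # Spearman-Brown step-up

# Invented data: 5 nurses x 3 items (columns of `items`), 4 experts x 1 item.
items = [[4, 3, 5, 2, 1], [5, 3, 4, 2, 2], [4, 3, 5, 3, 1]]
print(round(cronbach_alpha(items), 3))  # -> 0.945
print(i_cvi([4, 4, 3, 2]))              # -> 0.75
```

The S-CVI form used here is the "agreement over all ratings" ratio the text describes, which differs from the S-CVI/Ave and S-CVI/UA variants found elsewhere in the content-validity literature.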
Scale-level content validity index (S-CVI): the number of ratings of 3 or 4 divided by the total number of ratings. Construct validity was examined with exploratory and confirmatory factor analysis, and reliability was described with Cronbach's α coefficient and split-half reliability. KMO ≥ 0.8 with p < 0.05 on Bartlett's test of sphericity indicates suitability for exploratory factor analysis. Principal component analysis was used to extract common factors with eigenvalues > 1, with varimax (maximum variance orthogonal) rotation; items with factor loadings < 0.4 were deleted (Zhang and Dong ). Confirmatory factor analysis was used to test the fit of the questionnaire's dimensions and items, and the reliability of the questionnaire was evaluated with Cronbach's α and split-half reliability (Wu ). Results 3.1 Expert Correspondence Of the 19 experts who completed both rounds of Delphi consultation, 4 were male and 15 female; 5 were aged under 40 years, 11 aged 40-50 years and 3 over 50 years; 10 held bachelor's degrees, 5 master's degrees and 4 doctoral degrees; 6 held intermediate professional titles, 10 associate-senior and 3 senior; 8 had 10-20 years of working experience, 9 had 21-30 years and 2 had more than 30 years. Areas of expertise included critical care (7), intravenous therapy (4), nursing management (4), nursing research (2), clinical medicine (1) and psychology (1). With a mean importance score < 3.50 and a coefficient of variation > 0.25 as the deletion criteria (Xiaoyong ), items were added, deleted and modified based on expert opinions.
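Expressed numerically, the I-CVI and S-CVI definitions above reduce to simple proportions. The sketch below uses made-up ratings from nine hypothetical experts on three items; the numbers are illustrative, not study data.

```python
# Hypothetical relevance ratings (1 = not relevant ... 4 = highly relevant)
# from 9 experts on 3 items; illustrative only.
ratings = [
    [4, 4, 3, 4, 4, 3, 4, 4, 4],  # item 1
    [4, 3, 4, 4, 2, 4, 4, 3, 4],  # item 2
    [3, 4, 4, 4, 4, 4, 4, 4, 4],  # item 3
]

def i_cvi(item):
    # Number of experts rating 3 or 4, divided by the number of experts.
    return sum(r >= 3 for r in item) / len(item)

i_cvis = [i_cvi(item) for item in ratings]
# S-CVI as defined in the text: ratings of 3 or 4 across all items,
# divided by the total number of ratings given.
s_cvi = sum(r >= 3 for item in ratings for r in item) / sum(len(i) for i in ratings)
```

With these toy ratings, items 1 and 3 get I-CVI = 1.0 and item 2 gets 8/9, because one expert rated it 2.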
The recovery rates of the two rounds of expert questionnaires (the experts' positive coefficient) were 95.0% (19/20) and 100% (19/19), respectively. The authority coefficient (Cr) across the two rounds of expert consultation ranged from 0.83 to 0.98, indicating that the participating experts had high authority. After the first round of consultation, 12 experts made constructive written suggestions; after the second round, 4 did. The revision after the first round modified the wording of 6 items and added 2 knowledge items, 2 attitude items and 4 behaviour items. After the second round, one attitude item was deleted and the wording of two items was further refined. The consensus questionnaire formed after the consultation contained 55 items. Kendall's coefficient of concordance was statistically significant in both rounds ( χ 2 test, p < 0.001, Table ), indicating good agreement among the experts and a credible conclusion. 3.2 Item Analysis 3.2.1 Correlation Coefficient Method Pearson correlation analysis was used to examine the association between each item and the total questionnaire score. Apart from knowledge items 2, 3, 6, 8, 9, 11, 12, 13, 14 and 15 and attitude and behaviour items 1, 2, 8, 11, 16, 18, 19, 20, 22, 24, 25, 27, 28 and 29 ( p > 0.05), the correlation coefficients of the remaining items were all ≥ 0.40. 3.2.2 The Critical Ratio Method An independent-samples t-test was conducted to analyse the difference between the high and low groups on each item.
The critical ratios of all items ranged from 3.241 to 11.582 and were statistically significant ( p < 0.001); accordingly, no items were deleted, as shown in Table . 3.3 Reliability and Validity Test 3.3.1 Validity Analysis 3.3.1.1 Content Validity Evaluation by nine experts yielded I-CVI = 0.889-1.000 and S-CVI = 0.974, indicating good content validity. 3.3.1.2 Structural Validity Factor analysis of the formal survey data gave a KMO value of 0.913, and Bartlett's test of sphericity was significant (χ² = 5886.897, p < 0.001), indicating suitability for exploratory factor analysis. The 31 items were analysed by principal component analysis with varimax rotation; three common factors (F1–F3) with eigenvalues > 1 were extracted, together explaining 65.656% of the variance, indicating good construct validity. The factor loading and communality of each item are shown in Table . The fit indices of the three-factor structural equation model are shown in Table , and the standardised path analysis in Figure . 3.3.2 Reliability Analysis The Cronbach's α coefficient of the overall questionnaire was 0.843, and the coefficients of the three dimensions were 0.754, 0.887 and 0.940. The split-half reliability was 0.816 and the test-retest reliability 0.813, indicating that the questionnaire is reliable and stable.
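The reliability coefficients of this kind can be computed as sketched below, with synthetic data standing in for the real responses: Cronbach's α from the usual variance formula, and split-half reliability from an odd-even split corrected with the Spearman-Brown formula.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in data: 334 respondents x 10 items measuring one trait.
trait = rng.normal(size=(334, 1))
items = trait + rng.normal(scale=0.8, size=(334, 10))

def cronbach_alpha(x):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def split_half(x):
    # Odd-even split of the items, corrected with the Spearman-Brown formula.
    a, b = x[:, 0::2].sum(axis=1), x[:, 1::2].sum(axis=1)
    r = np.corrcoef(a, b)[0, 1]
    return 2 * r / (1 + r)

alpha = cronbach_alpha(items)
rel = split_half(items)
```

The odd-even split is one conventional choice; a random split gives a slightly different estimate, which is why split-half reliability is usually reported alongside, not instead of, Cronbach's α.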
Discussion 4.1 Significance of Questionnaire Preparation Standardised catheter maintenance by nurses can reduce complications, extend the service life of the catheter and lessen the economic burden on patients, and is of great significance throughout the course of intravenous therapy. Improving the quality of CVC maintenance requires a scientific, quantitative instrument for evaluating ICU nurses' knowledge and practice of CVC maintenance, so that targeted interventions can be formulated to improve maintenance quality and reduce CVC complications. At present, there is no instrument, at home or abroad, for evaluating ICU nurses' knowledge and practice of central venous catheter maintenance. Existing general knowledge-and-practice assessment tools and standard intravenous therapy questionnaires cover a wide range of content (Yao ; Chen et al. ) and are therefore hard-pressed to reflect the specific knowledge, attitudes and behaviours involved in CVC maintenance. Specific instruments, such as those for peripherally inserted central catheters and questionnaires on catheter maintenance knowledge and practice, are narrowly targeted (Zhang et al. ; Ren et al. ) and are not directly applicable to assessing CVC maintenance knowledge and practice. It is therefore of practical significance to construct a questionnaire on ICU nurses' knowledge and practice of central venous catheter maintenance and to provide a targeted quantitative evaluation tool for clinical practice. 4.2 The Questionnaire Is Applicable In this study, the knowledge-attitude-practice model was chosen as the theoretical framework for the questionnaire's dimensions, and the items were constructed from the relevant literature and the expert consensus on clinical venous catheter maintenance and operation (Chinese Nursing Association Intravenous Infusion Therapy Professional Committee ). After group discussion, an item pool was established by brainstorming, and the Delphi expert consultation method was used to select items and form the initial questionnaire. Medical and psychological experts in intravenous nursing, critical care, clinical medicine and nursing management were recruited, and the initial items were modified and screened through two rounds of Delphi consultation, providing professional opinions from multiple angles. Data were collected from ICU nurses, and the reliability of the questionnaire was analysed with multiple statistical methods. The higher the questionnaire score, the better the nurse's knowledge of CVC maintenance, the more positive the attitude and the more standardised the behaviour. The questionnaire covers nine aspects of CVC maintenance, including assessment, flushing, locking, dressing replacement, catheter fixation, infusion connectors, catheter removal and infection prevention, and quantitatively assesses ICU nurses' knowledge, attitudes and practice of CVC maintenance.
The questionnaire's language is easy to understand, and completion usually takes less than 10 minutes, making it easy to promote and apply. 4.3 The Questionnaire Has Good Reliability and Validity The development process strictly followed standard questionnaire-construction procedures, and the questionnaire showed good stability. Principal component analysis with varimax rotation extracted three common factors with eigenvalues > 1, together explaining 65.656% of the variance, indicating good construct validity. The quantitative questionnaire for ICU nurses' CVC maintenance has high reliability, validity and operability, and can be used as a quantitative evaluation tool for CVC maintenance in ICU nurses. However, this study deleted items solely on statistical grounds, without considering the completeness of catheter maintenance content. In addition, all participating ICU nurses were from Anhui Province, which limits regional generalisability; future work should enlarge the sample and conduct cross-provincial, multi-centre studies to refine the questionnaire. This paper was approved by the Ethics Committee of the First Affiliated Hospital of Anhui Medical University (Approval number: Kuai‑Lun Review of the First Affiliated Hospital of An Medical University‑P2020‑17‑12), and the study was conducted in accordance with the 1975 Declaration of Helsinki and its later amendments. The authors have nothing to report.
The authors declare no conflicts of interest.
High-pressure distal colostogram in diagnosing anorectal malformations for male patients: our experience to get a high-quality image
92c3408b-1a36-4222-9f79-fe23503ad60e
11927117
Digestive System[mh]
Classifying anorectal malformations (ARMs) can be challenging because of the wide range of congenital anomalies involved. For most male patients diagnosed after birth, a colostomy is performed first. Definitive surgical repair is usually performed at 6-8 weeks of age, or later, depending on the presence of associated congenital anomalies and the clinical status of the child . Identifying the type and location of a fistula is crucial for surgeons to choose the appropriate surgical procedure. The surgical repair of these defects depends on preoperative knowledge of the precise location of the rectum and of the fistula, which is found in 95% of cases, anywhere from the perineum to the bladder neck. However, in children with Down syndrome, only 5% have a fistula . The posterior sagittal approach is the most common method used for repairing these malformations. With this approach, surgeons can usually reach the rectum and its connection to the male urethra in 90% of cases. In the remaining 10% of cases, there is a rectobladderneck fistula: the rectum is positioned anterior to the sacrum and cannot be reached through the posterior sagittal approach, so laparotomy or laparoscopy is required to access the rectum . A precise preoperative assessment of the patient is therefore crucial for optimal surgical correction, and identifying the location of the fistula can help prevent postoperative complications . Postoperative complications and intraoperative damage to anatomical structures, such as the genitourinary tract, nerves and muscles, may occur if the specific anatomy of the defect is not well clarified preoperatively and the surgical approach is not adequately planned . Accurately describing the level of the rectal fistula is also essential for comparing anatomically similar cases across institutions, enabling precise multi-centre study of long-term functional outcomes .
A high-pressure distal colostogram (HPC) can help to identify the presence or absence of a fistula, as well as its type and location, which is crucial for determining the surgical approach, the postoperative outcome and the prevention of complications . A properly performed high-pressure distal colostogram visualizes the key anatomical features, providing the critical information necessary to plan the definitive operation. It is of utmost importance to perform the examination adequately, and several studies have reported the detailed technique and pitfalls of high-pressure distal colostogram [ – ]. However, obtaining a high-quality image that provides sufficient information for subsequent treatment remains challenging. Shojaeian et al. reported that, among the 8 patients in whom a fistula was confirmed during surgery, the imaging findings in three (37.5%) were inconsistent with the intraoperative findings . The purpose of this study is to summarize our experience with high-pressure distal colostogram in diagnosing male ARM at our center. A retrospective analysis was conducted on 103 male patients with ARM admitted to our hospital between January 2020 and June 2022. All patients were diagnosed with ARM at their initial examination after birth, and no fistula was detected in the perineum during these examinations. They underwent colostomy within the first three days after birth at another hospital before being referred to our center for definitive surgical repair. A standardized and precise high-pressure distal colostogram was performed in 98 patients to confirm the type of ARM. The classification of ARM and fistula location was based on the Krickenbeck classification , with findings verified during subsequent definitive surgical repair. For clarity, the stepwise diagnostic and therapeutic approach is delineated in Fig. .
This study was carried out in accordance with The Code of Ethics of the World Medical Association (Declaration of Helsinki). The study was approved by the Ethical Committee of Capital Institute of Pediatrics (SHERLL2024063), and informed consent was obtained from all subjects and their legal guardian(s)/parents. Statistical analysis Patient data were examined and presented by descriptive statistics. High-pressure colostogram The distal colon was irrigated through the stoma to wash out the residual meconium. A significant filling defect indicated that residual meconium still remained in the distal rectum. Further irrigation should be performed to wash out the meconium or stools, and the distal high-pressure colostogram should be performed again (Fig. ). The contrast agent was water-soluble (Iodopanol 18.5 g/50 ml, diluted twice). A radiopaque marker was placed in the perineum where the anal dimple is located. The procedures of HPC were performed according to the previous reports [ , , ] (Fig. B). When only the bladder or distal part of the urethra could be visualized after HPC, we catheterized the bladder through the urethra and filled it with contrast. This allowed us to visualize the bladder and urethra and continue the HPC to show the relationship of the fistula with the bladder and urethra (Fig. A-C). A rounded appearance of the distal rectal pouch suggests adequate pressure. When there is no fistula, this rounded appearance persists, and the symmetrical distal blind rectum extends towards the anus dimple (Fig. ). When a fistula is present, usually there is a tapered configuration at the anterior aspect of the distal rectal pouch before filling the fistula and opacification of the bladder or urethra (Fig. ) . In cases in which there was no communication with the urethra, we also catheterized the bladder via the urethra and filled the bladder with contrast. 
When the bladder and urethra were visualized, we could define the relationship between the region of the fistula and the urethra according to the tapered configuration at the distal rectal pouch. The patient's average age was 3.60 ± 1.56 (1.20–8.67) months.
There were 68 cases of transverse colostomy, 13 of descending colostomy and 17 of sigmoid colostomy. In 74 patients (75.5%), the high-pressure distal colostogram identified the location of the fistula, showing a rectal bladder or rectourethral fistula; the urinary tract was visualized, with both the bladder and urethra filled with contrast, including 14 cases of rectal bladder fistula, 23 of rectal prostatic fistula and 37 of recto-bulbar fistula. Three children (3.1%) showed tiny fistulas on the skin at the anal dimple and were diagnosed with rectal cutaneous fistula (Fig. ). In 21 patients (21.4%), the fistula could not be visualized during the colostogram; in these patients, contrast was injected into the bladder and a voiding cystourethrogram (VCUG) was performed. After the bladder and urethra were visualized, ten children (10.2%) showed a tapered configuration at the anterior aspect of the distal rectal pouch where the fistula protruded into the urethra; based on the position of the fistula relative to the urethra, 2 rectal prostatic fistulas and 8 recto-bulbar fistulas were diagnosed (Fig. ). Seven patients (7.1%) had a thready fistula extending to the base of the penis, diagnosed as rectoperineal fistula (Fig. A, B). Four of these patients showed no obvious cutaneous opening at the root of the penis, while three had concavities at the end of the fistula (Fig. C). Four cases (4.1%) showed a symmetrical blind distal rectum extending towards the anus and were identified as imperforate anus without fistula (Fig. ). The type of anorectal malformation and the location of the rectourethral fistula shown on the colostogram were consistent with the findings confirmed during subsequent definitive surgical repair (Table ). In all cases in our series, the presence, type and location of the fistula could be identified after a well-executed high-pressure distal colostogram.
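As a quick arithmetic check, reading the 98 colostograms as 74 + 3 + 21 (with the 21 non-visualized cases subdividing into 10 + 7 + 4), the reported percentages follow directly from the counts; the grouping into a dictionary is ours, for illustration.

```python
# Counts taken from the results above; the labels are our shorthand.
n = 98
counts = {
    "fistula shown on colostogram": 74,
    "rectal cutaneous fistula": 3,
    "not visualized on colostogram": 21,
    "shown after VCUG": 10,
    "rectoperineal fistula": 7,
    "no fistula": 4,
}
pct = {k: round(100 * v / n, 1) for k, v in counts.items()}

# The first three groups partition the 98 patients; the last three partition the 21.
assert counts["fistula shown on colostogram"] + counts["rectal cutaneous fistula"] \
       + counts["not visualized on colostogram"] == n
assert counts["shown after VCUG"] + counts["rectoperineal fistula"] \
       + counts["no fistula"] == counts["not visualized on colostogram"]
```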
High pressure is essential in the high-pressure distal colostogram procedure: significant hydrostatic pressure is required to overcome rectal muscle tone, achieved by occluding the distal stoma with a balloon-tipped catheter and applying traction during the injection . When contrast material is instilled through a catheter without a balloon, or with the balloon inflated outside the stoma, the contrast often leaks out of the stoma. To ensure a tight seal and avoid leakage, introduce a Foley catheter through the distal stoma, inflate the balloon and gently pull back; this step is crucial for obtaining high pressure in the distal pouch . Water-soluble contrast should be used and barium avoided: barium is not conducive to displaying the rectourethral fistula and is difficult to wash out after the colostogram. It is also essential to ensure that no meconium obstructs the distal colon; we suggest irrigating the distal colon through the stoma to wash out residual meconium, which makes the fistula easier to demonstrate. If a significant filling defect is found at the distal end of the rectum after imaging, residual meconium remains in the distal rectum; further irrigation should be performed and the high-pressure distal colostogram repeated. Washing out the meconium before the colostogram is therefore crucial to demonstrating the rectourethral fistula. The location of the colostomy can also affect the imaging results: in patients with a transverse colostomy, washing the meconium out of the long distal colon is difficult, so residual meconium may be left in the distal colon.
Under these conditions, the contrast agent cannot pass through the rectourethral fistula and the distal end of the rectum cannot be visualized, which may lead to a low-type fistula being misdiagnosed as a high-type one. The most favorable location for a colostomy is the descending colon or the proximal end of the sigmoid colon. This location makes it easier to clear the meconium from the distal colon and to capture distal high-pressure images. It also ensures that the distal colon has enough length to be pulled through to the perineum during anorectoplasty . In this study, 68 children (69%) underwent a transverse colostomy after birth, which may cause the high-pressure colostogram to fail. In certain cases, children may be fitted with a loop colostomy instead of a divided one. This can allow feces to pass into the distal colon and collect at the distal end of the stoma, which can compromise the accuracy of the high-pressure distal colostogram and trigger recurrent urinary tract infections. Therefore, when a colostomy is performed, a divided colostomy is recommended. In some cases, the contrast in the distal colon could not pass through the fistula to the bladder or urethra despite sufficient pressure. To address this, we catheterized the bladder via the urethra and filled it, allowing us to image both the bladder and the urethra. This helped define the relationship between the region of the fistula and the urethra, based on the tapered configuration at the distal rectal pouch. In this situation, a voiding cystourethrogram is also recommended : while maintaining pressure in the distal colon, voiding images of the bladder and urethra can help locate the fistula. However, maintaining pressure in the distal colon is usually difficult while the patient is voiding. Although a cystogram can offer more detailed information, it is an invasive examination.
Ensuring patient safety and preventing complications, such as urinary tract infections (UTIs), is essential. Operators must follow strict handwashing protocols to reduce microbial contamination and use sterile gloves and drapes during the procedure. The area where the catheter will be inserted should be cleaned with an antiseptic solution, such as povidone-iodine or chlorhexidine. After the procedure, the catheter should be removed as soon as possible to prevent prolonged exposure of the urethra and bladder to potential bacteria. Keeping the patient well hydrated before and after the procedure promotes frequent urination, which can help flush out any bacteria in the urinary tract. When there is no fistula, the distal rectal pouch retains a rounded appearance, and the distal rectum extends towards the anus without the tapered configuration towards the urethra. Four cases (4.1%) presented a symmetrical blind distal rectum extending towards the anus; these were identified as imperforate anus without fistula. In previous studies, only 5% of cases without Down syndrome had a blind rectal pouch without a fistula, with the distal end lying lower than the level of the bulbar urethra. The cutaneous perineal fistula is the simplest type of ARM, with the lowest part of the rectum opening anterior to the sphincter. This condition can manifest in various ways in males: a midline fistula may appear anywhere from the base of the penis to the midline raphe, or just anterior to the center of the sphincter. In this study, 7 cases (7.1%) had a thready fistula that connected the distal rectal end to the base of the penis subcutaneously. Without a high-pressure colostogram, such a tiny distal fistula may not be distinguishable and could be misdiagnosed as a rectourethral bulbar fistula or as no fistula at all. The posterior sagittal approach is the most common method for repairing these malformations.
For children with a lower position of the blind end of the rectum, the radical operation can also be performed through the perineum. Three patients were diagnosed with rectoperineal fistulas, each having a tiny opening just anterior to the center of the sphincter. However, these tiny openings were not found after birth, and a colostomy was performed. For these patients, the radical operation could have been performed through the perineum after birth, avoiding a colostomy. Therefore, a careful perineal examination is important in children with anal atresia. This study has the following limitations: (1) the pressure for the high-pressure distal colostogram was judged from the morphology of the distal rectum, without instruments to measure the contrast pressure; further research is needed to quantify the pressure in the distal rectum to facilitate the application of this technique; (2) the location of the colostomy was not standardized, which may have affected the contrast imaging results. In conclusion, a properly performed high-pressure distal colostogram combined with VCUG can identify the type of anorectal malformation and the location of the fistula in males.
Effects of very early exercise on inflammatory markers and clinical outcomes in patients with ischaemic stroke- a randomized controlled trial
651dae33-6ec9-4863-936d-dec4f2ddf34c
11927286
Medicine[mh]
The occlusion of cerebral arteries gives rise to ischaemic stroke. This leads to the formation of an ischaemic core, causing neuronal damage and the formation of glial scars . Ischaemic stroke comprises about 90% of all stroke cases and is often associated with high mortality rates and the development of enduring stroke-related sequelae . Consequently, individuals with ischaemic stroke may have sensorimotor, physical, and psychosocial impairments . Meanwhile, evidence has shown that the time for optimum recovery of brain tissue following stroke is limited. Previous research in animal models has shown that the window of opportunity for brain plasticity, functional and structural rearrangement, and neuronal recovery after a stroke is restricted and typically falls in the early stages after the stroke event . The process of regenerating damaged brain tissue after a stroke peaks one week after the event . Subsequently, this process gradually declines and reaches a plateau a few weeks after the stroke. Using cortical sensory maps, Krakauer et al. showed that neuroplasticity is more enhanced during the first few days after a stroke than at any other time . Therefore, intervention during the very early stage of stroke has been recommended to facilitate both ischaemic and clinical recovery . A previous study found that physical inactivity was associated with a higher risk of stroke (OR = 1.17; 95% CI = 1.12–1.21; p < 0.001) , and it may also reduce the likelihood of positive stroke outcomes . One of the main non-pharmacological interventions in early stroke management is exercise prescription. Thus, the introduction of physical exercise very early after stroke onset can help with neuroplasticity and enhance clinical recovery [ – ]. While Wei et al.
in their recent systematic review and meta-analysis showed positive clinical efficacy of early exercise intervention in patients with ischaemic stroke, their review centred mainly on studies that started exercise intervention two weeks after the onset of stroke. They concluded that the results of their systematic review and meta-analysis cannot be generalized to exercise intervention undertaken less than 48 h after stroke onset (i.e., very early exercise intervention) . Meanwhile, the evidence of significant clinical recovery in stroke patients who started very early exercise (VEE) intervention compared with delayed exercise intervention is inconclusive [ , , , , ]. The AVERT II study, conducted among 71 stroke patients, showed no significant difference in the primary outcome (death within 3 months) between very early mobilization (8/38) and usual care (3/33) . There were also no significant differences in harmful events or neurological deterioration between the two groups . In the AVERT III study, 2104 patients with acute stroke (90.7% ischaemic stroke) from five countries were randomized into very early mobilisation and standard care groups . The findings revealed that the proportion of patients with a favourable outcome at 3 months (score of 0–2 on the modified Rankin Scale [mRS]) was significantly lower among patients who started mobilization within 24 h of stroke onset (46%) than among those who received usual care alone (50%) . In the Early Sitting in Ischemic Stroke Patients (SEVEL) study, 138 ischaemic stroke patients were randomized into a very early sitting (within 24 h) or progressive sitting (sitting on day 3) group . The findings showed no significant difference in the frequency of good outcomes (mRS scores of 0–2) at 3 months (76.2% vs. 77.3%) . Furthermore, a study by Anjos et al.
among patients with ischaemic stroke showed no significant benefit of very early mobilisation after thrombolysis on primary (functional independence) and secondary (mobility, balance, and complications) outcomes at 90 days when compared with usual care . Conversely, a study by Morreale and colleagues in 340 patients with ischaemic stroke showed that patients who started proprioceptive neuromuscular facilitation or cognitive therapeutic exercise within 24 h of stroke onset had better outcomes at 12-month follow-up than those who started the same intervention later (after 24 h) . The results of a pooled analysis of nine randomized clinical trials indicated that very early mobilization made no significant difference to mortality or complications but contributed significant improvements in activities of daily living and length of hospital stay . These contradictory effects have sparked debate on the safety, efficacy, and optimum dose of VEE in stroke management [ , , ]. Studies have shown that a high dose of mobilization exercise within 24 h may be counterproductive to good outcomes and may promote neural cell apoptosis . Meanwhile, Marzolini et al. cautioned against VEE in acute stroke without evidence justifying its safety and efficacy, particularly regarding its influence on early post-stroke inflammatory processes, which are important in stroke recovery. Stroke disrupts the integrity of the blood–brain barrier (BBB); thus, VEE in the presence of BBB dysfunction leaves the brain parenchyma susceptible to infiltrating peripheral cells and circulating biomarkers . Therefore, at the acute stage, when the BBB is most dysfunctional, VEE may theoretically promote pro- or anti-inflammatory mechanisms, and so potentially harm or enhance brain tissue recovery and worsen or improve the eventual stroke outcome.
In stroke rehabilitation, biomarkers have proved useful in the choice of therapy and in understanding a therapy's course of action , and are reliable in defining whether a therapy is beneficial, futile, or harmful . Meanwhile, because favourable recovery after a stroke incident is contingent on immediate intervention, VEE is still recommended in stroke rehabilitation guidelines [ , , ], despite little understanding of the cellular actions induced by VEE on ischaemic tissue. This knowledge is essential to provide a biological rationale for determining the safety, efficacy, and dose–response association of VEE in patients with stroke. Thus, this study provides empirical data on the effects of initiating moderate-intensity exercise intervention within 24 h of stroke on acute inflammatory mechanisms, and on the link between acute exercise-induced modulation of inflammatory markers and clinical outcomes over time. Building on previous evidence that ischaemic stroke mechanisms and clinical outcomes are defined by certain biomarkers involved in inflammation and blood clotting, such as cytokines, inflammatory cells, and haemostasis markers (e.g., interleukin-6 [IL-6], leucocytes, fibrinogen) [ , , ], the specific objectives of the present study were to: (1) quantify the acute changes in IL-6, fibrinogen, leucocytes, neutrophils, lymphocytes, and monocytes following VEE interventions; (2) evaluate the impact of VEE interventions on clinical outcomes, including motor, functional, cognitive, and affective functioning at follow-up; and (3) analyse the association between VEE-induced acute regulation of inflammatory markers and clinical outcomes at follow-up in individuals with acute ischaemic stroke. Participants: Patients with acute ischaemic stroke admitted to the emergency room and stroke wards at the Osun State University Teaching Hospital, Osogbo, Nigeria, were recruited for this study.
Patients with a clinical and radiological diagnosis (computed tomography or magnetic resonance imaging scans) of acute ischaemic stroke, who were 40 years or older, who were admitted to the hospital within 24 h of the stroke incident, who presented with mild to moderate stroke severity (National Institutes of Health Stroke Scale [NIHSS] scores ≤ 15), and who had no major communication problems preventing them from understanding the protocol were included. Patients admitted into the intensive care unit or with impaired consciousness (score ≥ 2 on item one of the NIHSS), with recurrent stroke, with a score above 0 on the modified Rankin Scale (mRS) before the stroke or with any apparent physical disability before stroke onset, on treatment with recombinant tissue plasminogen activator, or with another stroke type were excluded from the study. Out of 101 patients assessed for eligibility, 51 were excluded on account of major communication problems ( n = 8), recurrent stroke ( n = 12), late presentation to the hospital, i.e., > 24 h ( n = 27), neurological problems other than stroke, such as Parkinson's disease ( n = 2), and declined consent ( n = 2). Participants were recruited consecutively into this randomized clinical trial. The trial was retrospectively registered with the Pan African Clinical Trial Registry (PACTR202406755848901). Chan's sample-size formula for a two-group experimental study, M = C × [π1(1 − π1) + π2(1 − π2)]/(π1 − π2)², was employed to calculate the sample size . Here C = 7.9 for 80% power, and π1 and π2 are estimates of 0.25 and 0.65, chosen to detect a 40% difference (effect size) between the control and experimental groups at a 5% probability of error; thus, M = 7.9 × [0.25(1 − 0.25) + 0.65(1 − 0.65)]/(0.25 − 0.65)² = 20.49, approximately 21 per group.
Hence, 21 × 2 = 42; however, 4 extra participants (20%) were added to each group to make room for possible attrition and loss to follow-up, thus, making a total number of 50 participants. Therefore, 50 patients who met the inclusion criteria were recruited and randomly assigned to two groups with 25 participants in each group. However, only 48 participants completed the study. Two patients were lost to death and relocation during follow-up and their data was removed from the final analysis. There were no significant differences in the baseline socio-demographic, clinical outcomes, and inflammatory markers between participants lost to follow-up and those who completed the study. The participants’ CONSORT flowchart is shown in Fig. .
Patients diagnosed with acute ischaemic stroke and who met the specified inclusion criteria and consented to the study were consecutively recruited and randomly assigned, using a simple balloting method, to two groups (very early exercise group [VEEG] and usual care group [UCG]).
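The sample-size arithmetic above can be checked with a few lines of code. This is only an illustrative sketch of Chan's two-proportion formula as stated in the text, with C = 7.9 corresponding to 80% power at the 5% error level; the function name is ours.

```python
import math

def chan_sample_size(p1, p2, c=7.9):
    """Chan's formula: participants per group = C * [p1(1-p1) + p2(1-p2)] / (p1-p2)^2, rounded up."""
    m = c * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return math.ceil(m)

per_group = chan_sample_size(0.25, 0.65)   # 20.49 -> 21 per group
per_group_with_attrition = per_group + 4   # ~20% added for attrition and loss to follow-up
total_recruited = 2 * per_group_with_attrition  # 50 participants in all
```

With π1 = 0.25 and π2 = 0.65 this reproduces the 21-per-group estimate and the final recruitment target of 50.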
Participants were assigned to either group by simple randomization using a ballot system, with a 1:1 ratio. The ballot consisted of an equal number of ‘yes’ and ‘no’ responses (25 each), written on identical pieces of paper, folded in opaque, sealed envelopes, and placed within a non-transparent box. Participants were consecutively allocated to either the VEEG (if ‘yes’ was drawn) or the UCG (if ‘no’ was drawn). A research assistant, who was not involved in the intervention and evaluation processes, conducted the ballot drawing, ensuring that each allocation was made without replacement. Clinical outcome assessors and laboratory analysts were masked to group allocation. Before proceeding, ethical approval was obtained from the Ethical Committee of the Osun State University Teaching Hospital, Osogbo, Nigeria (UTH/REC/2023/05/766). Written informed consent was obtained from all participants or their nominees. Participants in both groups received identical basic care and attention (medical, nursing, etc.) outside of these specific interventions. Baseline assessment: Following the randomization process, the participants’ baseline data, including socio-demographics, were collected. Participants with secondary education or less were categorized as having a low level of education, and, using the Nigerian minimum wage as a reference, monthly incomes of < ₦30,000, ₦30,000–₦70,000, and > ₦70,000 were categorized as low, medium, and high income, respectively. The assessment also included stroke laterality and prescribed drugs. In addition, the number and nature of stroke risk factors or co-morbidities among the participants, including hypertension, diabetes mellitus, hyperlipidaemia, ischaemic heart disease, urinary tract infection, respiratory infection, and smoking and alcohol habits, were documented .
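The ballot procedure described above is equivalent to drawing from a pre-shuffled box of 25 ‘yes’ and 25 ‘no’ slips without replacement. A minimal sketch follows (hypothetical code for illustration only; the trial used physical paper ballots, not software):

```python
import random

def make_ballot_box(n_per_group=25, seed=None):
    """Prepare the box: equal numbers of 'yes' and 'no' ballots, shuffled."""
    ballots = ["yes"] * n_per_group + ["no"] * n_per_group
    random.Random(seed).shuffle(ballots)
    return ballots

def allocate(box):
    """Draw one ballot without replacement: 'yes' -> VEEG, 'no' -> UCG."""
    return "VEEG" if box.pop() == "yes" else "UCG"

box = make_ballot_box(seed=1)
allocations = [allocate(box) for _ in range(50)]
# Drawing without replacement guarantees exactly 25 participants per group.
```

Because the draw is without replacement, a 1:1 allocation ratio is enforced by construction rather than merely in expectation.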
The baseline stroke severity of each participant was assessed using the 11-item NIHSS, where each item is assigned a score ranging from 0 to 4; a greater score indicates greater stroke severity . The baseline levels of inflammatory markers, namely interleukin-6 (IL-6), fibrinogen, leucocytes, neutrophils, lymphocytes, and monocytes, were also assessed within 24 h of stroke incidence. Furthermore, clinical variables including motor impairment, physical disability, functional independence, depression, anxiety, and cognition were assessed. Control group intervention: Participants in the control group (UCG) received usual care (positioning and regular turning). There is no consensus on the optimum positioning for acute stroke patients; however, the five positionings recommended in the literature for acute stroke are sitting in an armchair, side lying on the unaffected side, side lying on the affected side, sitting in a wheelchair, and supine lying . Because patients are most often confined to bed in the acute stage of stroke, the positioning adopted in this study was side and supine lying with the head elevated to at least 30 degrees . For unaffected-side lying, the affected arm was kept straight at the elbow and supported by a pillow; the affected leg was brought forward with the knee bent and supported by a pillow; and the head and waist were also supported. For affected-side lying, the affected shoulder was kept straight to ensure adequate shoulder support, and the affected leg was placed with the thigh aligned with the trunk and the knee bent slightly; the unaffected leg was placed with the knee bent, with a pillow in front of the affected leg; and the head was supported and bent slightly forward. For supine lying, the head was supported, bent slightly towards the affected shoulder, and gently turned towards the affected side. The buttock on the affected side was supported, with support extending towards the knee.
The affected arm was supported with the elbow straight and the palm facing upward . Patients were turned every 2 h . The intervention in the UCG lasted for seven days from the time of randomization. After the seven-day acute intervention, each participant was followed up for three months while continuing conventional physiotherapy twice weekly. Experimental group intervention: The patients in the experimental group (VEEG) underwent exercise intervention within 24 h of stroke incidence. The exercises included passive, active, resisted, and auto-assisted range of motion (ROM) exercises for all joints on both the affected and unaffected sides, and graded, dose-titrated mobilization exercises, i.e., out-of-bed activity including sitting out of bed, standing, transferring, and walking, based on the patient’s condition [ , , ]. The exercise intervention lasted 45 min, twice a day (morning and evening), amounting to 1.5 h/day for seven days [ , , , ], making potentially 14 sessions in all. The mean number of in-patient physiotherapy sessions for stroke survivors reported in a Nigerian tertiary hospital is eight , while the average length of hospitalization of stroke patients in Nigeria is about 14 days . As in the control group, each participant in this group continued conventional physiotherapy twice weekly for three months after the acute intervention. Evaluation of physiological parameters: Physiological measures, including blood pressure, heart rate, temperature, and oxygen saturation, were routinely monitored and recorded daily for participants in both groups before, during, and after treatment.
The intervention was halted and rescheduled when physiological parameters fell outside safe limits: systolic blood pressure below 110 mmHg or above 220 mmHg, diastolic blood pressure below 80 mmHg or above 105 mmHg, resting heart rate below 40 beats per minute or above 110 beats per minute, body temperature above 38.5 °C, or oxygen saturation below 92% . Follow-up intervention: After the 7-day acute intervention, patients in both groups continued to receive progressive, supervised, 90-min, twice-weekly physiotherapy interventions at follow-up for 3 months. The physiotherapy interventions during the follow-up period included ROM and flexibility exercises (3 sets, 10 reps); strengthening exercises (proprioceptive neuromuscular facilitation and Theraband exercises) (2 sets of 10 reps); balance exercises (step-ups, chair rises, wall exercises, marching, toe rises, ball kicking, and sudden stop-and-turn in motion) (2–3 sets, 10 reps); upper limb functional exercises (opening drawers, writing, hand exerciser, picking and counting) (3 sets, 10 reps); and endurance exercises (treadmill exercise (0.1–0.5 m/s, 5–10 inclination, 10–20 min) or stationary bicycle ergometry (2- to 5-min increments with resistance, up to 20 to 30 min of continuous cycling at 40 rpm), and stepping exercise (3 sets, 10 reps)) . The flowchart of the intervention is presented in Fig. . In addition to direct supervision by the therapist, an exercise diary, telephone calls/texts, and the involvement of family/caregivers for reminders were employed to monitor adherence. Primary and secondary outcomes: Interleukin-6 is one of the main markers of inflammation in ischaemic stroke [ , , ], while physical disability, assessed by the mRS [ , , ], is a common measure of stroke outcome in previous related studies; thus, IL-6 and physical disability were the primary inflammatory-marker and clinical outcomes in this study.
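The halt-and-reschedule criteria listed at the start of this passage reduce to a simple range check on each vital sign. A minimal sketch with the thresholds taken directly from the protocol (the dictionary layout and names are illustrative only):

```python
# (low, high) limits per protocol; None means no limit on that side.
SAFE_LIMITS = {
    "systolic_bp_mmHg": (110, 220),
    "diastolic_bp_mmHg": (80, 105),
    "resting_hr_bpm": (40, 110),
    "temperature_c": (None, 38.5),
    "spo2_percent": (92, None),
}

def session_may_proceed(vitals):
    """Return False (halt and reschedule) if any measured vital is outside its limits."""
    for name, value in vitals.items():
        lo, hi = SAFE_LIMITS[name]
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            return False
    return True
```

Note that the protocol halts only when a value falls *below* the lower bound or *above* the upper bound, so boundary values (e.g., systolic pressure of exactly 110 mmHg) still permit the session.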
Fibrinogen, leucocytes, neutrophils, lymphocytes, and monocytes were the secondary inflammatory-marker outcomes, while motor impairment, functional independence, depression, anxiety, and cognition were the secondary clinical outcomes. Assessment of inflammatory markers: To observe trends in the inflammatory markers during the acute stage of stroke (days 1–7), the levels of inflammatory markers in both groups were assessed immediately after randomization, before any intervention took place, and on the 4th and 7th days of the acute intervention. The concentrations of serum IL-6 and plasma fibrinogen were measured by enzyme-linked immunosorbent assay and the Clauss method, respectively . Whole blood was analysed within 1 h of sample collection to evaluate the concentration of leucocytes and their derivatives . The analysis of leucocytes, neutrophils, lymphocytes, and monocytes was performed with a Beckman Coulter AcT 5 differential haematology analyzer. Assessment of clinical outcomes: The clinical outcomes, namely motor impairment, physical disability, functional independence, depression, anxiety, and cognition, were assessed in each group at three time points: at baseline and at the 1st and 3rd months of follow-up. Motor impairment was assessed with the Supplemental Motor Scale of the NIHSS (SMS-NIHSS), a standardized tool for the objective evaluation of motor function in stroke. This instrument has eight measures that assess motor dysfunction at the bilateral shoulder, wrist, hip, and ankle joints. Motor function on the SMS-NIHSS is graded on a six-point ordinal scale ranging from normal movement (score of 0) to no movement (score of 5) , with total scores ranging from 0 to 40. A higher SMS-NIHSS score indicates worse motor impairment. According to previous studies by Enrique et al. and Albanese et al.
the SMS-NIHSS has been shown to possess adequate validity and sensitivity in evaluating motor function in individuals who have had a stroke. Global physical disability was evaluated using the mRS, scored from 0 (no disability) through 1 (no significant disability), 2 (slight disability), 3 (moderate disability), 4 (moderately severe disability), and 5 (severe disability) to 6 (death); a lower mRS score corresponds to a greater level of physical functioning . The Modified Barthel Index (MBI), which measures the ability of stroke survivors to perform 10 activities of daily living without assistance, was employed to assess functional independence. The MBI is evaluated on a five-point Likert scale, and its psychometric properties are satisfactory for evaluating functional independence after a stroke, with higher scores indicating greater functional independence . The Hospital Anxiety and Depression Scale (HADS), which has been validated and widely employed among stroke survivors, was used to assess symptoms of depression and anxiety . Each HADS subscale has seven items scored on a 4-point Likert scale (0–3), giving a maximum score of 21 on each of the depression and anxiety subscales; higher HADS scores suggest more symptoms of depression or anxiety . The Montreal Cognitive Assessment (MoCA), a 30-point test that evaluates multiple domains of cognition, was used to assess cognitive abilities after stroke . Its validity has been demonstrated in individuals who have had a stroke . The maximum MoCA score is 30, with higher scores indicating greater cognitive functioning [ – ]. Data analysis: The descriptive statistics of mean, standard deviation, median, interquartile range, frequency, and percentage were used to summarize the data.
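Because the scales run in opposite directions (lower is better on the mRS, SMS-NIHSS, and HADS; higher is better on the MBI and MoCA), a small helper for the mRS, using the 0–2 "good outcome" cut-off applied in the AVERT and SEVEL trials cited earlier, can make summaries less error-prone. The labels follow the text; the function names are illustrative:

```python
MRS_LABELS = {
    0: "no disability",
    1: "no significant disability",
    2: "slight disability",
    3: "moderate disability",
    4: "moderately severe disability",
    5: "severe disability",
    6: "death",
}

def favourable_mrs(score):
    """A 3-month mRS score of 0-2 is conventionally counted as a good outcome."""
    if score not in MRS_LABELS:
        raise ValueError("mRS scores range from 0 to 6")
    return score <= 2

def proportion_favourable(scores):
    """Fraction of participants with a good (mRS 0-2) outcome."""
    return sum(favourable_mrs(s) for s in scores) / len(scores)
```

Keeping the cut-off in one place avoids silently mixing "lower is better" and "higher is better" conventions when outcomes are pooled across scales.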
To compare baseline parameters between the two groups, the independent t-test and chi-square test were used for physical features, clinical characteristics, and inflammatory markers, while the Mann-Whitney U test was applied for the clinical outcomes, which were measured on ordinal scales. Repeated-measures ANOVA with Bonferroni post hoc tests was used for within-group comparison of the inflammatory markers across baseline and the 4th and 7th days. Friedman’s ANOVA and Wilcoxon signed-rank tests (with post hoc corrections) were used to compare clinical outcomes across baseline and the 1st and 3rd months of follow-up. The independent t-test was employed to compare between-group mean changes in the inflammatory markers on the 4th and 7th days, while the Mann-Whitney U test was used for clinical outcomes at the 1- and 3-month follow-ups. To investigate the associations between changes in inflammatory markers on the 4th and 7th days and clinical outcomes at the 1- and 3-month follow-ups, Spearman’s rho correlation coefficients were applied. The alpha level was set at p < 0.05. Data were analysed using SPSS version 21.0 software (SPSS Inc., Chicago, Illinois, USA).
The assessment of baseline stroke severity for each participant was conducted using the 11-item NIHSS, where each question is assigned a score ranging from 0 to 4. A greater score is suggestive of increased stroke severity . The baseline levels of inflammatory markers such as interleukin-6 (IL-6), fibrinogen, leucocytes, neutrophils, lymphocytes, and monocytes were also assessed within 24 h of stroke incidence. Furthermore, the assessments of clinical variables including motor impairment, physical disability, functional independence, depression, anxiety, and cognition were undertaken. Participants in the control group, UCG, received usual care (positioning and regular turning). There is no consensus on the optimum positioning for acute stroke patients, however, the five recommended positioning in acute stroke in the literature are sitting in an armchair, side lying on the unaffected side, side lying on the affected side, sitting in a wheelchair and supine lying . Because patients were most often confined to bed at an acute stroke, the positioning adopted in this study was side and supine lying with the head elevated to at least 30 degrees . For the unaffected side lying, the affected arm and elbow were straight, and the elbow was supported by a pillow, the affected leg was brought forward, the knee was bent and the leg was supported by a pillow. The head and waist were also supported. For the affected side lying, the affected shoulder was straight to ensure adequate shoulder support, and the affected leg was placed with the thigh to align with the trunk. The knee was bent slightly. The unaffected leg was placed with the bent knee with a pillow in front of the affected leg. The head was supported and bent forward a little. For the supine lying, the head was supported, bent slightly towards the affected shoulder, and gently turned towards the affected side. The buttock at the affected side was supported and extended towards the knee. 
The affected arm was supported with the elbow straight and the palm facing upward . Patients were turned every 2 h . The intervention in the UCG lasted for seven days from the time of randomization. After the seven-day acute intervention, each participant was followed up for three months while continuing with conventional physiotherapy twice weekly. The patients in the experimental group (VEEG) underwent exercise intervention within 24 h of stroke incidence. The exercises included passive, active, resisted, and auto-assisted range of motion (ROM) exercises to all joints of both the affected and unaffected sides, and graded, dose-titrated mobilization exercises, i.e., out-of-bed activity including sitting out of bed, standing, transferring, and walking based on the patient’s condition [ , , ]. The exercise intervention lasted 45 min, twice a day (morning and evening), amounting to 1.5 h/d for seven days [ , , , ], making potentially 14 sessions in all. The mean number of in-patient physiotherapy sessions for stroke survivors reported in a Nigerian tertiary hospital was eight , while the average hospitalization time of stroke patients in Nigeria is about 14 days . Similarly, each participant in this group continued with conventional physiotherapy twice weekly for three months after the acute intervention. Physiological measures, including blood pressure, heart rate, temperature, and oxygen saturation, were routinely monitored and recorded daily for participants in both groups before, during, and after treatment. The intervention was halted and rescheduled whenever a physiological parameter fell outside safe limits: systolic blood pressure below 110 mmHg or above 220 mmHg, diastolic blood pressure below 80 mmHg or above 105 mmHg, resting heart rate below 40 beats per minute or above 110 beats per minute, body temperature above 38.5 °C, or oxygen saturation below 92% . 
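The halt-and-reschedule criteria above amount to a simple range check on each vital sign. A minimal sketch, with thresholds taken from the text; the function and parameter names are ours, not part of the study protocol:

```python
# Illustrative safety screen for an exercise session, using the study's
# stated physiological limits (names are hypothetical, thresholds from text).

def session_safe(sbp, dbp, hr, temp_c, spo2):
    """Return True if all vitals are within the exercise-safety limits."""
    checks = [
        110 <= sbp <= 220,   # systolic blood pressure, mmHg
        80 <= dbp <= 105,    # diastolic blood pressure, mmHg
        40 <= hr <= 110,     # resting heart rate, beats/min
        temp_c <= 38.5,      # body temperature, degrees C
        spo2 >= 92,          # oxygen saturation, %
    ]
    return all(checks)

# e.g., session_safe(120, 90, 72, 36.8, 97) -> True: proceed with exercise
```

If any check fails, the session would be halted and rescheduled, mirroring the protocol described above.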
After the 7-day acute intervention for each group, patients in both groups continued to receive progressive, supervised, 90-min, twice-weekly physiotherapy interventions at follow-up for 3 months. The physiotherapy interventions during the follow-up period included ROM and flexibility exercises (3 sets, 10 reps); strengthening exercises (proprioceptive neuromuscular facilitation and Theraband exercises) (2 sets of 10 reps); balance exercises (step-ups, chair rises, wall exercises, marching, toe rises, ball kicking, and sudden stop-and-turn in motion) (2–3 sets, 10 reps); upper limb functional exercises (opening drawers, writing, hand exerciser, picking and counting) (3 sets, 10 reps); and endurance exercises (treadmill exercise (0.1–0.5 m/s, 5–10 inclination, 10–20 min) or stationary bicycle ergometry (2- to 5-min increments with resistance until 20 to 30 min of continuous cycling at 40 rpm), and stepping exercise (3 sets, 10 reps)) . The flowchart of the intervention is presented in Fig. . Coupled with the direct supervision of the therapist, an exercise diary, telephone calls/texts, and the involvement of family/caregivers for reminders were employed to monitor adherence. Interleukin-6 is one of the main markers of inflammation in ischaemic stroke [ , , ], while physical disability, assessed by the mRS [ , , ], is a common measure of stroke outcome in previous related studies; thus, IL-6 and physical disability were the primary outcomes for the inflammatory markers and clinical outcomes, respectively, in this study. Fibrinogen, leukocytes, neutrophils, lymphocytes, and monocytes were the secondary outcomes for inflammatory markers, while motor impairment, functional independence, depression, anxiety, and cognition were the secondary clinical outcomes. 
To observe the trends in changes in the inflammatory markers during the acute stage of stroke (1–7 days), the levels of inflammatory markers in both groups were assessed immediately after randomization, before any intervention took place, and on the 4th and 7th day of the acute intervention. The concentrations of serum IL-6 and plasma fibrinogen were evaluated via enzyme-linked immunosorbent assay and the Clauss method , respectively. Whole blood was analysed within 1 h of blood sample collection to evaluate the concentrations of leucocytes and their derivatives . The analysis of leukocytes, neutrophils, lymphocytes, and monocytes was performed using a Beckman Coulter AcT 5 differential haematology analyzer. The clinical outcomes, namely motor impairment, physical disability, functional independence, depression, anxiety, and cognition, were assessed for each group at three time points: baseline and the 1st and 3rd month of follow-up. Motor impairment was assessed by the Supplemental Motor Scale of the NIHSS (SMS-NIHSS), a standardized assessment tool employed to evaluate objective motor function in stroke. This instrument has eight items that assess motor dysfunction in the bilateral shoulder, wrist, hip, and ankle joints. Motor function on the SMS-NIHSS is rated on a six-point ordinal scale ranging from normal movement (score of 0) to no movement (score of 5) , giving total scores from 0 to 40. A higher SMS-NIHSS score indicates worse motor impairment. According to previous studies conducted by Enrique et al. and Albanese et al. , the SMS-NIHSS has been shown to possess adequate validity and sensitivity in evaluating motor function in individuals who have had a stroke. 
The assessment of global physical disability was conducted using the mRS, scored on a scale from 0 (no disability), 1 (no significant disability), 2 (slight disability), 3 (moderate disability), 4 (moderately severe disability), and 5 (severe disability) to 6 (death). A lower mRS score corresponds to a greater level of physical functioning . The Modified Barthel Index (MBI), which measures the ability of stroke survivors to perform 10 activities of daily living without assistance, was employed to assess functional independence. The MBI is evaluated using a five-point Likert scale, and its psychometric features are satisfactory in the evaluation of functional independence after a stroke, with higher scores indicating greater functional independence . The Hospital Anxiety and Depression Scale (HADS), which has been validated and widely employed in assessing depression and anxiety among stroke survivors, was used to assess symptoms of depression and anxiety . Each of the HADS subscales has seven items scored on a 4-point Likert scale (0–3), with 21 being the maximum score for each subscale. Higher HADS scores suggest more symptoms of depression or anxiety . The Montreal Cognitive Assessment (MoCA) is a measure useful in evaluating the cognitive abilities of people after a stroke event. The MoCA is a 30-point test that evaluates many areas of cognition , and its validity has been proven among individuals who have had a stroke . Higher scores on the MoCA are associated with greater cognitive functioning [ – ]. The descriptive statistics of mean, standard deviation, median, interquartile range, frequency, and percentage were used to summarize data. 
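The score ranges stated above for the scales with explicit bounds (SMS-NIHSS, mRS, HADS subscales, MoCA) can be captured in a small data-validation sketch; the names and structure below are illustrative only, not part of the study:

```python
# Range check for outcome-scale scores; bounds are those stated in the text
# (the MBI is omitted because its total range is not given there).

SCALE_RANGES = {
    "SMS-NIHSS": (0, 40),  # higher = worse motor impairment
    "mRS": (0, 6),         # higher = worse disability (6 = death)
    "HADS-D": (0, 21),     # higher = more depressive symptoms
    "HADS-A": (0, 21),     # higher = more anxiety symptoms
    "MoCA": (0, 30),       # higher = better cognition
}

def valid_score(scale, score):
    """Return True if the score lies within the scale's stated range."""
    lo, hi = SCALE_RANGES[scale]
    return lo <= score <= hi
```

Such a check is a simple guard against data-entry errors when scores are tabulated for analysis.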
To compare the baseline parameters between the two groups, the independent t-test and chi-square test were used for physical features, clinical characteristics, and inflammatory markers, while the Mann-Whitney U test was applied for clinical outcomes, which were measured on ordinal scales. Repeated-measures ANOVA and Bonferroni post hoc tests were used for within-group comparison of the inflammatory markers across baseline and the 4th and 7th day. Friedman’s ANOVA and the Wilcoxon signed-rank test (with post hoc corrections) were used to compare clinical outcomes across baseline and the 1st and 3rd month of follow-up. The independent t-test was employed to compare between-group mean changes of the inflammatory markers on the 4th and 7th day, while the Mann-Whitney U test was utilized for clinical outcomes at the 1- and 3-month follow-up. To investigate the associations between changes in inflammatory markers on the 4th and 7th day and clinical outcomes at 1- and 3-month follow-up, Spearman’s rho correlation coefficients were applied. The alpha level was set at p < 0.05. Data were analysed using SPSS software, version 21.0 (SPSS Inc, Chicago, Illinois, USA). The mean age, weight, height, and body mass index of the participants were 64.2 ± 9.36 years, 66.2 ± 7.05 kg, 1.60 ± 0.06 m, and 25.9 ± 2.66 kg/m², respectively. Overall, most of the participants were male (56.2%) and had right-sided stroke laterality (52.1%). The baseline mean values of all participants for IL-6, fibrinogen, and leucocytes were 6.34 ± 3.15 pg/ml, 417.3 ± 76.3 mg/dl, and 8.48 ± 3.88 × 10³/µL, respectively. The results showed that participants in both groups were comparable in physical, socio-demographic, and stroke-related characteristics ( p > 0.05) (Table ). The results of the between-group comparison of baseline inflammatory markers and clinical outcomes showed that both groups were comparable ( p > 0.05). 
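The baseline comparison tests named above (independent t-test for continuous markers, Mann-Whitney U for ordinal outcomes, chi-square for categorical characteristics) can be sketched with scipy.stats; the arrays below are synthetic illustrations, not study data:

```python
# Sketch of the between-group baseline comparisons; synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
il6_ucg = rng.normal(6.3, 3.1, 36)    # IL-6 (pg/ml), usual-care group
il6_veeg = rng.normal(6.4, 3.2, 36)   # IL-6 (pg/ml), very-early-exercise group

# Continuous marker: independent t-test
t, p_t = stats.ttest_ind(il6_ucg, il6_veeg)

# Ordinal clinical outcome (e.g., baseline mRS): Mann-Whitney U test
mrs_ucg = rng.integers(3, 6, 36)
mrs_veeg = rng.integers(3, 6, 36)
u, p_u = stats.mannwhitneyu(mrs_ucg, mrs_veeg)

# Categorical characteristic (e.g., sex distribution): chi-square test
table = np.array([[20, 16], [21, 15]])   # rows = groups, cols = male/female
chi2, p_c, dof, expected = stats.chi2_contingency(table)

print(round(p_t, 3), round(p_u, 3), round(p_c, 3))
```

A p-value above 0.05 on each test would indicate baseline comparability, as reported for the two groups in this study.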
The median and interquartile range of SMS-NIHSS, mRS, and MoCA for all participants at baseline were respectively 4.50 (4.0–5.0), 4.0 (4.0–5.0), and 15.0 (10.2–19.7). The results further showed that the baseline values of all clinical outcomes were comparable between both groups ( p > 0.05) (Table ). The results examining the effects of interventions within each group on inflammatory markers and clinical outcomes are presented in Tables and . The results showed a significant increase in all examined biomarkers across baseline and the 4th and 7th day of the study among participants in UCG, except lymphocytes, which significantly decreased across study time ( p < 0.05), and monocytes, which showed a non-significant increase ( p > 0.05). However, only IL-6, fibrinogen, and leucocytes showed a significant increase across the study time among participants in VEEG ( p < 0.05) (Table ). Furthermore, there was a significant decrease in motor impairment, physical disability, depression, and anxiety, and an increase in functional independence and cognition, across baseline and the 1st and 3rd month of follow-up among participants in both groups ( p < 0.05) (Table ). The results of the mean change comparison of the inflammatory markers on the 4th and 7th day of the study between the two groups are presented in Table . On the 4th day (difference between day four and baseline), the results showed a lower but non-significant mean change in all inflammatory markers, except lymphocytes and fibrinogen, which had higher values, among participants in VEEG compared to UCG ( p > 0.05). Meanwhile, on the 7th day (difference between day seven and baseline), neutrophils ( p = 0.021) had a significantly lower mean change, while lymphocytes ( p = 0.001) had a significantly higher mean change, among participants in VEEG compared to those in UCG. Over this same period, participants in VEEG again maintained a lower but non-significant mean change in the other biomarkers compared to those in UCG. 
The effect size of VEE on inflammatory markers was largely small or medium. On the 4th day, the effect size was small for fibrinogen (d = 0.38) and lymphocytes (d = 0.31) and medium for interleukin-6 (d = 0.55) and neutrophil (d = 0.51) levels, but the effect was negligible for monocytes and leucocytes. On the 7th day, the effect size was small for fibrinogen (d = 0.46) and leucocytes (d = 0.39), medium for interleukin-6 (d = 0.55) and neutrophils (d = 0.69), and large for lymphocyte (d = 1.09) concentration. Again, the effect size of VEE on monocyte concentration was negligible on the 7th day (Table ). These results indicate that while VEE had small effects on some inflammatory markers, its modulatory effect was moderate to substantial on others, suggesting differences in the sensitivity of the inflammatory pathways to VEE. Furthermore, the effect on lymphocytes and leucocytes, which increased from day 4 to day 7, suggests that VEE exerts time-dependent modulation of immune cell responses, indicating a progressive impact of VEE on inflammatory markers. Furthermore, the clinical outcomes of the participants were compared in both groups at the 1st and 3rd month relative to baseline. The results (Table ) showed that at the 1st month (difference between the first month and baseline) and 3rd month (difference between the third month and baseline) of follow-up, participants in VEEG had a significantly greater decrease in the median change in motor impairment, physical disability, depression, and anxiety, and a significantly greater increase in functional independence and cognition, than those in UCG ( p < 0.05). Meanwhile, the effect size of VEE on all clinical outcomes examined in this study was large at both the 1st (0.52 ≤ r ≤ 0.66) and 3rd (0.59 ≤ r ≤ 0.70) month of follow-up, except for depression at the 1st month (r = 0.49) and depression (r = 0.38) and anxiety (r = 0.43) at the 3rd month, which were medium (Table ). 
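The effect sizes reported above can be computed with the conventional formulas: Cohen's d with a pooled standard deviation for the mean marker changes, and r = Z/√N for the rank-based clinical outcome tests. The text does not state the exact formulas used, so these are assumptions; the numbers below are illustrative:

```python
# Conventional effect-size formulas (assumed, not stated in the text).
import math

def cohens_d(x, y):
    """Cohen's d for two independent samples, using the pooled SD."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

def effect_size_r(z, n):
    """r = |Z| / sqrt(N) for Mann-Whitney U / Wilcoxon tests."""
    return abs(z) / math.sqrt(n)
```

On the usual conventions, |d| of about 0.2, 0.5, and 0.8 (and r of about 0.1, 0.3, and 0.5) are read as small, medium, and large, matching the labels used above.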
The large effects of VEE on most clinical outcomes assessed in this study at 1- and 3-month follow-up suggest its potential substantial and sustained benefits in stroke recovery. The correlations between changes in inflammatory markers at days 4 and 7 from baseline and the clinical outcomes at 1- and 3-month follow-up are presented in Table . The results showed that change in IL-6 at day 4 was negatively correlated with MBI at 3 months (rs = −0.33; p = 0.019), while the change in lymphocytes at day 4 was negatively correlated with the anxiety subscale of the HADS (rs = −0.30; p = 0.036) at 1 month. Furthermore, change in IL-6 at day 7 had a positive correlation with SMS-NIHSS (rs = 0.30; p = 0.039) and a negative correlation with MBI (rs = −0.33; p = 0.021) at 3 months, while 3-month MBI and 7-day change in fibrinogen (rs = −0.29; p = 0.044), and 3-month mRS and 7-day change in lymphocytes (rs = −0.44; p = 0.002), were negatively correlated. Very early physical exercise has often been advocated and sometimes prescribed after a stroke incident; however, its effect on the clinical outcomes among stroke survivors is inconclusive. Furthermore, the neuro-biological effects of VEE in stroke and its contribution to the eventual stroke outcomes are largely unknown. This study demonstrates the positive modulation of inflammatory markers at the acute stage by VEE, indicating the benefits of early exercise intervention in stroke recovery. Importantly, the improved clinical outcomes at follow-up were associated with this modulation, suggesting the important role of VEE in both neuro-biological mechanisms and clinical recovery over time. The findings of this study add fresh insights into the mediatory functions of inflammation in post-acute stroke care and underscore the prospects of timely rehabilitation in enhancing long-term stroke outcomes. 
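The marker-outcome associations above use Spearman's rho; a toy example with synthetic vectors (not study data), assuming scipy.stats is available:

```python
# Spearman correlation between a marker change and a later clinical outcome;
# the vectors below are invented for illustration, not taken from the study.
from scipy import stats

delta_il6_day7 = [1.2, 0.4, 2.1, 0.9, 1.7, 0.2, 1.1, 1.9]  # change from baseline
mbi_3_month = [55, 80, 40, 70, 45, 90, 65, 50]             # functional independence

rho, p = stats.spearmanr(delta_il6_day7, mbi_3_month)
print(round(rho, 2), round(p, 3))  # rho is negative: larger IL-6 rises track lower MBI
```

Spearman's rho is rank-based, so it suits the ordinal clinical scales here and makes no normality assumption about the marker changes.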
There were no significant differences in socio-demographics, stroke-related characteristics, biomarker levels, or psycho-physical characteristics between patients in the two groups at baseline, showing that the groups were homogeneous and comparable; thus, the observed differences cannot be attributed to these parameters. The findings of this study showed significant up-regulation of the pro-inflammatory markers (IL-6, fibrinogen, leucocytes, neutrophils, and monocytes) and a reduction in the anti-inflammatory marker (lymphocytes) assessed in this study from baseline to the 4th and 7th day of assessment within each group. The up-regulation of the pro-inflammatory markers and the reduction in lymphocyte concentration at the acute stage of stroke observed in this study have been reported in several earlier studies [ , – ]. For instance, the increase in IL-6 among survivors of ischaemic stroke is said to begin within 2 h of stroke onset and to peak within the first week of the ischaemic event [ , , ]. Meanwhile, fibrinogen, the main plasma protein involved in haemostasis, coagulation, and blood viscosity , has been reported to increase in circulation after a stroke event . The incidence of stroke causes early depletion of circulating peripheral lymphocytes, i.e., lymphopenia . Lymphocytes are major white blood cells involved in adaptive immunity; their depletion after stroke incidence, termed stroke-induced immune suppression, is common . Unfortunately, the up-regulation of pro-inflammatory and reduction of anti-inflammatory molecules within the first week of stroke are consistently associated with worse short- and long-term clinical outcomes in stroke patients [ , , – ]. Despite this, the results showed that patients with the VEE intervention had a better inflammatory outlook than those with usual care alone. 
The results of the between-group mean change comparison showed a lower mean concentration of IL-6, leucocytes, neutrophils, and monocytes on the 4th and 7th day from baseline among patients in VEEG. Furthermore, concerning lymphocytes, the results showed that patients in VEEG had a higher lymphocyte concentration on the 4th and 7th day compared to those in UCG. For instance, while patients in UCG showed a 24.7% and 27.2% increase in IL-6 concentration on the 4th and 7th day, the increase was just 13.6% and 13.8% for patients in VEEG at the same time points. Furthermore, lymphocyte concentration decreased by 15.8% and 33.8% on the 4th and 7th day among patients in UCG, while the decrease was just 6.95% and 1.66% on the 4th and 7th day among patients in VEEG. In short, this study found that VEE had a moderate effect on some inflammatory markers, including IL-6. Although the reduction in IL-6 concentrations in patients exposed to exercise interventions within 24 h of stroke, compared with those who started physical exercise after a week, did not reach the threshold of conventional statistical significance ( p = 0.064 at day 4; p = 0.061 at day 7), the moderate effect size observed at both time points (d = 0.55) suggests an important biological influence. As observed in this study, empirical data have shown that physical exercise is a potent modulator of inflammatory mechanisms in individuals with chronic illnesses such as systemic lupus erythematosus and spinal cord injury [ – ]. Exercise has been shown not only to reduce inflammatory activity but also to enhance anti-inflammatory processes in disease conditions . Thus, VEE appears robust in reducing pro-inflammatory markers (e.g., IL-6 or neutrophils) and improving anti-inflammatory markers (e.g., lymphocytes) in stroke. Among stroke survivors, there is limited data on the effect of exercise on biomarkers at the acute stage; however, the results of related studies are in line with the findings of this study. 
For instance, Kirzinger et al. reported a non-significant reduction in IL-6, tumor necrosis factor-alpha, and C-reactive protein among sub-acute stroke patients after four weeks of aerobic exercise. However, contrary to the linear increase in fibrinogen levels observed in patients in UCG, the findings of this study showed a non-linear effect of VEE on fibrinogen concentration across the intervention timeline. The results showed a greater increase in fibrinogen levels among patients in VEEG compared with UCG from baseline to the 4th day of intervention, but a greater reduction in fibrinogen level among patients in VEEG on the 7th day relative to baseline. In other words, very early exercise initially caused a greater increase in fibrinogen level, followed by a greater reduction over the course of the intervention, compared with regular turning and positioning. Other authors have also reported a decrease in fibrinogen levels after physical exercise in individuals with cardiovascular diseases . A study by Kirzinger et al. showed a similar non-significant change in fibrinogen levels among sub-acute stroke survivors after a 4-week aerobic exercise programme compared with a relaxation technique. The non-linear effect of exercise on plasma fibrinogen concentration has been noted earlier in the literature [ , , ]. Reports state that exercise training induces an acute rise in fibrinogen levels between days 1 and 3 of training, with a reduction by day 5 . This phenomenon has been attributed to several factors, including acute inflammatory and hormonal responses to exercise, which may initiate the production of acute-phase proteins such as fibrinogen as part of the body’s stress response [ , , ]. Furthermore, exercise may cause mobilisation of stored fibrinogen into the plasma, leading to its transient increase [ , , ]. 
However, as exercise continues, anti-inflammatory mechanisms are activated, fibrinolytic processes are promoted, and pro-inflammatory cytokine activities are down-regulated, which may have resulted in the observed decline in fibrinogen over time [ , , ]. These findings indicate the complex interactions between exercise, inflammation, and coagulation, suggesting the potential modulating role of VEE on the dynamics of post-stroke haemostasis over time. Meanwhile, although there was a transient, higher increase in plasma fibrinogen among patients in VEEG from baseline to day 4 in this study, it was only marginally greater than the increase observed among patients in UCG over the same period (2.9% vs. 2.8%). This suggests that the excess transient increase among patients with VEE (0.1%) is likely clinically negligible and unlikely to have adverse effects. However, to minimize coagulation responses to exercise in this patient population, the prescription of low- to moderate-intensity exercises and a gradual increase in exercise intensity over time during early rehabilitation is crucial. Furthermore, careful monitoring of coagulation markers and of signs of blood clots (e.g., chest pain, leg swelling, warm skin, or pain in the calf, foot, or leg) is also important, especially among patients with a high risk of thrombosis. Similarly, the findings of this study showed that motor function, physical disability, functional independence, depression, anxiety, and cognition significantly improved from baseline to the 1st and 3rd month of follow-up in each group. This finding is not unexpected, as patients in both groups received medical and nursing care concurrently during their admission. Furthermore, after the seven days of acute exercise intervention for patients in VEEG, patients in both groups continued to receive physiotherapy twice weekly for the 3-month follow-up. 
Therefore, some improvement in the physical and psychological health of the stroke survivors in this study is expected for patients in UCG as well. Although physical exercise was delayed for a week among patients in UCG, starting exercise one or even two weeks after a stroke incident is still considered early rehabilitation, with better clinical outcomes than in stroke patients who started exercise intervention after two weeks . So, in theory, patients in UCG who started physiotherapy a week after stroke onset in this study are still categorized as being exposed to early rehabilitation intervention and should show considerable improvement in their clinical outlook, as observed in this study. Furthermore, the concurrent improvement from baseline to the 1st and 3rd month in the clinical outcomes of patients within each group can also be attributed to early spontaneous motor recovery. At the early stage of stroke, there is a phenomenon called ‘spontaneous biological recovery’ . This recovery occurs within a few days of the ischaemic event due to a spontaneous mechanism called neuroplasticity and, according to the ‘proportional recovery rule’, may lead to many patients recovering 70% (±15%) of their pre-stroke functional abilities within three months of stroke [ , , – ]. This phenomenon has been described as a major confounder of early therapeutic exercise in stroke rehabilitation . However, this rule only fits patients with mild-to-moderate stroke . Although many patients with stroke may recover spontaneously within 3 months of stroke incidence , the rate of recovery depends on many factors, including co-morbidities, social support, and exposure to early rehabilitation intervention . Even a delay of a few days in starting intervention after stroke incidence negatively affects the pace of motor recovery . 
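The 'proportional recovery rule' cited above can be written as a one-line formula: predicted 3-month score = baseline + 0.70 × (maximum − baseline), for a scale where higher scores mean better function. A hypothetical sketch (function name and inputs are ours, not from the study):

```python
# Toy calculation of the proportional recovery rule (~70% of lost function
# regained by 3 months in mild-to-moderate stroke, per the cited literature).

def proportional_recovery(max_score, baseline_score, fraction=0.70):
    """Predicted 3-month score: baseline plus ~70% of the lost points."""
    lost = max_score - baseline_score
    return baseline_score + fraction * lost

# e.g., for a motor scale with maximum 40 and a baseline score of 10:
predicted = proportional_recovery(40, 10)   # 10 + 0.7 * 30 = 31.0
```

The cited ±15% band would place the expected score between `fraction=0.55` and `fraction=0.85` of the lost points, which is one way to see why this spontaneous recovery confounds estimates of an exercise effect.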
In the present study, the findings of the median change comparison showed that patients in VEEG had significantly lower motor impairment, physical disability, depression, and anxiety, and significantly higher functional independence and cognition, than patients in UCG at both follow-up time points. Specifically, physical disability, for instance, was reduced by 25% at the 1st month and 37.5% at the 3rd month of follow-up among patients in UCG, whereas it was reduced by 40% and 80% at the 1st and 3rd month among those in VEEG. In line with these findings, previous studies have shown that VEE intervention in bed or out of bed [ , , , ] resulted in better clinical outcomes in individuals with stroke than in those who started later. However, reports from other studies showed no significant positive effects of very early out-of-bed exercises [ , , , , ]. The results of the AVERT II and III studies, published in 2008 and 2015, showed worse outcomes (frequency of death at 3 months and 0–2 mRS scores) at 3 months among participants receiving very early mobilization . Similarly, Tong et al. showed that very early intensive mobilization (within 24 h) did not yield a more favourable outcome (mRS scores 0–2) compared to early intensive mobilization (after 24 h) at a 3-month follow-up. The discrepancy has been attributed to the timing and dosage of intervention . Tong and colleagues observed that a mobilisation dosage of ≥ 3 h/d, though ideal and beneficial after 24 h of stroke onset, was considered high intensity and was associated with worse clinical outcomes if implemented within 24 h . Thus, a short duration and higher frequency of intervention have been described as ideal for patients undergoing very early exercise therapy . In what could be considered a moderate intensity, patients in VEEG in this study underwent exercise training for a total of 1.5 h/d, delivered in two 45-min sessions (morning and evening) for seven days. 
Thus, the better outcomes obtained in this study in patients with VEE may result not only from the timing but also from the dosage and frequency of the exercise intervention. While the better clinical outcomes observed among patients with very early exercise intervention can be linked with the direct promotion of neuroplasticity by the exercise, the exercise-related changes in the selected inflammatory markers are another plausible reason for the better clinical outcomes obtained in the VEEG. In this study, there were significant associations between exercise-induced changes in some inflammatory markers on the 4th and 7th day post-stroke and improved clinical outcomes, such as functional independence, motor function, depression, and anxiety, at follow-up. For instance, the positive modulation of IL-6 concentrations by VEE at days 4 (r = −0.33; p = 0.019) and 7 (r = −0.33; p = 0.021) was weakly and negatively associated with improved functional independence at 3 months. Interleukin-6 is a main marker of acute inflammation and tissue damage and a major indicator and mediator of neuronal repair after stroke; therefore, this result highlights the potential clinical importance of targeting acute inflammatory processes in stroke through timely rehabilitation and the importance of moderate VEE in shaping the process of recovery. Although only weak correlations were observed between inflammatory markers and clinical outcomes in this study, these results indicate that inflammatory markers, e.g., IL-6, may serve as targets for therapeutic monitoring of very early rehabilitation. The up-regulation of pro-inflammatory markers in the first week of stroke has been associated with worse clinical outcomes in individuals with stroke even months and years after the initial ischaemic event . 
The theoretical basis for these worse outcomes is that the up-regulation of these neuro-biological molecules early after stroke impairs optimal neuroplasticity and promotes more oxidative stress and cell apoptosis in the injured neural tissues and the surrounding areas . As shown by the results of the correlation analyses in this study, the positive modulation of the biomarkers at the acute stage, occasioned by VEE, may have contributed to the better clinical outcomes observed in the VEEG. Although the implementation of the moderate-intensity VEE intervention in this study shows promising results in promoting positive inflammatory mechanisms and clinical outcomes, certain precautions should be undertaken to ensure safety, including close monitoring of the patient’s clinical stability (heart rate, blood pressure, neurological status, oxygen saturation, etc.). Patients with a high risk of thrombosis, severe stroke, or certain stroke types such as large vessel occlusions, brainstem stroke, and intracerebral haemorrhage, as well as older patients, may require careful consideration.
Limitations to the study
This study presents some potential limitations. The use of a relatively homogeneous small sample size may limit the chance of detecting the true effect of VEE , while recruiting from one center may reduce the external validity of the findings. The findings of this study are also limited to patients with first-ever mild-to-moderate ischaemic stroke and those without pre-stroke disability. Severe or haemorrhagic stroke is often associated with greater cerebral oedema and haemodynamic instability, and such patients may therefore present a more profound inflammatory response to VEE. Furthermore, patients with recurrent stroke or pre-stroke disability may present with more baseline functional problems, causing heterogeneity in recovery patterns, and may also introduce floor effects where intervention effects are difficult to detect. 
The use of self-reports in some measures, e.g., in the assessment of clinical depression, may also introduce reporting bias and social desirability. The lack of a placebo group and the short follow-up period are further limitations. Thus, considering these factors, further studies with larger samples or from multiple locations, with longer follow-up periods, are recommended. To address the small sample size and potential imbalances in the dataset, augmentation and balancing methods could also be applied in future studies to improve the robustness of the results . 
To address the small sample size and potential imbalances in the dataset, the use of augmentation and balancing methods could also be applied in future studies to improve the robustness of the results . This study highlights the potential benefits of moderate-intensity VEE in positively modulating the inflammatory markers, including IL-6, during the acute stage of stroke and improving physical disability, motor, cognitive, and affective functioning at 3 months in patients with first-ever mild-to-moderate ischaemic stroke. The effects of moderate-intensity VEE on inflammatory markers, particularly IL-6, were associated with improved clinical outcomes, suggesting the important role of moderate timely exercise intervention in promoting recovery through inflammatory modulation.
The Role of Throat Packs in Orthognathic Surgery—A Systematic Review and Meta-Analysis
ff5b67e5-7e7d-4a4a-931d-62eb767b0cfc
11832261
Surgical Procedures, Operative[mh]
Orthognathic surgery is a procedure performed to reposition the maxilla or mandible to enhance occlusal stability, improve facial proportions and esthetics, increase the retrolingual airway dimension, and enhance temporomandibular joint (TMJ) functions . These surgical procedures are usually carried out via an intraoral approach, which entails the placement of intraoral vestibular incisions. It involves osteotomy cuts; movement of the jaw bones as per the predetermined surgical plan; and stabilization with intermaxillary fixation and rigid fixation, followed by closure. Concomitant surgeries such as rhinoplasty, septoplasty, inferior turbinate reduction, bone grafting, malar augmentation, distraction osteogenesis, TMJ surgery, or neck liposuction may be required to enhance the surgical outcomes . Hence, there is a high risk that blood and other debris (bone chips, screws, wire bits, irrigation fluid, etc.) will slip into the throat and become lodged, aspirated, or ingested. Also, ingestion of blood during surgery is known to induce postoperative nausea and vomiting (PONV) . PONV can be distressing and a significant cause of concern to the patient as well as the surgeon, as it can induce electrolyte imbalance secondary to dehydration, bleeding, esophageal rupture, or aspiration of gastric contents. All these adverse events are uncomfortable and unpleasant, leading to overall patient dissatisfaction and substantial morbidity . Hence, to overcome these problems and reduce the morbidity of the patients, surgeons often place throat packs. These packs are commonly inserted in the oropharynx or hypopharynx with the primary intention of absorbing blood and irrigation fluids; they also act as a mechanical barrier to avoid the ingestion or aspiration of surgical debris and foreign bodies. Throat packs aim to minimize the incidence of PONV and other consequences . 
There are documented limitations with the usage of throat packs, such as postoperative throat pain and dysphagia, injury to the mucosa and pharyngeal plexus, aphthous stomatitis, and tongue swelling, and they may rarely result in fatal airway obstruction . These adverse effects can hinder postoperative oral intake, contributing to electrolyte imbalance and dehydration, thus outweighing the probable benefits of throat packs in the prevention of PONV . Therefore, the use of throat packs has been controversial and a topic of debate. A few questionnaire studies have explicitly documented the magnitude of adverse effects associated with throat packs and the lack of consensus regarding their use, resulting in the overall opinion that throat packs should not be routinely used during nasal, sinus, upper airway, and otolaryngological surgeries . However, throat packs are still widely used during orthognathic surgery. A few recent systematic reviews (SRs) evaluated the need for throat packs in patients undergoing ENT, otolaryngologic, oral, and dental surgery. The authors could not elicit any clinical benefits of pharyngeal packs and advised against their use during ENT, otolaryngologic, and dental surgery . However, these reviews included heterogeneous populations and did not focus on the effectiveness of these packs among patients undergoing orthognathic surgical procedures, which are mainly performed intraorally with a high risk of blood and debris ingestion or aspiration. With this background, we aimed to review the existing literature and pool the estimates of the quality of gastric contents, PONV, and throat pain associated with and without throat packs among patients undergoing orthognathic surgical procedures. 2.1. Obtaining Eligible Studies The SR and meta-analysis (MA) were reported as per the “PRISMA” 2020 guidelines . The protocol for this review was registered with PROSPERO ( CRD42024508844 ). 
Globally recognized databases (“PubMed, Scopus, Embase, CINAHL, and Web of Science”) were electronically searched without date restriction, and potentially eligible studies pertaining to the effectiveness of throat packs, pharyngeal packs, or oropharyngeal packs in oral and dental surgery were included. A combination of free text words and keywords with the help of Boolean operators was used to search the databases—(“oral and maxillofacial surgery”, “throat pack”, “pharyngeal pack”, and “oropharyngeal pack”). The search strategy used for PubMed was ((“oral and maxillofacial surgery”[All Fields] AND ((“pharynx”[MeSH Terms] OR “pharynx”[All Fields] OR “throat”[All Fields]) AND pack[All Fields])) OR ((“pharynx”[MeSH Terms] OR “pharynx”[All Fields] OR “pharyngeal”[All Fields]) AND pack[All Fields])) OR ((“oropharynx”[MeSH Terms] OR “oropharynx”[All Fields] OR “oropharyngeal”[All Fields]) AND pack[All Fields]) (detailed search strategies for other databases are listed in Supporting ). 2.2. Inclusion and Exclusion Criteria The search was restricted to randomized controlled trials (RCTs) published or reported in English. Literature reviews, commentaries, brief communications, case reports, and case series were excluded. The inclusion criteria for the review were (A) RCT design studies, (B) patients undergoing orthognathic surgery under general anesthesia, and (C) studies comparing the sole intervention of application of a throat pack with nonapplication of a throat pack during orthognathic surgery under general anesthesia. The exclusion criteria were (A) literature reviews, case reports, and case series; (B) patients undergoing any other head and neck surgery; (C) studies comparing different types of throat packs or where throat pack application is not the sole intervention; and (D) nonavailability of full-length text in English. No restrictions were placed on outcome measures. The search results were added to Rayyan, a web-based tool for screening titles and abstracts. 
This was performed by two reviewers independently. The included studies were subjected to full-text screening also by two reviewers independently. Discrepancies if any were resolved by a third review author. 2.3. Data Extraction and Synthesis The information that was collected for data extraction included author names, year of publication, age, and sex distribution, type of anesthesia and surgery, type and location of the pack, postoperative quality of gastric contents, PONV, postoperative dysphagia (throat pain) and sore throat, and any other outcome measured. The eligible studies were subjected to quality assessment by two authors independently using the “Cochrane Risk of Bias” assessment tool . 2.4. Statistical Analysis MA was performed using RevMan Ver.5.4.1 (Cochrane Collaboration, 2020). Heterogeneity was assessed using the Q and I 2 statistics. Owing to the smaller number of studies, all the available study-level characteristics were re-examined and evaluated for consistency among these variables. Considering the lack of heterogeneity, the pooled estimates of dichotomous variables (PONV) were calculated using the Mantel–Haenszel method with a fixed-effect model and risk ratios were reported along with a 95% confidence interval. Pooled estimates of continuous variables (postoperative throat pain) were calculated using the inverse variance method with a fixed-effect model and mean difference with 95% confidence intervals was reported as recommended by Schulz et al. . 
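The two fixed-effect pooling methods described above (Mantel–Haenszel for dichotomous outcomes, inverse variance for continuous outcomes) can be sketched as follows. This is a minimal illustration only: the 2x2 counts and per-study effects below are hypothetical placeholders, not data from the included trials, and RevMan's exact zero-cell handling and variance formulas are not reproduced.

```python
# Hedged sketch of fixed-effect meta-analytic pooling (illustrative data only).
import math

def mh_pooled_rr(tables):
    """Mantel–Haenszel fixed-effect pooled risk ratio.
    tables: list of (events_tx, n_tx, events_ctrl, n_ctrl) per study."""
    num = sum(a * n_c / (n_t + n_c) for a, n_t, c, n_c in tables)
    den = sum(c * n_t / (n_t + n_c) for a, n_t, c, n_c in tables)
    return num / den

def iv_pooled_md(studies):
    """Inverse-variance fixed-effect pooled mean difference with 95% CI.
    studies: list of (mean_difference, standard_error) per study."""
    weights = [1 / se ** 2 for _, se in studies]            # weight = 1 / variance
    pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical counts (PONV events with pack, n, events without pack, n):
print(round(mh_pooled_rr([(8, 25, 8, 25), (5, 17, 5, 17)]), 2))  # identical event rates -> 1.0

# Hypothetical per-study mean VAS differences and standard errors:
md, lo, hi = iv_pooled_md([(1.5, 0.9), (1.8, 1.1)])
print(round(md, 2), round(lo, 2), round(hi, 2))
```

Because both pooled estimates are weighted averages, the mean difference necessarily lies between the individual study estimates, and a 95% CI excluding 0 (for a mean difference) or 1 (for a risk ratio) corresponds to significance at the 0.05 level.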
A total of 543 publications were identified from various databases (PubMed [ n = 329], Scopus [ n = 131], Embase [ n = 13], CINAHL [ n = 41], and Web of Science [ n = 29]). One hundred and twenty-four duplicates were removed and 480 articles were subjected to title and abstract screening. Twenty-one publications were eligible for full-text screening, of which one publication was unavailable after communication to the corresponding author . Therefore, 20 publications were included in the full-text screening, and two publications were included for data extraction and risk of bias assessment based on the inclusion criteria . A total of 84 participants were included in the SR–MA. The mean age reported by Faro et al. 
was 29.44 (SD: 8.53) years (range: 18–48) with the majority being females (66%). The mean age reported by Powell et al. was 29.17 (SD: 13.24) years (range: 16–64) with the majority being females (60%). In both studies, patients underwent orthognathic surgery under total intravenous general anesthesia with the placement of an oropharyngeal pack in the test group. The oropharyngeal pack consisted of four saline-soaked gauzes rolled without knots in the study by Faro et al. . On the other hand, Powell et al. used a sterile gauze secured with umbilical tape in their study without explicitly mentioning whether they were dry or soaked in saline or any other fluid. The outcomes measured by Faro et al. were PONV (24 h), sore throat (2 and 24 h), and dysphagia (2 and 24 h), whereas Powell et al. measured the quality of gastric contents before extubation, PONV (2 and 24 h), and throat pain (2 h) . 3.1. Assessment of Risk of Bias Faro et al. used a computer-generated program for randomization, and allocation concealment was performed with sequentially numbered opaque and sealed envelopes. Patients as well as the evaluator were blinded. Powell et al. used odd and even birth years to randomize patients, and only patients were blinded to the assigned intervention, indicating a high risk of bias pertaining to randomization, allocation concealment, and blinding methods (Figures and ). 3.2. Effect of Throat Pack on the Quality of Gastric Contents Only Powell et al. reported the quality of gastric contents. The operating surgeon aspirated gastric contents through a nasogastric tube (NGT) before extubation and classified it as bloody or not bloody. There was no significant difference between the two groups, and the gastric contents were bloody in a majority of the patients (66.7%) in both groups, thus favoring the conclusion that throat packs do not hinder the ingestion of blood or other fluids during orthognathic surgery . 3.3. 
Effect of Throat Pack on PONV Both the included studies evaluated PONV at different time points. Faro et al. used the Kortilla scale to evaluate the incidence of PONV at 24 h whereas Powell et al. evaluated self-reported PONV at 2 h and 24 h after surgery . The MA showed that there was no significant difference in the incidence of PONV between the groups at 2 h ( p -value = 1; RR = 1.00, 95% CI = 0.53–1.88; I 2 = 0%) . Thus, placement of throat packs during orthognathic surgery did not reduce the incidence of PONV. No analysis was performed for PONV at 24 h as only one study reported the finding. Further, Faro et al. also evaluated the incidence of postoperative nausea (PON) and postoperative vomiting (POV) as separate parameters and found that while 34% of patients had PON, only 24% of patients had POV, indicating that nausea was not always followed by vomiting. 3.4. Effect of Throat Pack on Throat Pain Both the included studies recorded throat pain at different time points. Faro et al. recorded the incidence of throat pain at 2 and 24 h and Powell et al. evaluated throat pain at 2 h. Both studies used the visual analogue scale (VAS) . The MA showed that the mean difference of pain was significantly higher among the group with throat pack than the group without throat pack after orthognathic surgery at 2 h with no heterogeneity among the studies ( p value = 0.02; MD = 1.62, 95% CI = 0.23–3.00; I 2 = 0%) . Thus, the placement of throat packs was shown to cause higher postoperative throat pain at 2 h. No analysis was performed for postoperative throat pain at 24 h as only one study reported the finding. 
This SR–MA evaluated the role of throat packs in the prevention of ingestion of blood and PONV during orthognathic surgery, and their effect on postoperative throat pain. Multiple trials reported the role of pharyngeal packs in PONV and postoperative throat pain in nasal, sinus, and upper airway surgery. These studies could not establish the efficacy of throat packs in reducing the frequency of PONV and showed a detrimental effect on throat pain . However, limited studies have evaluated the role of throat packs in minimizing the occurrence of PONV and their impact on throat pain in orthognathic surgery alone. Overall, two studies with 84 participants were included in our MA. Faro et al. evaluated PONV and throat pain with and without throat packs among patients undergoing orthognathic surgery. Powell et al. investigated the possibility of ingestion of blood, PONV, and throat pain with and without throat packs in orthognathic surgery. The risk of bias was low for Faro et al. and moderate to high for Powell et al. . The usefulness of throat packs in preventing blood ingestion can be determined from the quality of aspirated gastric contents postoperatively. Only Powell et al. studied gastric contents after orthognathic surgery to determine the role of throat packs in preventing the entry of blood into the stomach. They placed a 16-Fr NGT into the stomach and aspirated the gastric contents before extubation. 
The gastric contents were bloody in the majority of the cases (66.7%) in both groups, and no significant difference was seen between the two groups. However, the volume of gastric content aspirated was not reported. Postoperative gastric volume after nasal surgery was evaluated by Altun et al. , and they concluded that patients with intraoperative pharyngeal packs had lesser postoperative gastric volume (1.3 mL/kg) as compared to the control group without pharyngeal packs (1.95 mL/kg), and this in turn contributed to lesser PONV and sore throat in these patients. These findings were in agreement with Temel et al. who studied the effect of pharyngeal packing on gastric volume and PONV in patients undergoing nasal surgery. They found that throat packs reduced the postoperative gastric volume and are useful in preventing intraoperative ingestion or aspiration of blood. However, neither of these studies evaluated the quality of the gastric contents, and therefore, no correlation between gastric volume and the quality of gastric contents can be drawn. Among the included studies, diverse methods were used to evaluate PONV. Faro et al. used the Kortilla scale to measure PON, POV, and PONV at 24 h after surgery, while Powell et al. evaluated self-reported PONV at 2 h and 24 h after surgery, and both these studies reported no significant difference in the prevalence of PONV with and without the use of throat packs. Our MA also indicated that there was no significant difference in the incidence of PONV between the groups at 2 h. These findings were in accordance with studies that evaluated the role of throat packs in preventing PONV among patients undergoing nasal, sinus, and upper airway surgery . Several SRs on the efficacy of throat packs among patients who underwent otolaryngological, nasal, and other head and neck surgery reported no beneficial effects in the reduction of PONV . PONV is multifactorial with multiple risk factors playing a significant role in its occurrence. 
The risk factors reported in the literature include female gender, younger age (< 50 years), nonsmoking status, a history of motion sickness or PONV, type of anesthesia, type of surgery, duration of the procedure, intraoperative blood loss, and use of volatile anesthetics and postoperative opioids. The presence of an NGT, anxiety, migraine, perioperative fasting, and body mass index (BMI) are some of the low-risk factors contributing to PONV . The type of surgery performed was similar in both studies, comprising LeFort 1 osteotomy for the maxilla, bilateral sagittal split osteotomy (BSSO) for the mandible, and genioplasty for the chin in various combinations. Silva, O'Ryan, and Poor observed that the number of emetic episodes was the highest in bimaxillary (56.44%) followed by only maxillary (43.65%) and only mandibular (32.75%) orthognathic surgeries. Faro et al. performed bimaxillary surgery with genioplasty (54%), bimaxillary surgery without genioplasty (40%), BSSO with genioplasty (2%), and only BSSO (4%) procedures. Powell et al. performed bimaxillary surgery with genioplasty (13.3%), bimaxillary surgery without genioplasty (66.7%), only LeFort 1 surgery (16.7%), and only BSSO (3.3%) procedures. Thus, the distribution of type of surgery was similar in both the included studies. Inhalational anesthesia with volatile anesthetics has the highest probability of inducing PONV . However, in both the included studies, total intravenous general anesthesia was administered without the use of any volatile anesthetics. According to Gan et al. , one of the strategies to reduce PONV is the use of propofol to induce and maintain anesthesia, which was implemented in both the included studies. According to Silva et al. , longer procedures are associated with a higher occurrence of PONV with procedures lasting > 120 min having the highest prevalence of PONV (54.9%). In the study by Faro et al. , the average duration of surgery was 256.46 ± 74.55 min (range 204–600 min). 
Intraoperative blood loss may also play a role in PONV by disrupting the hemodynamic stability secondary to dehydration and electrolyte imbalance . Bimaxillary surgery results in maximum blood loss due to the extensive vascularity of the maxillofacial region and long operative time . Faro et al. recorded an average blood loss of 588 ± 312.42 mL (range 100–1500 mL) and noted that 75% of the patients who had PONV had a high blood loss volume of more than 500 mL. The use of postoperative opioids is known to increase the occurrence of PONV . Faro et al. advised intravenous morphine (0.1 mg/kg) as a postoperative analgesic whereas Powell et al. advised short-acting opioids (fentanyl) or nonopioids (ketamine) for pain control. According to a study by Claxton et al. , patients who were advised morphine postdischarge had a higher incidence of PONV (59%) as compared to patients who were advised fentanyl (34%). Thus, the administration of postoperative morphine might have been a cause for higher PONV seen by Faro et al. . In both studies, postoperative ondansetron was prescribed as the antiemetic to control PONV. Consequently, we may conclude that throat packs alone did not determine the presence or absence of PONV in these studies, as multiple other confounding factors affected this parameter. Also, there was significant bias as one of the included studies had insufficient information regarding most variables. Various studies that evaluated the effect of throat packs in nasal, sinus, and otolaryngological surgery reported the incidence of increased postoperative throat pain and dysphagia . However, Tay et al. found no significant difference in throat pain when throat packs were placed or not placed during routine oral surgery. Arta et al. also noted similar findings and recommended the use of pharyngeal packs during rhinoplasty. 
Our MA showed that the mean difference in pain was markedly higher among the group with throat packs than the group without throat packs after orthognathic surgery at 2 h, with no heterogeneity among the studies. Several factors may contribute to throat pain with intraoperative throat packs. The packing material may itself cause injury to the mucosa, especially in longer surgeries, and result in throat pain . Additional factors are the type and position of throat packs. Vural et al. demonstrated that chlorhexidine gluconate 0.2% and benzydamine hydrochloride 0.15% (CGBH)-soaked pharyngeal packs elicited less postoperative throat pain than saline-soaked pharyngeal packs during orthognathic surgery, and an SR concluded the same . On the contrary, Meco et al. evaluated the efficacy of dry pharyngeal packs, water-soaked packs, CGBH-soaked packs, and no pharyngeal packs during sinonasal surgery. They found no significant difference in terms of postoperative sore throat. In our review, Faro et al. used four saline-soaked gauzes and Powell et al. used sterile gauze without explicitly mentioning whether it was dry or wet. However, in both studies, patients with throat packs had more throat pain than those without throat packs. Similar findings were reported in several other studies based on nasal and upper airway surgery . The position of the pack in the oropharynx, nasopharynx, or hypopharynx may also significantly affect postoperative dysphagia. Rizvi et al. reported more dysphagia with oropharyngeal packing compared to nasopharyngeal packing. In both the studies included in our review, the authors have employed oropharyngeal packing, which could have contributed to higher pain scores. However, orthognathic surgeries are purely oral procedures with a high risk of blood and foreign body ingestion or aspiration, despite suctioning. Therefore, packing the oropharynx is justified in orthognathic surgeries. 
The position of throat packs can be a potential confounder which needs to be considered during the evaluation of throat pain secondary to throat packs. The position of throat packs may not be the same across all head and neck surgeries, due to which outcomes like gastric contents, PONV, and postoperative throat pain and discomfort may vary. Due to the above factors, it is not meaningful to pool estimates of gastric contents, PONV, and postoperative pain and discomfort among patients undergoing all types of head and neck surgeries. Several drawbacks of throat packs include mucosal injury, painful oral ulcers, injury to the pharyngeal plexus, edema of the tongue, pack retention, and in rare cases, fatal airway obstruction . These reports, along with the finding that throat packs do not reduce the occurrence of PONV, and in fact, lead to increased postoperative throat pain, have led to the consensus that throat packs should not be routinely used during sinus, nasal, and upper airway surgery, which has been corroborated by various SRs . These SRs and meta-analyses have provided useful evidence against the placement of throat packs in all types of head and neck surgery and have concluded that recent advances such as endoscopic techniques and hypotensive anesthesia have reduced the amount of intraoperative bleeding and hence circumvent the need for throat packs . However, these reviews included a wide variety of surgical procedures. In addition, these reviews did not consider the quality of gastric contents. Few authors have recommended the application of throat packs when the risk of aspiration is high . Orthognathic surgery comprises a major bulk of oral and maxillofacial surgical procedures and entails a high risk of foreign body aspiration due to the use of mini-plates, screws, and wires of small dimensions and is hence very different from other head and neck surgeries. 
Thus, in consideration of the above findings from the existing literature, we believe that future studies are required to evaluate the usefulness of throat packs in preventing foreign body aspiration, their effect on the volume and quality of gastric contents, their role in preventing PONV, and their influence on postoperative throat pain among patients undergoing orthognathic surgery. Also, trials need to consider and report findings related to the other potential risk factors of PONV, such as gender and age, type of surgery and anesthesia, position and type of throat pack, duration of the surgical procedure, intraoperative blood loss volume, and the use of postoperative opioids. This will help to lay down guidelines regarding the overall role of throat packs during orthognathic surgical procedures. The strength of the current review is the inclusion of patients chiefly undergoing orthognathic surgery and the inclusion of the quality of gastric contents as a determinant of throat pack effectiveness. Key limitations of the current review are the limited number of studies and the inclusion of studies published in the English language only. In conclusion, the current review provides no evidence in favor of throat packs during orthognathic surgical procedures. The role of throat packs in preventing blood ingestion is questionable due to the limited number of studies. Throat packs play no significant role in preventing PONV, and they increase postoperative throat pain. We have, however, identified research gaps and believe that further studies are required to evaluate the true effectiveness of throat packs, especially pertaining to blood loss volume as a risk factor for PONV, the correlation between gastric volume and gastric contents to truly understand the barrier effect of throat packs, and their role in the prevention of foreign body lodgment or aspiration during orthognathic surgical procedures.
Public health communication: Attitudes, experiences, and lessons learned from users of a COVID-19 digital triage tool for children
ae95746c-d970-4100-b85b-010c91c447c2
9376382
Health Communication[mh]
In February 2020, the first cases of COVID-19 were confirmed in Switzerland, forcing the country into a lockdown, which included a temporary closure of day care and primary schools. Guided by the directive “Bleiben Sie zuhause/Stay at home,” people were asked to work from home, avoid public transport, and limit social contacts . With many parents and children suddenly at home and an unknown virus lurking, public health communication became critical. The way public health messages are communicated has a bearing on the acceptance and implementation of official guidelines and recommendations, as well as general societal reassurance and family wellbeing . Uncertainty about the virus and about recommendations, together with the disruption and postponement of school and social events, seems to elevate stress levels and impact mental health in children . Consequently, mental health problems grew during lockdowns, especially among children and adolescents . Public health recommendations and measures to prevent infections had to be adapted to the evolving knowledge and virus . Health authorities scheduled regular communication briefings to inform and reassure the public. The way information is conveyed shapes public trust both negatively and positively. According to the literature, adherence and attitudes to public health recommendations in Switzerland differ along cultural, geographical, and socioeconomic lines, highlighting the importance of targeted communication . Switzerland is governed by a federal system; the majority of public health and education policies fall under the responsibility of the 26 cantons with a strong focus on individual responsibility. During some phases of the pandemic, decision-making was centralized through a pandemic emergency law, and key public health recommendations were issued by the Federal Office of Public Health (FOPH), including a pediatric COVID-19 testing strategy. The Swiss FOPH issued regular media releases throughout the pandemic . 
The primary objective of communication in public health crises is to influence behavior so as to reduce the duration and impact of the crisis, e.g., the COVID-19 pandemic. Public health communication depends on trust. The message needs to be clear on the urgency and on the practical behavior recommended, and specifically packaged to meet the varying needs of different groups in society. Telehealth became an essential component of the Swiss healthcare system as a result of the pandemic. The introduction of online forward triage tools (OFTTs) for healthcare and public health communication has created a new and potentially scalable public health communication channel. This channel has the potential to reach large numbers of people, irrespective of the time of tool use and the location of the user. OFTTs have been developed to communicate public health recommendations regarding testing and isolation/quarantine, as well as advice regarding accessing healthcare services and school or day-care attendance. Unique attributes of a child-specific OFTT are the different levels and recipients of the recommendation, ranging from the affected patients, the children, their families, and caregivers to a population-wide audience. The purpose of this study was to explore attitudes, experiences, and challenges faced by Swiss OFTT users in regard to public health recommendations given by a child-specific COVID-19 OFTT (pandemic context). Context and intervention The Department of Emergency Telemedicine of the University of Bern, together with Paediatric Emergency Medicine and the Department of Paediatrics, Inselspital, Bern University Hospital, University of Bern, Switzerland, developed a child-specific COVID-19 OFTT, www.coronabambini.ch .
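As background, the core of an OFTT of this kind is a deterministic question-and-rule flow that maps a caregiver's answers to a public health recommendation. The sketch below is purely illustrative: the input fields, rules, and recommendation texts are hypothetical placeholders and are not the actual decision logic of www.coronabambini.ch, which has been published elsewhere.

```python
# Minimal, HYPOTHETICAL sketch of a rule-based forward-triage flow.
# None of these rules or texts reproduce the real coronabambini.ch logic.
from dataclasses import dataclass


@dataclass
class ChildAssessment:
    has_symptoms: bool          # e.g., fever or cough reported by the caregiver
    close_contact: bool         # known contact with a confirmed case
    high_risk_household: bool   # lives with a person considered at high risk


def triage(a: ChildAssessment) -> str:
    """Map caregiver answers to a single (hypothetical) recommendation."""
    if a.has_symptoms and a.close_contact:
        return "test and isolate until result"
    if a.has_symptoms:
        return "stay home from school/day care; consider testing"
    if a.close_contact:
        return "quarantine per cantonal guidance"
    if a.high_risk_household:
        return "attendance possible; follow protective measures at home"
    return "school/day-care attendance possible"
```

The point of the sketch is only that an OFTT encodes recommendations as explicit, repeatable rules reaching users at scale; the real tool's pathways, wording, and outputs differ.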
The goal of the OFTT was to provide parents and guardians with public health recommendations with regard to testing, isolation, quarantine, school or day-care attendance, and when to seek healthcare. Details on the development and structure of the OFTT, including quantitative data on its usage, have been published elsewhere. Study design An exploratory qualitative study design was utilized. The overarching aim of this study was to assess the utility of the child-specific COVID-19 OFTT, www.coronabambini.ch , as well as to elicit recommendations to improve future OFTTs. This study aimed to explore the attitudes, experiences, and challenges of Swiss OFTT users and their families with regard to the communicated public health recommendations of www.coronabambini.ch . We explored the following questions: Central question What was your lived experience of COVID-19 with respect to testing, quarantine, and public health communications? Sub-questions What made you follow or disregard the recommendations given by the OFTT? What is your understanding of the following terms: socioeconomic status, infectiousness of children, and high-risk person? How did these experiences (testing, quarantine, and public health communications) influence your decisions to follow or not follow the OFTT recommendations? Sampling and sample size We adopted a purposeful and quota sampling approach, with key informants ( n = 20) selected from a population of parents, teachers, and guardians, including persons with dual roles as a parent and healthcare/school professional, who had used www.coronabambini.ch and consented to the study. In qualitative research, saturation guides sample size, and we aimed for both rich narratives and thematic saturation. This was reached by the 12th key informant. Data collection Video rather than face-to-face interviews were held in view of the public health measures in place at the time of the interviews. An interview guide was used and adapted iteratively.
Two researchers were present in each session and asked the questions in turn. The interviews lasted 45–55 min. Interviews were recorded and transcribed verbatim. Data analysis An inductive and deductive approach to data analysis was utilized. Transcribed interviews were coded thematically (guided by a framework derived from a previous OFTT evaluation under review) while remaining open to new themes. Data management and analysis were performed with the aid of MAXQDA2020 (VERBI Software, Berlin). Narratives on the participants' lived experience of testing, quarantine, and public health communication were elicited. Measures to ensure the trustworthiness of the data Data collection and analysis were performed iteratively, continuously adapting the interview guide to ensure dependability. The two qualitative researchers debriefed at the end of each interview and kept reflexive journals. To ensure transferability, a thick description of the participants, context, and data collection process has been outlined. Ethics approval The evaluation of OFTT use was deemed a quality evaluation study by the ethics committee of the Canton of Bern, Switzerland (req-2020-01179). The need for a full ethical review was waived on 21 October 2020.
The following themes emerged: (1) definition and expectations of high-risk persons, (2) quarantine instructions and challenges, (3) blurred division of responsibility between authorities and parents, (4) a novel condition and the evolution of knowledge, (5) definition and implications of socioeconomic status, and (6) new normal and societal divisions. Theme 1 High-risk person definition and expectations Several public health recommendations were formulated to protect individuals at high risk of severe infection from COVID-19. Users were challenged in understanding this definition and its practical implications. Who is a person at high risk? What was expected from those that fell into this category? How did this category shift over time? Most key informants cited age, 65 years and older, and those with chronic conditions such as hypertension and diabetes, as the intended high-risk groups. Others were more than irritated by the high-risk label, as revealed below: “ I heard from BAG that auto-immune disease is a high-risk factor and was scared to death.
I feared that both my husband and I (who have the same condition) were in mortal danger. I was constantly in fear that the kids would bring the virus home. Eventually auto-immune treatment was adapted for COVID patients-giving us the necessary encouragement. I was angry that – even though many factors were considered high risk-nothing was done to ensure the safety of these people. The only focus was on old people and not on me or us?” [Key Informant 17] “ My husband and I are both slightly overweight - so-called “high risk”. Should we stop caring for our kids during the pandemic? ” [Key Informant 15] “ Day-care (KITA) workers do not disclose their status if they are at high risk. So, we took the kid out of KITA during the first wave – and that was our way of coping with the fear .” [Key Informant 14] “ I do think that the state has acted reasonably. I feel perhaps more could have been done, especially for people in high-risk situations .” [Key Informant 11] The family construct, grandparents, meaning, and roles Intergenerational responsibility is still embedded in society. Many families in Switzerland rely on grandparent support in childcare. The public health recommendations had substantial implications for families. It was a difficult task to balance family constructs with protecting family members in a situation of evolving information. There is a link between the high-risk definition discussed earlier and the role of grandparents, most of whom automatically fell into the over-65 age group. With social distancing and the closure of many public spaces, the role of the family became even more pronounced. The advent of the COVID-19 vaccines brought hope, as grandparents were vaccinated and could resume their roles in the family. “ My parents have not regularly, but still relatively often looked after the children, which we actually then stopped with the outbreak of the pandemic.
My parents are both over 70, and the whole information situation made it difficult, because there was a very long discussion about the role of children in spreading the disease. We certainly were rather cautious to protect our parents. And only then, over time, did we occasionally allow them to look after the children wearing masks. And now they have both been vaccinated twice.” [Key Informant 14] “ People were pretty careful not to associate with people whose children didn't go to school together. That was a big issue. Also, we kept seeing the grandparents through the windows, but we didn't visit them anymore .” [Key Informant 1] “ Old people's homes, the grandparents and elderly, the situation and settings need revisiting-the human being is a social being-isolation dehumanizes people. There are issues of self-protection vs protection of others and individual autonomy vs communal responsibility.” [Key Informant 11] Theme 2 Quarantine instructions and challenges How does a family's housing situation affect people's experience of isolating, homeschooling, and working from home? Challenges experienced with quarantine recommendations varied widely, ranging from space constraints and working from home to quarantining with small children, health issues, and homeschooling, as revealed below: “ I have seen many children's weight curves go up. The children moved less and were often at home. And also, the school situation I think was difficult for a lot of kids. For kids with latent hyperactivity, home-schooling didn't go well at all. I had several patients who were not doing well at all .” [Key Informant 20] “ I am not a teacher, nor do I want to be one. Now I have to teach my 3 children, attending different grades, algebra to history? I am overwhelmed .” [Key Informant 1] Most key informants revealed how they let their guard down as the pandemic lingered, a phenomenon known as pandemic fatigue.
Below is what was said by some participants: “ First quarantines are viewed as adventure but subsequent ones are viewed negatively and people try to avoid them.” [Key Informant 10] “ In the beginning it was bad. I also wore a mask when the child was sick because I didn't know what to do... but we are in the fortunate position that we have two floors. That's when we said the sick child is upstairs and in the room with me and my husband is in the basement. We also tried not to have dinner together. I have to say, that was at the very beginning, during the first lockdown. During the second lockdown, it was wintertime and you had a cold and you had a cough. Then it was no longer taken so, I can say seriously [laughs]? That maybe a wrong word. But the first time there was really just the uncertainty and we tried hard, with masks, with separate bedrooms. No hugs, no kisses.” [Key Informant 1] Theme 3 Blurred responsibility between authorities and parents Informants revealed challenges in following the recommendations of the OFTT based on national testing recommendations, citing blurred responsibility between different authorities and between parents and authorities. Who is responsible for the wellbeing of children, the parents, or the authorities? If the authorities have a say, which authority directives should parents follow? The complexities of this issue are revealed below: “ Honestly. The school authorities say follow the canton's lead. The canton says they basically recommend this and that, and then each school develops its own concept. What do I do then as a mother? You are faced with a situation where everyone says it's the other person's fault.” [Key Informant 15] “ One cannot take kids out of school; it is not allowed. 
As a parent you feel helpless. At least they could offer online schools as an option if parents want to keep their kids safe .” [Key Informant 2] It is of interest that the problem is not only the communication gap but rather perceived shame or secrecy around COVID-19 cases, as revealed below: “ Schools did not share the information of how many kids tested positive in their school freely. There was perhaps a sense of shame and guilt?” [Key Informant 2] Theme 4 The evolution of knowledge about the virus Evolving understanding of the pandemic and related uncertainty brought about fear, as revealed below: “ Then it was said again and again, no, they [children] are not a risk, etc., then it was said that they were. It is a very difficult situation. The logistics and everything, the work, there's so much involved that it was sometimes really hard to decide what to do.” [Key Informant 19] Theme 5 Implications of socioeconomic status and challenges around its definition Socioeconomic status seemed to play a role in the pandemic, though some participants had little understanding of the term and concept. “ Socio-economic status-what is that? What does that word mean?” [Key Informant 7] “ That question disturbed me. I felt my privacy invaded and so did not answer the question. Explain why you want to know this and guarantee that the data will not be shared with third parties.” [Key Informant 11] Who can afford to test, isolate, and wait for results or quarantine at home in case of a positive test result?
Gardens became a socioeconomic status symbol that was well-appreciated during the pandemic, as revealed below: “ We live in a house with a garden, luckily [laughs], especially during the lockdown that was great.” [Key Informant 14] “ We were privileged and fortunate that we were both able to work from home for the most part, and that we could share the child care, and that our children were not so young anymore .” [Key Informant 11] Theme 6 New normal and societal divisions Existing societal divisions became widened by the pandemic, and mistrust fuelled by fear grew, as revealed below: “ In small villages, rumours abound. One father came to me and said that the neighbours went away for holiday and brought the infection and now I have the disease. My kid got it from their kid upon returning from holidays and brought it home to me. Needless to say that their father had not tested their child to confirm who brought the virus home. When I asked him why he did not test the child, he said that he was scared of exposing the child to the painful and potentially dangerous test but was anyway convinced of where it came from.” [Key Informant 11] “ What was known as a light flu before 2020, became a scary thing after 2020, and symptoms like a slight sore throat or runny nose almost meant isolation -literally and physically.” [Key Informant 11]
Our study sheds light on the complex decision-making process that confronts OFTT users, which goes beyond simple tool recommendations. The following themes emerged: (i) definition and expectations of high-risk persons, (ii) quarantine instructions and challenges, (iii) blurred responsibility between authorities and parents, (iv) a novel condition and the evolution of knowledge, (v) definition and implications of socioeconomic status, and (vi) new normal and societal divisions. High-risk person definition and expectations As everyone aged 65 years and older was regarded as being at high risk of suffering severe COVID-19 disease, a significant portion of the Swiss population was thought to require public health interventions, and measures to protect this group were put in place. Our study revealed a perceived lack of information about what to do if you are a parent and at high risk. Living with a person who was considered high risk, a partner, or a child, generated significant uncertainties and challenges within families. Many Swiss families rely on grandparents who take care of their grandchildren on a regular basis. The high-risk assignation of those aged 65 years and above brought an additional dynamic into many families.
With instructions to protect grandparents, many families had to reorganize childcare, and the childcare system in Switzerland is, needless to say, quite expensive. In line with our findings, many grandparents felt disenfranchised, losing their usefulness and their role in the family in taking care of grandchildren. For them, the experience was described as both isolating and disorienting, as many grandparents find purpose and meaning in caring for grandchildren. Significant social isolation-induced psychological effects have been reported in this group. In concurrence with our findings, socioeconomic status, particularly finances, influences perceived stress levels. With a lack of grandparent support, additional childcare costs, and fear of losing one's job, testing and quarantine recommendations became challenging and difficult to follow, in concurrence with other findings. The effects on OFTT utility, testing, and forward transmission of the virus need to be considered and explored further. Socioeconomic status, quarantine instructions, and challenges The quarantine experience was perceived very differently by the key informants in our study. Some described it as stressful or even enriching, depending on the family environment, living space, and profession. The first quarantine experiences were described by many as adventurous, with more family time. However, some families experienced serial quarantines, disruptions in social life, and having to reorganize work and family life, which resulted in testing and quarantine fatigue. Homeschooling emerged as another major challenge that families faced.
The factors described in this study were also reported elsewhere to be important in the decision to test or quarantine, and thus may have a direct influence on the course of the pandemic and the effect of public health measures. Socioeconomic status shaped everyday life during the pandemic. Whether a family lived in a house with a garden or a small apartment in the city emerged as an important factor influencing decisions to follow or disregard public health recommendations, highlighting the link between living arrangements and socioeconomic status. Links between socioeconomic status, living space, homeschooling, working from home, and stress levels have also been reported elsewhere. Access to outdoor spaces needs to be considered in pandemic settings. The very term “socioeconomic status” was not understood by some key informants, while some felt uncomfortable divulging their status (privacy). Other ways of assessing this in research are needed. The evolution of knowledge and societal divisions One problem for many parents was assessing how infectious their children were. Many decisions were made around this question, and the novel nature of the disease made it difficult for both parents and OFTT providers. Communication on this issue influenced many decisions, from testing to sending a child to school, visiting grandparents, or allowing grandparents to take care of their grandchildren. In line with our findings, failure to engage the community and inadequate health risk communication can render the best public health strategies ineffective. The lack of clear information about what to do, accompanied by fear of contracting or passing on the disease, brought about societal divisions, the young vs. the old, and intrafamilial and interfamilial conflict. In support of our findings, fear, a parallel pandemic, has been reported elsewhere.
Blurred responsibility between authorities and parents Switzerland is a country that places high value on individual responsibility and has decentralized decision-making across its 26 cantons, especially involving the healthcare sector. The pandemic put the notions of decentralized decision-making and individual responsibility to the test. Policies on school attendance, test criteria, school testing, or the wearing of masks varied across cantons. Attending school is compulsory in Switzerland; parents therefore cannot keep their children out of school. Some parents were left helpless by the feeling of being responsible for their children, and sometimes by fear of contracting COVID-19, on the one hand, while being legally required to send the children to school on the other. Blurred responsibilities between parents and authorities further caused significant uncertainties about how to proceed when a child fell sick. The novel, evolving virus, changing knowledge about it, changing guidelines, and sometimes divergent recommendations made by authorities and schools made the situation difficult for parents: Who do I follow? The federal government, canton, or school authorities? In support of our findings, clarity on the roles and responsibilities of different authorities in pandemic settings is called for. Should authorities override the parental responsibility to protect their own children, possibly by keeping them at home when they fear a novel infection with unknown health consequences? Some parents highlighted the fact that the high-risk status of teachers and childcare personnel is not known, which further complicates the matter. Should teachers and childcare personnel disclose their status to parents in pandemic settings? Such questions need to be explored in the future.
Interconnectedness-systems thinking Our study revealed the interconnectedness between public health communications and other factors, ranging from high-risk status and expectations, socioeconomic status, and medical and practical decisions to decisions to test, quarantine, or send a child to school. The decision to follow or disregard OFTT recommendations is influenced by these broader issues. The interconnectedness revealed in our study highlights the need for systems thinking in public health communication, since a policy can have both intended and unintended effects. As everyone aged 65 years and above was regarded as being at high risk of suffering severe COVID-19 disease, a significant portion of the Swiss population was thought to require public health interventions, and measures to protect this group were put in place. Our study revealed a perceived lack of information about what to do if you are a parent and at high risk. Living with a person who was considered high risk, whether a partner or a child, generated significant uncertainties and challenges within families. Many Swiss families rely on grandparents who take care of their grandchildren on a regular basis. The high-risk assignation of those aged 65 years and above brought an additional dynamic into many families. With instructions to protect grandparents, many families had to reorganize childcare, and the childcare system in Switzerland is quite expensive. In line with our findings, many grandparents felt disenfranchised, deprived of their usefulness and their role in the family in taking care of grandchildren. For them, the experience was described as both isolating and disorienting, as many grandparents find purpose and meaning in caring for grandchildren. Significant social isolation-induced psychological effects have been reported in this group. In concurrence with our findings, socioeconomic status, particularly finances, influences perceived stress levels.
With grandparents out of the childcare equation, childcare became an additional economic strain over and above the many stressors of life in general and the pandemic in particular. With the lack of grandparent support, additional childcare costs, and fear of losing one's job, testing and quarantine recommendations became challenging and difficult to follow, in concurrence with other findings. The effects on OFTT utility, testing, and forward transmission of the virus need to be considered and explored further. The quarantine experience was perceived very differently by the key informants in our study. Some described it as stressful or even enriching, depending on the family environment, living space, and profession. The first quarantine experiences were described by many as adventurous, with more family time. However, some families experienced serial quarantines, disruptions in social life, and having to reorganize work and family life, which resulted in testing and quarantine fatigue. Homeschooling emerged as another major challenge that families faced.
Public health communication in a pandemic setting emerged as both critical and challenging. In line with our findings, it is imperative to note that in crises, communication that focuses more on information provision than on practical behavior may lead to a public that knows what is happening without understanding what action to take, leading to non-adherence behaviors. Many key informants who used the child-specific COVID-19 OFTT revealed that the decision to follow or disregard recommendations is complex and multifaceted. We learned the following lessons, which could be of use in future COVID-19 telehealth interventions: more effective communication strategies are called for in times of public health crises.
Communication about high-risk groups and the use of high-risk labels in OFTTs and in public health in general ought to be accompanied by clear instructions and measures to protect all identified groups and thus to prevent stigma. What role can digital health play? Can technological mediation get this right? We recommend evaluative research on digital interventions to further explore this issue. Public health communication and recommendations issued during the COVID-19 pandemic were cited as having an effect on family constructs, roles, and practical and medical decision-making. These effects on individuals and families need to be considered in present and future telehealth interventions (OFTTs). Isolation and quarantine emerged as a challenge for specific population groups with limited living space and limited access to outdoor spaces such as gardens. This aspect needs to be considered in future pandemics. Perceived blurred lines of responsibility between parents and school authorities, the federal government, and the cantons were cited as challenges in our study and seem to affect individual and family decision-making. This calls for further discussion. Policies can have both intended and unintended consequences: attempts to protect older people can have unintended consequences for the older people themselves and for the family in general. The multiple factors presented above reportedly influenced individual and family decision-making, attitudes toward testing or not testing, and quarantine experiences. These findings demonstrate and highlight the need for systems thinking in public health communication. Strengths and limitations Our tool was one of the first child-specific COVID-19 OFTTs set up in Switzerland.
The insights gained in this study can help inform other telehealth departments setting up OFTTs for public health communication, both now and in future pandemic settings, and can also inform healthcare authorities in their efforts to further improve public health policy communication. Our study findings have the limitation that they derive from a specific health setting in Switzerland, and our key informants are users of a specific health communication tool, our child-specific COVID-19 OFTT. Transferability to other societies, pandemics, and settings might be limited. Considering the evolving virus, rapidly changing circumstances, and different interest groups, public health communication becomes both an art and a science, even more so when using a new technological communication channel, an OFTT. Policies can have both intended and unintended consequences. Public health communication strategies need to be continuously evaluated to ensure that intended messages are understood by the public. Our study findings therefore highlight the need for systems thinking in public health communication, especially in the pediatric context, as policies affect family and societal structures.
The original contributions presented in the study are included in the article/ , further inquiries can be directed to the corresponding author/s. Ethics review and approval/written informed consent was not required as per local legislation and institutional requirements. JM, AM, CS, KK, and TS were involved in the study design and data collection. JM and JR carried out the qualitative data analysis and wrote the first draft. JM, JR, AM, CS, NT, CA, KK, and TS contributed to further drafts. All authors contributed and approved the final draft. This study was partly cofinanced by the Federal Office of Public Health, Switzerland, and the Swiss National Science Foundation (196615). Emergency telemedicine at the University of Bern, Switzerland was supported by the Touring Club Switzerland through a foundational professorship to TS. The funder has no influence on the content of the manuscript or the decision to publish it. TS holds the endowed professorship for emergency telemedicine at the University of Bern, Switzerland. The funder, Touring Club Switzerland, has no influence on the research performed, the content of any manuscript, or any decision to publish. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Health related quality of life and associated factors after cesarean delivery among postpartum mothers in Gondar, Ethiopia: a cross-sectional study
5186fbbc-7399-41a8-84df-2b3e13c56644
11948938
Surgery[mh]
The postpartum period is the time after childbirth when a mother's body recovers and adjusts physically and emotionally. It is traditionally defined as six weeks; the American College of Obstetricians and Gynecologists extends it to 12 weeks, calling it the "fourth trimester". It is further categorized into acute (24 h), early (7 days), and late (6 weeks to 6 months) phases, highlighting its critical impact on maternal health. This study focuses on the first six weeks of the postpartum period, as most pregnancy-related changes return to normal within this time frame. According to the World Health Organization (WHO), quality of life is an individual's perception of their position in life, considering cultural context, goals, expectations, and concerns. Evidence shows that health-related quality of life (HRQoL) is a strong indicator of maternal care, influenced by physical, psychological, relational, and social health. HRQoL for pregnant and postpartum mothers has frequently been measured by generic, validated, multidimensional tools such as the SF-36 and SF-12 of the Medical Outcomes Study (MOS), followed by the WHO Quality of Life Scale-Bref (WHOQoL-Bref) and the Mother Generated Index (MGI), respectively. The SF-34 PREG, an adapted version of the SF-36 for pregnancy, offers high reliability and relevance for assessing HRQoL in pregnant women, though it omits the social functioning dimension. Another tool, the QOL-GRAV scale, was developed to measure quality of life during pregnancy, showing strong reliability and validity against the WHOQoL-Bref. However, this study used the validated MOS SF-36 tool, which has also been validated in an Amharic version and is widely applicable for measuring HRQoL in various groups in Ethiopia. Cesarean delivery (CD), a surgical method of childbirth involving an incision in the uterus, has seen a rise in maternal requests, pushing rates beyond the WHO-recommended 10–15%.
In Ethiopia, a systematic review and meta-analysis reported an overall CD prevalence of 29.55%, exceeding the ideal rate. Women who request CD due to fear of labor or negative vaginal delivery (VD) experiences account for 42% of all CDs. However, severe maternal morbidity is reported to be higher after CD than after VD. CD has several side effects, such as discomfort, blood loss, weakness, restricted movement, difficulty in daily activities, and a visible scar that persists long after the surgery. Moreover, a study showed that exposure to repeated cesarean sections had an impact on the physical and mental domains of HRQoL. Women's pain, physical functioning, and overall well-being depend on the complications associated with each mode of birth, emphasizing the need for comprehensive counseling to help them make informed decisions regarding delivery mode. The postpartum period is a vulnerable and critical time characterized by several physiological, psychological, and social changes. Studies have shown that 50% of postpartum maternal deaths happen within the first 24 h after delivery, whereas the majority of maternal and newborn deaths happen within the first four weeks following delivery. Common postpartum complications include urinary and fecal incontinence, infections, anemia, wound healing issues, headaches, backaches, exhaustion, and depression. Surgery and anesthesia complications, such as delayed recovery, physical limitations, and pain, can negatively affect the mental and emotional aspects of HRQoL, leading to poor overall HRQoL in postpartum mothers. To reduce these risks, pregnant women should be fully informed and encouraged to opt for vaginal delivery unless CD is medically indicated, supported by appropriate policies and guidelines. Studies revealed that the physical and mental components of HRQoL were more compromised in postpartum mothers who delivered by CD than in those who delivered vaginally. Postpartum women also experience more depressive and pain symptoms after CD.
This might have a negative effect on their HRQoL during their next pregnancy and the postpartum period, as well as on their attitude toward parenthood and the well-being of the family. Despite the high risk of maternal and newborn deaths and the rising CD rates, the WHO highlights the postpartum period as one of the most neglected phases in maternal health. Studies in Ethiopia reported a mean HRQoL score below 50, with 63.2% of mothers experiencing low HRQoL, whether they delivered vaginally or by cesarean section. Most studies assess postpartum HRQoL without distinguishing delivery modes, limiting insights into CD-specific effects and often overlooking key recovery factors, particularly in the early postpartum period. Therefore, this study aimed to assess the HRQoL of postpartum mothers following CD and identify the factors associated with it during the first six weeks postpartum using the MOS SF-36 tool. Study design and setting This study used an institution-based cross-sectional design conducted from April to June 2024 at public health facilities in Gondar town, Northwest Ethiopia. Gondar, the capital city of the Central Gondar administrative zone, is served by eight health centers and two hospitals providing immunization services. One hospital, the University of Gondar Comprehensive Specialized Hospital, and four health centers (Poly Health Center, Gabriel Health Center, Maraki Health Center, and Azezo Health Center) were randomly selected using a lottery method, taking into account the feasibility of the study. Study population The source population included all postpartum mothers who delivered via cesarean section in Gondar town. The study population consisted of mothers attending child immunization at public health facilities six weeks post-delivery. Inclusion and exclusion criteria The study included postpartum mothers aged 18 years and older who gave birth in Gondar town.
Exclusion criteria encompassed women with physical disabilities (such as spinal cord injuries, amputations, paralysis, or limb deformities), those with preexisting chronic illnesses (including chronic hypertension, cardiovascular and pulmonary disorders, and neuropsychiatric conditions), and individuals who were unable to communicate or comprehend the study requirements. Variables of the study Independent variables were categorized into socio-demographic factors (age, BMI, marital status, education levels of both the mother and partner, occupation of both, family support, and number of living children) and obstetric-related and clinical factors (parity, antenatal care visits, birth order history, preterm labor, pregnancy desirability, postnatal care visits, pregnancy complications, postpartum anemia, HIV status, urinary incontinence, and postpartum depression). Additionally, anesthesia- and surgery-related variables include urgency of the surgery, number of cesarean deliveries, type of anesthesia, type of postoperative analgesia, delivery complications, perceived pain after discharge, postoperative nausea and vomiting, and shivering. Operational definition The MOS SF-36 contains eight domains comprising two main categories, namely physical and mental HRQoL. Each raw scale score on each domain was transformed to a 0–100 scale using the formula: $$\text{Transformed scale} = \frac{\text{actual raw score} - \text{lowest possible raw score}}{\text{possible raw score range}} \times 100$$ The Physical Component Summary (PCS) mean score of HRQoL is the arithmetic average of the transformed scores of the physical functioning, role physical, bodily pain, and general health domains. The Mental Component Summary (MCS) mean score of HRQoL is the arithmetic average of the transformed scores of the social functioning, mental health, role emotional, and vitality domains. The overall HRQoL mean score is the arithmetic average of the transformed scores of the eight domains.
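The scoring logic described above can be sketched in a few lines of Python. This is an illustrative sketch, not the official SF-36 scoring code: the respondent's domain scores below are hypothetical, and the function and variable names are ours.

```python
def transform(raw, lowest, possible_range):
    """Transform a raw domain score to the 0-100 scale, per the formula above."""
    return (raw - lowest) / possible_range * 100

# Hypothetical transformed domain scores (0-100) for one respondent
domains = {
    "physical_functioning": 75.0, "role_physical": 50.0,
    "bodily_pain": 60.0, "general_health": 55.0,
    "social_functioning": 70.0, "mental_health": 65.0,
    "role_emotional": 66.7, "vitality": 45.0,
}

PHYSICAL = ("physical_functioning", "role_physical", "bodily_pain", "general_health")
MENTAL = ("social_functioning", "mental_health", "role_emotional", "vitality")

pcs = sum(domains[d] for d in PHYSICAL) / 4       # Physical Component Summary
mcs = sum(domains[d] for d in MENTAL) / 4         # Mental Component Summary
overall = sum(domains.values()) / 8               # overall HRQoL mean
category = "higher" if overall >= 50 else "lower" # standardized cut-off of 50
```

For this hypothetical respondent, the PCS is 60.0 and the overall mean falls in the "higher HRQoL" category under the cut-off of 50 used in the study.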
Higher HRQoL is when participants scored greater than or equal to the standardized mean value of 50. Lower HRQoL is when participants scored less than the standardized mean value of 50. Postpartum depression was assessed using the Edinburgh Postnatal Depression Scale (EPDS). According to the EPDS, study participants who scored ≥ 13 are considered as having postpartum depression; the scale has been validated in Addis Ababa, Ethiopia. The family support domain of the Multidimensional Scale of Perceived Social Support was utilized to assess the extent of perceived social support from families. According to the scale, a mean score from 1 to 2.9 is considered poor support, 3.0 to 5.0 moderate support, and 5.1 to 7 strong support. Postpartum anemia is defined as hemoglobin < 10 g/dl or hematocrit < 30% during the postpartum period. Sample size and sampling procedure The sample size for this study was calculated using the single population proportion formula, assuming a prevalence of 50% due to the absence of prior studies in Ethiopia assessing HRQoL among postpartum mothers following CD. The formula used was $$n = \frac{(Z_{\alpha/2})^{2}\, p\,(1-p)}{d^{2}}$$ where n = initial estimated sample size, Zα/2 = 1.96 (95% confidence level), p = 0.5 (assumed proportion), and d = 0.05 (margin of error), resulting in an initial sample size of approximately 384. After adding a 10% non-response rate, the final sample size was set at 424. Sampling was conducted through simple random sampling, with samples allocated proportionally to each health facility. Participants meeting the inclusion criteria were consecutively recruited until the target sample size was achieved (Fig. ).
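The sample size calculation can be worked through numerically as follows. The rounding convention (ceiling at each step) is our assumption; it is one way the reported final figure of 424 can be reproduced from the stated inputs.

```python
import math

# Single population proportion formula: n = Z^2 * p * (1 - p) / d^2
z = 1.96   # 95% confidence level
p = 0.5    # assumed proportion (no prior Ethiopian estimate)
d = 0.05   # margin of error

n_raw = (z ** 2) * p * (1 - p) / d ** 2   # 384.16, "approximately 384"
n_initial = math.ceil(n_raw)              # round up to whole participants

# Add a 10% non-response allowance to get the final sample size
n_final = math.ceil(n_initial * 1.10)
```

With these inputs, `n_raw` is 384.16, `n_initial` is 385, and adding 10% gives a final sample size of 424.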
Data collection procedure Data were collected using structured and semi-structured questionnaires developed by the principal investigator, initially in English and subsequently translated into Amharic for simplicity. The translation was back-translated by two bilingual experts to ensure consistency. The questionnaire addressed socio-demographic variables, multidimensional social support, obstetric and clinical characteristics, postpartum depression, and HRQoL. Data were gathered through face-to-face interviews conducted in private settings. Additionally, information regarding pregnancy complications, postoperative hematocrit levels, types of postoperative analgesia, and delivery complications was extracted from medical records the day after the interview. The MOS SF-36 was utilized to assess HRQoL across eight domains: physical functioning, role physical, bodily pain, general health, vitality, social functioning, role emotional, and mental health. Each domain was scored on a scale from 0 (worst) to 100 (best), with a Cronbach's alpha above 0.70 for all domains except social functioning, which was 0.68. The EPDS, consisting of 10 items, was used to evaluate postpartum depression, demonstrating sensitivity and specificity of 78.9% and 75.3%, respectively, in a validation study in Ethiopia. Additionally, perceived family support was measured using a 7-option Likert scale with a Cronbach's alpha of at least 0.7 (Annex ). Data quality control To ensure data quality, a pre-test was conducted with 5% of the calculated sample size prior to the main study. This pre-test facilitated necessary revisions to the questionnaire, enhancing its clarity, logical flow, and skip patterns. Five data collectors participated in a comprehensive one-day training session, which covered research objectives, eligibility criteria, data collection tools, and procedures.
The training also included protocols for addressing any acute pain or postpartum depression that participants might experience during the study. Additionally, the importance of maintaining confidentiality and implementing effective data quality management practices was emphasized. Throughout the data collection period, the principal investigator and supervisors conducted daily reviews of completed questionnaires to ensure data completeness and consistency. Data processing and analysis procedures Upon completion of data collection, variables were entered, coded, and cleaned for errors using EpiData software, version 4.6. The cleaned data were then transferred to SPSS, version 25, for analysis. Descriptive statistics were computed in accordance with the MOS SF-36 tool developer's guidelines. Pre-coded numeric values were recoded, with 10 negatively worded items reverse-coded to ensure that higher scores reflected more favorable health states. Each item was scored on a scale of 0 to 100, using a transformation formula where 0 represented the worst possible health state and 100 the best. Items on the same scale were averaged together to create the eight scale scores. Next, the PCS mean was computed from four scale scores, namely the physical functioning, role physical, bodily pain, and general health domain transformed scores, whereas the MCS mean was computed from the remaining four, namely the social functioning, mental health, vitality, and role emotional domain transformed scores. The overall HRQoL mean was determined from the transformed scores of all eight domains, and participants were categorized into higher and lower HRQoL based on a standardized mean score of 50. Socio-demographic characteristics, clinical factors, and obstetric-related variables were analyzed and presented in text and tables. Chi-square tests were employed to assess associations between independent variables and the outcome variable.
Crude odds ratios (COR) and adjusted odds ratios (AOR) with 95% confidence intervals were calculated to evaluate the strength of associations with postpartum HRQoL. Bivariate and multivariate logistic regression analyses were conducted to explore the relationships between dependent and independent variables. Variables with a p-value < 0.25 in bivariate analyses were included in multivariate logistic regression, with a significance level set at p < 0.05 for identifying statistically significant factors associated with HRQoL. Model fitness was assessed using the Hosmer and Lemeshow goodness-of-fit test (p = 0.207), and multicollinearity was examined using the variance inflation factor, which revealed no significant issues. Normality was checked using the Kolmogorov–Smirnov test with scatter plots. Normally distributed data were expressed as mean and standard deviation, while median and interquartile range were employed for non-normally distributed data.
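The crude odds ratio step of the bivariate analysis can be illustrated with a small sketch. The 2×2 counts below are hypothetical, not study data, and the confidence interval uses the standard Woolf (log) method rather than the exact SPSS procedure.

```python
import math

def crude_odds_ratio(a, b, c, d, z=1.96):
    """COR with 95% CI from a 2x2 table.

    a/b = exposed with/without the outcome (lower HRQoL),
    c/d = unexposed with/without the outcome.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR), Woolf method
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 40/60 exposed with/without lower HRQoL, 20/80 unexposed
cor, lower, upper = crude_odds_ratio(40, 60, 20, 80)
# OR = (40*80)/(60*20) ≈ 2.67; a CI excluding 1 (and p < 0.25) would qualify
# the variable for the multivariable logistic regression model.
```

This mirrors the screening rule described above: candidate variables from the bivariate step feed into the multivariable model, where AORs are estimated.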
Exclusion criteria encompassed women with physical disabilities (such as spinal cord injuries, amputations, paralysis, or limb deformities), those with preexisting chronic illnesses (including chronic hypertension, cardiovascular and pulmonary disorders, and neuropsychiatric conditions), and individuals who were unable to communicate or comprehend the study requirements. Independent variables were categorized into socio- demographic factors (age, BMI, marital status, education levels of both the mother and partner, occupation of both, family support, and number of living children) and obstetric-related and clinical factors (parity, antenatal care visits, birth order history, preterm labor, pregnancy desirability, postnatal care visits, pregnancy complications, postpartum anemia, HIV status, urinary incontinence, and postpartum depression). Additionally, anesthesia and surgery related variables includes urgency of the surgery, number of cesarean deliveries, type of anesthesia, type of postoperative analgesia, delivery complications, perceived pain after discharge, postoperative nausea and vomiting, and shivering. The MOS-SF-36 contains eight domains comprising two main categories namely, physical and mental HRQoL. Each raw scale score on each domain was transformed from 0 to 100 (0–100 scale) by using the formula of transformed. 11 [12pt]{minimal} $$\,Scale = }{{possible\,raw\,score\,range}} 100$$ Physical component Summary (PCS) mean score of HRQoL is the arithmetic average of the transformed scores of physical functioning, role physical, bodily pain, and general health domains . Mental component Summary (MCS) mean score of HRQoL is the arithmetic average of the transformed scores of social functioning, mental health; role emotional, and vitality domains . Overall HRQoL mean score is the arithmetic average of the transformed score of the eight domains . Higher HRQoL is when participants scored greater than or equal to the standardized mean value of 50 . 
Lower HRQoL is when participants scored less than the standardized mean value of 50 . Postpartum depression: was assessed by using an Edinburgh Postnatal Depression Scale (EPDS). According to the EDPS, study participants who scored ≥ 13 are considered as having postpartum depression and it is validated in Addis Ababa, Ethiopia . Multidimensional scale of perceived social support of family support domain was utilized to assess the extent of perceived social support from their families. According to the scale, mean scale score ranging from 1 to 2.9 could be considered poor support, 3.0 to 5.0 could be considered moderate support and 5.1 to 7 could be considered strong support . Postpartum anemia: is defined as hemoglobin < 10 g/dl or hematocrit < 30% during postpartum period . The sample size for this study was calculated using the single population proportion formula, assuming a prevalence of 50% due to the absence of prior studies in Ethiopia assessing HRQoL among postpartum mothers following CD. The formula used was [12pt]{minimal} $$\:n=\!/\:\!.)}^{2}\:(1-\:)}{{}^{2}}$$ , where n = initial estimated sample size, Z = Confidence level (α); α = 95%; [12pt]{minimal} $$\:Z\!/\:\!.$$ =1.96, [12pt]{minimal} $$\:\:\:$$ = proportion; [12pt]{minimal} $$\:\:\:=0.5$$ , [12pt]{minimal} $$\:\:$$ = marginal error; [12pt]{minimal} $$\:\:$$ =5%, resulting in an initial sample size of approximately 384. After adding a 10% non-response rate, the final sample size was set at 424. Sampling was conducted through simple random sampling, with samples allocated proportionally to each health facility. Participants meeting the inclusion criteria were consecutively recruited until the target sample size was achieved (Fig. ). Data were collected using structured and semi-structured questionnaires developed by the principal investigator, initially in English and subsequently translated into Amharic for simplicity. 
This translation was back-translated by two bilingual experts to ensure consistency. The questionnaire addressed socio-demographic variables, multidimensional social support, obstetric and clinical characteristics, postpartum depression, and HRQoL. Data were gathered through face-to-face interviews conducted in private settings. Additionally, information regarding pregnancy complications, postoperative hematocrit levels, types of postoperative analgesia, and delivery complications was extracted from medical records the day after the interview. The MOS SF-36 was utilized to assess HRQoL across eight domains: physical functioning, role physical, bodily pain, general health, vitality, social functioning, role emotional, and mental health. Each domain was scored on a scale from 0 (worst) to 100 (best), with a Cronbach's alpha above 0.70 for all domains except social functioning, which was 0.68 [ , , ]. The EPDS, consisting of 10 items, was used to evaluate postpartum depression, demonstrating sensitivity and specificity of 78.9% and 75.3%, respectively, in a validation study in Ethiopia . Additionally, perceived family support was measured using a 7-option Likert scale with a Cronbach's alpha of at least 0.7 (Annex ). To ensure data quality, a pre-test was conducted with 5% of the calculated sample size prior to the main study. This pre-test facilitated necessary revisions to the questionnaire, enhancing its clarity, logical flow, and skip patterns. Five data collectors participated in a comprehensive one-day training session, which covered research objectives, eligibility criteria, data collection tools, and procedures. The training also included protocols for addressing any acute pain or postpartum depression that participants might experience during the study. Additionally, the importance of maintaining confidentiality and implementing effective data quality management practices was emphasized.
Throughout the data collection period, the principal investigator and supervisors conducted daily reviews of completed questionnaires to ensure data completeness and consistency. Upon completion of data collection, variables were entered, coded, and cleaned for errors using Epi-data software, version 4.6. The cleaned data were then transferred to SPSS, version 25 for analysis. Descriptive statistics were computed in accordance with the MOS SF-36 tool developer's guidelines . Pre-coded numeric values were recoded, with 10 negatively worded items reverse-coded so that higher scores reflected more favorable health states. Each item was scored on a scale of 0 to 100 using the transformation formula, where 0 represented the worst possible health state and 100 the best. Items on the same scale were averaged together to create the 8 scale scores. Next, the PCS mean was computed from 4 scale scores, namely the physical functioning, role physical, bodily pain, and general health domain transformed scores, whereas the MCS mean was computed from the remaining 4 scale scores, namely the social functioning, mental health, vitality, and role emotional domain transformed scores. The overall HRQoL mean was determined from the transformed scores of all eight domains, and participants were categorized into higher and lower HRQoL based on a standardized mean score of 50. Socio-demographic characteristics, clinical factors, and obstetric-related variables were analyzed and presented in text and tables. Chi-square tests were employed to assess associations between independent variables and the outcome variable. Crude odds ratios (COR) and adjusted odds ratios (AOR) with 95% confidence intervals were calculated to evaluate the strength of associations with postpartum HRQoL. Bivariate and multivariate logistic regression analyses were conducted to explore the relationships between dependent and independent variables.
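The scoring pipeline just described (domain averaging into PCS and MCS, the overall mean, and the 50-point cutoff) can be sketched as follows. The domain-to-component grouping follows the text; the eight per-domain scores in the example are invented for illustration only:

```python
# Component groupings as described in the text.
PHYSICAL = ["physical_functioning", "role_physical", "bodily_pain", "general_health"]
MENTAL = ["social_functioning", "mental_health", "role_emotional", "vitality"]

def mean(values):
    return sum(values) / len(values)

def summarize_hrqol(domain_scores, cutoff=50.0):
    """domain_scores: dict mapping the eight SF-36 domains to 0-100 transformed scores.
    Returns PCS and MCS means, the overall mean, and the higher/lower category."""
    pcs = mean([domain_scores[d] for d in PHYSICAL])
    mcs = mean([domain_scores[d] for d in MENTAL])
    overall = mean(list(domain_scores.values()))
    return {"PCS": pcs, "MCS": mcs, "overall": overall,
            "category": "higher" if overall >= cutoff else "lower"}

# Illustrative (hypothetical) transformed scores for one respondent
example = dict(zip(PHYSICAL + MENTAL, [55, 50, 60, 45, 40, 50, 45, 42]))
print(summarize_hrqol(example))
```

Because the overall mean here (48.375) falls below the standardized cutoff of 50, this hypothetical respondent would be classified as having lower HRQoL.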
Variables with a p-value < 0.25 in bivariate analyses were included in multivariate logistic regression, with a significance level set at p < 0.05 for identifying statistically significant factors associated with HRQoL. Model fitness was assessed using the Hosmer–Lemeshow goodness-of-fit test (p = 0.207), and multicollinearity was examined using the variance inflation factor, which revealed no significant issues. Normality of the data was checked using the Kolmogorov–Smirnov test together with scatter plots. Normally distributed data were expressed with mean and standard deviation, and median and interquartile range were employed for non-normally distributed data. Sociodemographic characteristics From 424 eligible participants, 418 responded to the questionnaires, a response rate of 98.58%. The mean age of the study participants was 30.28 years with a standard deviation of 6.43; 46.4% of respondents were aged between 25 and 34 years. The majority of participants (82.8%) were married, and more than half (56.2%) had secondary education or higher. Housewives comprised 58.6% of the sample, and the majority (78.2%) reported strong family support (Table ). Obstetrics and clinical-related variables The majority (71.1%) were multiparous, and about 56.5% had received at least 4 antenatal care (ANC) visits. Participants who had received less than 2 postnatal care (PNC) visits comprised 56.7%. Moreover, 24.2% of the sample experienced obstetric complications during pregnancy. About 96 (23%) of postpartum women had anemia and 86 (20.6%) had postpartum depression (Table ). Anesthesia and surgery related variables Two-thirds (66.5%) of study participants gave birth through emergency CD, and the majority (78.9%) of respondents had one or more previous CDs. Most (85.4%) of participants underwent CD under regional anesthesia.
Moreover, more than half (63.6%) of respondents reported that a feeling of pain continued even after discharge, of which the majority was headache (156; 58.6%) followed by surgical site pain (155; 58.3%) (Table ). HRQoL of the study participants The MOS SF-36 scale found that 66.5% (95% CI: 61.8, 71.0) of postpartum women following CD had a low HRQoL (Fig. ). The mean HRQoL of the sample was 47.92 ± 4.28 (95% CI: 47.51, 48.33), the mean PCS score was 48.21 ± 5.63, and the mean MCS score was 47.62 ± 6.02 (Table ). Of the eight domains of HRQoL, the lowest mean score was observed in the social functioning domain (mean ± SD of 45.84 ± 13.99), whereas the highest mean score was in the bodily pain domain (50.08 ± 12.78) (Fig. ). Factors associated with HRQoL In the binary logistic regression model, variables with a p-value less than 0.25 were selected as candidate variables for multivariate logistic regression analysis. Thus, age of the mother, partner's education, mother's education, receiving at least four ANC visits, receiving at least two PNC visits, perceived family support, postpartum depression, complications during pregnancy, urgency of surgery, type of anesthesia, and perceived pain after discharge were identified as candidate variables for multivariate logistic regression analysis. The multivariate logistic regression analysis showed that receiving at least two PNC visits, postpartum depression, type of anesthesia, complications during pregnancy, and perceived pain after discharge were significantly associated with a low HRQoL. Our study showed that postpartum women who had fewer than two PNC visits were 2.58 times more likely to have a low HRQoL than those who had at least two PNC visits [AOR = 2.58, 95% CI: (1.59–4.19)].
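The confidence interval around the 66.5% prevalence can be checked with a Wilson score interval, a common choice for proportions. This sketch assumes n = 418 respondents and p = 0.665; the published bounds (61.8–71.0) may differ slightly from the computed ones due to rounding or the exact case counts used:

```python
import math

def wilson_ci(p, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion p observed in n subjects."""
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(0.665, 418)
print(f"{lo:.1%} - {hi:.1%}")  # approximately 61.8% - 70.9%
```

The Wilson interval is preferred over the simple Wald interval for moderate sample sizes because it never extends outside [0, 1] and has better coverage near the boundaries.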
The odds of having a low HRQoL among postpartum women who had complications during pregnancy were 5.32 times [AOR = 5.32, 95% CI: (2.69–10.54)] higher than among those who had no complications during pregnancy. This study also identified that postpartum women who had perceived pain after discharge were 2.64 times more likely to have a low HRQoL than those who had no perceived pain after discharge [AOR = 2.64, 95% CI: (1.61–4.35)]. Postpartum women who underwent CD under general anesthesia were 2.36 times more likely to have a low HRQoL compared to those who underwent CD under regional anesthesia [AOR = 2.36, 95% CI: (1.08–5.14)]. Moreover, postpartum women who had postpartum depression were 2.41 times more likely to have a low HRQoL than those who had no postpartum depression [AOR = 2.41, 95% CI: (1.22–4.77)] (Table ).
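An odds ratio with its 95% confidence interval, as reported in this section, can be computed from a 2×2 table using the standard log-odds (Woolf) method. The counts below are invented purely for illustration and are not taken from the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude OR and 95% CI for a 2x2 table:
        exposed:   a with outcome, b without
        unexposed: c with outcome, d without
    CI on the log scale: ln(OR) +/- z * sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 120/80 low-HRQoL among exposed vs 90/128 among unexposed
print(odds_ratio_ci(120, 80, 90, 128))
```

Adjusted odds ratios (AOR) additionally require a multivariable logistic regression model to hold the other covariates constant; the sketch above corresponds to a crude (unadjusted) estimate.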
Assessment of the HRQoL of women following CD is essential for global healthcare, since the joint effect of pregnancy-related changes and surgery- and anesthesia-related side effects increases maternal morbidity and mortality during the postpartum period. This study aimed to assess the HRQoL of postpartum mothers following CD and identify the factors associated with it during the first six weeks postpartum using the validated MOS SF-36 tool. Our study revealed that almost two-thirds [66.5%, 95% CI: (61.8, 71.0)] of postpartum mothers who gave birth via CD had a low level of HRQoL, with a mean and standard deviation of 47.92 ± 4.28. Key factors significantly associated with lower HRQoL included fewer than two postnatal care visits, postpartum depression, type of anesthesia, complications during pregnancy, and perceived pain after discharge. This prevalence is higher than that of a study among postpartum women in Arba Minch, Ethiopia (n = 409), delivered via either mode of delivery, which found 62.3% with a low level of HRQoL and a mean score of 45.15 ± 8.13 . The reason for the variation might be the effect of the surgical procedure and anesthesia on postpartum physical recovery compared to vaginal birth .
However, in this study the prevalence of low HRQoL is lower than the result of a study among postpartum mothers (n = 429) who gave birth either vaginally or via CD in the Oromia region, Ethiopia, which was 73.7% (95% CI: 69.4–77.7) . The explanation for this difference might be that our study included postpartum mothers who gave a live birth through CD in an urban area, while the study in Oromia, Ethiopia included postpartum mothers who gave a live or still birth either vaginally or via CD in urban and rural areas. According to our results, the mean score of overall HRQoL [47.92 ± 4.28, 95% CI: 47.51, 48.33] is lower than a finding in Iran with a mean score of 76.56 ± 14.04 . The possible explanation for the variation might be differences in study design and tool: our study was conducted using a cross-sectional design and the validated MOS SF-36 tool, while the study in Iran was a longitudinal study at the 3rd trimester and at 8 weeks postpartum using the WHOQoL-Bref questionnaire. Moreover, our result is lower than a result in Spain with a mean score of 71.94 ± 17.48 . The possible explanation for this discrepancy may be differences in the study population: the current study was conducted on mothers at six weeks postpartum, while the Spanish study included women who gave birth a year prior to the survey, who may have recovered from the impact of pregnancy and delivery. Our result is also lower than findings in Iran with a mean score of 68.38 ± 13.6 , and in Brazil with a mean score of 86.86 ± 10.6 . The reason for this variation might be differences in the tool used to assess quality of life: in this study we used the SF-36 questionnaire, whereas the study conducted in Brazil used the WHOQoL-Bref tool. A common explanation for all these differences might also be that the surgical procedure and the effect of anesthesia further affect postpartum recovery compared to vaginal birth .
The results of our study reported mean and SD scores of the PCS and MCS of [48.22 ± 5.63, 95% CI: 47.68, 48.76] and [47.62 ± 4.28, 95% CI: 47.51, 48.33], respectively. These results are lower than those of a study conducted in Kuwait, with mean PCS and MCS scores of 54.5 and 52.9, respectively . The possible reason for the difference might be the study participants and assessment tool: in this study we assessed quality of life at six weeks using the SF-36 tool, while the study in Kuwait was done at six months and utilized the SF-12 tool. In addition, in our study the mean scores of the PCS and MCS are lower than the mean PCS (59.05 ± 19.48) and MCS (55.08 ± 25.17) scores of a study conducted in North-East Romania . The variation could be brought on by differences in the study population: in this study only postpartum mothers who underwent CD were included, whereas the study conducted in North-East Romania included either mode of delivery. Another reason might be the effect of anesthesia, the cesarean scar, and other CD-related factors that reduce components of physical health such as the physical functioning, role physical, bodily pain, and general health domains. The mean PCS score in our study [48.22 ± 5.63, 95% CI: 47.68, 48.76] is lower than the mean PCS score (49.5 ± 9.3) of a study conducted in Ethiopia on 409 randomly selected postpartum women following either cesarean or vaginal delivery , whereas the mean MCS score in our study [47.62 ± 4.28, 95% CI: 47.51, 48.33] is higher than that study's mean MCS score (40.79 ± 10.90). The possible justification for this difference is that the effect of anesthesia and surgery on physical recovery leads to a low PCS score, while special attention and support from family members may be good following CD, which might bring a better MCS score . In this study, postpartum women who had fewer than two postnatal care visits [AOR = 2.58, 95% CI: 1.59–4.19] were more likely to have a low HRQoL.
Likewise, a study in rural China showed that postnatal visits were associated with HRQoL . In addition, our finding is supported by a study in Ethiopia showing that PNC was positively associated with high HRQoL among postpartum women following either cesarean or vaginal delivery . The WHO recommends postnatal visits to improve the short- and long-term health and wellbeing of the mother and newborn, which might improve physical and psychological recovery . Thus, postnatal visits might improve the physical and mental components of HRQoL. The results of this study revealed that complications during pregnancy were negatively associated with HRQoL following CD [AOR = 5.32, 95% CI: 2.69–10.54]. This finding is congruent with studies done in the Netherlands and Austria , which found that hypertensive disorders, antepartum hemorrhage, and gestational diabetes were negatively associated with HRQoL. It is additionally supported by a study among postpartum mothers after either mode of delivery in southern Ethiopia, which showed that preeclampsia was negatively associated with HRQoL . This is due to the fact that gestational complications can have long-term physical, mental, and social consequences that negatively affect women's perception of health and overall well-being after childbirth . Therefore, early screening and management of pregnancy-related complications are essential to mitigate their impact on postpartum recovery. Our study found that pain during the postpartum period was negatively associated with HRQoL [AOR = 2.64, 95% CI: 1.61–4.35]. One of the reasons for this might be that pain is among the most common symptoms interfering with both quality of life and general functioning, as a result of its potential detrimental impacts on maternal movement, mobility, sleep, and mental health, which may influence a woman's transition back to her pre-pregnancy state . Therefore, management of pain is particularly important for early recovery, and satisfactory pain relief improves mobility and enhances breastfeeding and infant care in the postpartum period.
Implementing standardized pain management protocols could significantly enhance maternal health outcomes. Postpartum women who underwent CD under general anesthesia were more likely to have a low HRQoL [AOR = 2.36, 95% CI: 1.08–5.14]. This finding is supported by a study done in India . This might be because regional anesthesia enables patients to return to normal daily activities earlier than general anesthesia and provides more effective pain control, less bleeding, early ambulation, and better satisfaction for mothers, thereby increasing their quality of life . Therefore, encouraging the use of regional anesthesia whenever clinically appropriate could enhance maternal outcomes following CD. Finally, in this study postpartum women who had postpartum depression were more likely to have a low HRQoL than those who had no postpartum depression [AOR = 2.41, 95% CI: 1.22–4.77]. This finding is supported by studies done in Nigeria, Kuwait, and the Netherlands [ , , ]. The reason might be that depression is negatively associated with mental and physical health, and depression itself affects a woman's ability to function, her relationship with her child, interpersonal relationships, sleeping patterns, and social engagement, thus lowering HRQoL. Therefore, addressing postpartum depression through early screening and intervention programs is essential to improve HRQoL and overall maternal well-being. This study has several limitations. First, recall bias may affect the accuracy of self-reported data. Second, the findings should be generalized with caution, as the study focused on women from a specific geographic area. Additionally, as a cross-sectional study assessing HRQoL only within the first six weeks postpartum, it does not capture changes in HRQoL throughout the early and late postpartum periods.
Despite these limitations, the study provides valuable insights into the factors influencing postpartum HRQoL and serves as a foundation for future interventions aimed at improving maternal well-being after cesarean delivery. This study found that two-thirds of postpartum women who underwent CD had a low level of HRQoL. Factors significantly associated with lower HRQoL included fewer than two postnatal care visits, postpartum depression, type of anesthesia, pregnancy complications, and postpartum pain. These findings highlight the need for increased attention to the HRQoL of postpartum women following CD, especially in terms of minimizing morbidity related to surgery and anesthesia. To improve maternal outcomes, it is essential to enhance postnatal care by ensuring mothers receive the recommended number of visits and monitoring recovery closely. Addressing pregnancy complications, such as hypertensive disorders, gestational diabetes, and antepartum hemorrhage, is crucial in promoting better postpartum health. Additionally, encouraging the use of regional anesthesia and integrating effective postoperative pain management can improve recovery. Incorporating routine screening and support for postpartum depression is also vital for overall well-being. Finally, future research should focus on longitudinal studies to track changes in HRQoL across various stages of pregnancy and postpartum to develop more accurate interventions. Supplementary Material 1
From Hybrids to New Scaffolds: The Latest Medicinal Chemistry Goals in Multi-target Directed Ligands for Alzheimer’s Disease
INTRODUCTION According to the World Alzheimer Report (2019), around 50-60% of all dementias correspond to Alzheimer's disease (AD). Although it has for years been a major health concern in developed economies, it is increasing in developing countries as life expectancy increases. Even though the WHO estimates that around 47 million people currently suffer from AD, this number is expected to double every 20 years; thus, the AD population could reach 75 million by 2030 . Despite multiple efforts carried out within the last decades by universities, foundations, research centers, and pharmaceutical companies, the detailed pathogenesis of AD is still unclear, and the underlying mechanism leading to this disease is not yet fully understood. Unfortunately, given the continuous failures in clinical trials, pharmaceutical companies are pulling out of AD research, and it has been 17 years since the last drug, memantine, reached the market in 2003 . AD is a progressive neurodegenerative disease resulting in the irreversible loss of neurons, particularly in the cortex and hippocampus . Symptoms may include progressive loss of memory, cognition, and motor and functional capacity, often accompanied by behavioral disturbances such as aggression, depression, and wandering , AD being the most common cause of dementia among elderly people . Many authors define AD as a heterogeneous disease caused by a combination of environmental and genetic factors, with age being one of the most important risk factors for the development of the disease . Some of the predisposing factors of this pathology include vascular disease , diabetes , depression , and hypertension . On the other hand, many lifestyle factors such as physical activity, sleep, feeding, smoking, alcohol, and intellectual stimulation are thought to have an impact on cognitive impairment, even though more evidence is still needed.
So far, AD has been related to several altered brain functions, including extracellular plaques containing abnormal deposits of beta-amyloid peptides , twisted fibers of the hyperphosphorylated form of the microtubular Tau protein , inflammation , oxidative stress , cholinergic neuron damage (cholinergic hypothesis) , serotonin misregulation (serotonergic hypothesis) , and many others . Despite many years of evidence suggesting a connection between amyloid plaques or neurofibrillary tangles as the earliest lesions in AD, the role of such processes remains controversial, even though there is no doubt that those aggregates promote inflammatory responses and activate neurotoxic pathways, leading to dysfunction and death of brain cells. In this line, the inflammatory process significantly contributes to AD pathogenesis . A recent review explained the importance of understanding the inflammation process, suggesting that control of the interactions between the immune and nervous systems could be key to preventing or delaying most late-onset central nervous system (CNS) diseases, including AD. The authors concluded that the brain can no longer be viewed as an immune-privileged organ. At present, only 4 drugs have been approved for AD treatment, the acetylcholinesterase inhibitors donepezil, rivastigmine, and galantamine, and the N-methyl-d-aspartate (NMDA) receptor antagonist memantine (Fig. ), and they only address associated symptomatology without halting or reversing disease progression . To this day, AD has also been related to additional targets/functions whose misregulation has been proposed to lead to AD onset. These include ApoE , the dopamine D2 receptor , the γ-aminobutyric acid receptor , acetylcholinesterase and butyrylcholinesterase , β- and γ-secretases , serotonin 5-HT6 and 5-HT4 receptors , the serotonin transporter , and SFRP1 .
After providing this big picture, the only clear issue is that we are facing a multifactorial disorder, which cannot be managed by drugs acting at just a single level. The “one drug-one target” paradigm has not succeeded and does not provide a solution for the treatment of complex and multifactorial diseases like AD . In this context, the multitarget approach has recently emerged as a potential solution through the use of multi-target directed ligands (MTDLs) . Thus, the aims of this proposal are based on the design of new drug candidates simultaneously modulating different biological targets involved in the neurodegenerative AD cascade. Due to the complex etiology and multifactorial nature of this disease, various hypotheses have been proposed in an attempt to address it, although none of them is able to explain the onset and progression of the disease on its own . 1.1 Cholinergic Hypothesis The cholinergic hypothesis is based on the association between low levels of acetylcholine (ACh) and a decline in learning, cognitive function, and memory in AD patients . It has been demonstrated that the dysfunction and neuronal loss in basal forebrain regions are directly related to the expression and activity of choline acetyltransferase (ChAT) and acetylcholinesterase (AChE), specific enzymes related to CNS functions. Their activities play an essential role in cholinergic transmission, showing variations in the cerebral cortex and the hippocampus of AD-suffering subjects . The presynaptic cholinergic deficit is associated with a marked loss of cholinergic cells from the nucleus basalis of Meynert and decreased ACh release and reuptake . The cholinergic hypothesis has not had widespread support because AChE inhibitor-based AD treatment brings only a slight symptomatic improvement, failing to cure or prevent disease progression . 1.2 Amyloid Hypothesis Another plausible and widely studied cause of AD is based on the amyloid cascade hypothesis.
The accumulation of the hydrophobic amyloid-beta (Aβ, Aβ40 and Aβ42) peptides, resulting in self-aggregation and insoluble plaque formation, is still considered to be the main feature of AD etiology . It originates from the proteolytic cleavage of the transmembrane amyloid precursor protein (APP) by specific secretases (β- and γ-secretase) . Aβ fibril accumulation is thus considered an early toxic event that activates neurotoxic pathways. Some studies suggest that these oligomers can destroy the integrity of the cell membrane and disrupt the steady state of brain cells , leading to brain cell dysfunction and death . Some authors indicate that Aβ aggregates can also induce oxidative stress , initiate an inflammatory response , and alter calcium homeostasis . Furthermore, Selkoe emphasizes that the word “cause” of AD pathology cannot necessarily be directly applied to Aβ accumulation, due to the existence of some genetic mutations or polymorphisms that can produce an increase in the accumulation of other peptides (presenilin or apolipoprotein E) . Despite what was previously indicated, the self-aggregation of Aβ itself is insufficient to explain the accumulation of the peptide in specific brain regions of AD patients. The “metal hypothesis of AD” is based on the effects of Aβ accumulations (as senile plaques) promoted by Aβ-metal interactions. The metal ions of the brain are essential trace elements that are stringently regulated, with virtually no passive flux of metals from the circulation to the brain; interestingly, elevated concentrations of copper, zinc, and iron have been detected in amyloid plaques, which induce the protein to precipitate into metal-enriched masses . However, the mechanism of how these metals bind to Aβ and promote its aggregation is still unknown . A plausible approach may be modulating these interactions with metal chelators, and indeed, this is considered another promising strategy for AD treatment.
1.3 Tau Hypothesis The Tau protein is an important component of the neuronal cytoskeleton, its principal activities being microtubule stabilization , cell shape maintenance, and axonal transport . In the normal brain, the balance between Tau phosphorylation and dephosphorylation is a dynamic process that causes conformational and structural changes, regulating the stability of the cytoskeleton and axonal morphology . An imbalance in the action of different kinases and phosphatases is one of the proposed causes of Tau hyperphosphorylation . During the development of AD, Tau becomes massively phosphorylated, which triggers its collapse and intracellular aggregation to form neurofibrillary tangles (NFTs) . Progressive neuronal degeneration begins with alterations leading to degradation of the cytoskeleton. In other words, these fibrils create a physical barrier within the neuron that generates a toxic medium with a high concentration of NFTs. Some authors have proposed that NFTs are inert and do not influence microtubule assembly, but that they choke the affected neurons and facilitate cell death by acting as a space-occupying lesion. In a review , the authors summarize the evidence and therapeutic approaches linking Tau misregulation to AD pathogenesis. One approach is the use of kinase inhibitors and phosphatase activators ; however, these enzymes are present in nearly every cell in the body, and the problem would be finding molecules that specifically alter the activity of the target enzyme without affecting the others. Identifying key sites of Tau in order to develop small-molecule anti-aggregators is still a hopeful field of research . Certainly, another proposed approach involving the Tau and amyloid hypotheses is immunotherapy, the use of immunity-enhancing techniques as a medical treatment.
Huge advances in immunotherapy AD research have been achieved within the last decade , supported by several Clinical Trials and the recent FDA approval of Aducanumab. However, in order to stick to the script, such an interesting approach will not be considered here as it falls far beyond the scope of this review. Indeed, it is worth mentioning that the Tau hypothesis alone is inadequate to explain all the symptomatic conditions observed in AD, so it is not surprising that drugs targeting Tau protein itself have not achieved any relevant progress. 1.4 Serotonergic Hypothesis Depression may be one of the initial symptoms of neurodegenerative disorders, and it is regarded as a risk factor for later development of dementias, being depressed mood in elders associated with an increased risk of AD . Nowadays, our concept of the nature of the relationship between cognitive impairment and the serotonin system is evolving, thus the serotonergic hypothesis of AD is slowly emerging , as long as more and more researchers worldwide are suggesting AD modulators based on monoamine oxidase (MAO) inhibitors, serotonin reuptake inhibitors (SSRIs) , and 5-HT 4 and 5-HT 6 modulators . According to many authors, the accumulation of A β -amyloid could be a secondary effect of reduced monoamine neurotransmitters . The MAO enzyme exists as two isoforms, MAO-A and MAO-B, and their principal activities are related to catalyzing the oxidation of monoamines and are thus responsible for the metabolism of neurotransmitters such as serotonin, noradrenaline, and dopamine . They are located in the CNS and in peripheral tissues. Some studies revealed that MAOs are associated with psychiatric and neurological disorders, including AD . MAO-A inhibitors are used as antidepressants and anti-anxiety agents, while MAO-B inhibitors have been revealed to be useful in neurodegenerative disorders such as Parkinson´s disease and AD, also inhibiting their associate oxidative damage . 
In summary, simultaneous inhibition of both MAOs, have been suggested to provide additional benefits in AD therapy. Along with MAO, 5-HT 4 receptor (5-HT 4 R) ligands have also been proposed in AD research since many studies have shown the involvement of 5-HT 4 R in cognitive processes. Moreover, many authors provided important findings suggesting that 5HT 4 R agonists may also affect the amyloid β -peptide pathway, supporting the serotonergic approach in AD . The scope of this review is to describe some widely studied bioactive structures: curcumin -, resveratrol -, chromone- , and indole -derivatives as MTDLs for Alzheimer’s Disease, mainly oriented to interact with the aforementioned targets, included or not in the previously described hypotheses. This literature review is focused in identifying small molecular fragments as promising starting points for biological target modulation with the aim of shifting the current paradigm towards a disease-modifying strategy. In this review, we summarize the latest medicinal chemistry goals in AD-related MTDLs development: small molecule fragment with known biological activities combined through hybridization or fine chemical tuning, in order to develop true MTDLs to face such devastating disease from a multifactorial point of view. Cholinergic Hypothesis The cholinergic hypothesis is based on the association between low levels of acetylcholine (ACh) and a decline in learning, cognitive function and memory in AD patients . It has been demonstrated that the dysfunction and neuronal loss in basal forebrain regions are directly related to the expression and activity of choline acetyltransferase (ChAT) and acetylcholinesterase (AChE), specific enzymes related to CNS functions. Their activities play an essential role in cholinergic transmission, showing variations in the cerebral cortex and the hippocampus in AD-suffering subjects . 
The presynaptic cholinergic deficit is associated with a marked loss of cholinergic cells from the nucleus basalis of Meynert and decreased ACh release and reuptake. The cholinergic hypothesis has not had widespread support because AChE inhibitor-based AD treatment brings only a slight symptomatic improvement, failing to cure the disease or prevent its progression.

Amyloid Hypothesis

Another plausible and widely studied cause of AD is the amyloid cascade hypothesis. The accumulation of the hydrophobic amyloid-beta (Aβ, Aβ40 and Aβ42) peptides, resulting in self-aggregation and the formation of insoluble plaques, is still considered the main feature of AD etiology. Aβ originates from proteolytic cleavage of the transmembrane amyloid precursor protein (APP) by specific secretases (β- and γ-secretase). Aβ fibril accumulation is thus considered an early toxic event that activates neurotoxic pathways. Some studies suggest that these oligomers can destroy the integrity of the cell membrane and disrupt the steady state of brain cells, leading to brain cell dysfunction and death. Other authors indicate that Aβ aggregates can also induce oxidative stress, initiate an inflammatory response, and alter calcium homeostasis. Furthermore, Selkoe emphasizes that the word “cause” of AD pathology cannot necessarily be applied directly to Aβ accumulation, since certain genetic mutations or polymorphisms (in presenilin or apolipoprotein E) can increase the accumulation of other peptides. Moreover, the self-aggregation of Aβ by itself is insufficient to explain the accumulation of the peptide in specific brain regions of AD patients. The “metal hypothesis of AD” is based on the effects of Aβ accumulations (as senile plaques) promoted by Aβ-metal interactions.
The brain's metal ions are essential trace elements that are stringently regulated, with virtually no passive flux of metals from the circulation to the brain; interestingly, however, elevated concentrations of copper, zinc, and iron have been detected in amyloid plaques, where they induce the protein to precipitate into metal-enriched masses. The mechanism by which these metals bind Aβ and promote its aggregation is still unknown. A plausible approach may be to modulate these interactions with metal chelators, and indeed this is considered another promising strategy for AD treatment.

Tau Hypothesis

The Tau protein is an important component of the neuronal cytoskeleton, its principal activities being microtubule stabilization, maintenance of cell shape, and axonal transport. In the normal brain, the balance between Tau phosphorylation and dephosphorylation is a dynamic process that causes conformational and structural changes, regulating the stability of the cytoskeleton and axonal morphology. An imbalance in the action of different kinases and phosphatases is one proposed cause of Tau hyperphosphorylation. During the development of AD, Tau becomes massively phosphorylated, which triggers its collapse and intracellular aggregation into neurofibrillary tangles (NFTs). Progressive neuronal degeneration then begins, leading to degradation of the cytoskeleton; in other words, these fibrils create a physical barrier within the neuron that generates a toxic medium with a high concentration of NFTs. Some authors have argued that NFTs are inert and do not influence microtubule assembly, but that they choke the affected neurons and facilitate cell death by acting as space-occupying lesions. In a review, the authors summarize the evidence and therapeutic approaches linking Tau misregulation to AD pathogenesis.
One approach is the use of kinase inhibitors and phosphatase activators; however, these enzymes are present in nearly every cell in the body, and the problem would be finding molecules that specifically alter the activity of the target enzyme without affecting the others. Identifying key sites of Tau in order to develop small-molecule anti-aggregators remains a promising field of research. Another proposed approach, involving both the Tau and amyloid hypotheses, is immunotherapy, the use of immunity-enhancing techniques as a medical treatment. Huge advances in AD immunotherapy research have been achieved within the last decade, supported by several clinical trials and the recent FDA approval of Aducanumab; however, this approach falls beyond the scope of this review and will not be considered here. It is worth mentioning that the Tau hypothesis alone is inadequate to explain all the symptomatic conditions observed in AD, so it is not surprising that drugs targeting the Tau protein itself have not achieved any relevant progress.

Serotonergic Hypothesis

Depression may be one of the initial symptoms of neurodegenerative disorders and is regarded as a risk factor for the later development of dementias, with depressed mood in the elderly associated with an increased risk of AD. Our concept of the relationship between cognitive impairment and the serotonin system is evolving, and the serotonergic hypothesis of AD is slowly emerging as more and more researchers worldwide propose AD modulators based on monoamine oxidase (MAO) inhibitors, serotonin reuptake inhibitors (SSRIs), and 5-HT4 and 5-HT6 modulators. According to many authors, the accumulation of Aβ-amyloid could be a secondary effect of reduced monoamine neurotransmitters.
The MAO enzyme exists as two isoforms, MAO-A and MAO-B, whose principal activity is catalyzing the oxidation of monoamines; they are thus responsible for the metabolism of neurotransmitters such as serotonin, noradrenaline, and dopamine, and are located in the CNS and in peripheral tissues. Some studies have revealed that MAOs are associated with psychiatric and neurological disorders, including AD. MAO-A inhibitors are used as antidepressants and anti-anxiety agents, while MAO-B inhibitors have proved useful in neurodegenerative disorders such as Parkinson's disease and AD, also limiting the associated oxidative damage. In summary, simultaneous inhibition of both MAOs has been suggested to provide additional benefits in AD therapy. Along with MAO, 5-HT4 receptor (5-HT4R) ligands have also been proposed in AD research, since many studies have shown the involvement of 5-HT4R in cognitive processes. Moreover, many authors have provided important findings suggesting that 5-HT4R agonists may also affect the amyloid β-peptide pathway, supporting the serotonergic approach in AD. The scope of this review is to describe some widely studied bioactive structures (curcumin, resveratrol, chromone, and indole derivatives) as MTDLs for Alzheimer's disease, mainly oriented to interact with the aforementioned targets, whether or not they are included in the previously described hypotheses. This literature review focuses on identifying small molecular fragments as promising starting points for biological target modulation, with the aim of shifting the current paradigm towards a disease-modifying strategy. We summarize the latest medicinal chemistry goals in AD-related MTDL development: small-molecule fragments with known biological activities, combined through hybridization or fine chemical tuning, in order to develop true MTDLs that confront this devastating disease from a multifactorial point of view.
CURCUMIN AND CURCUMINOID HYBRIDS

The major curcuminoids present in turmeric (Curcuma longa) are curcumin, demethoxycurcumin, and bisdemethoxycurcumin (Fig. ), curcumin being the most bioactive component. Curcuminoids from turmeric have shown anti-inflammatory, antioxidant, anticancer, antimicrobial, and neuroprotective effects, among others. These compounds display several distinct chemical features: a methoxylated phenolic group; an α,β-unsaturated β-diketo linker; and keto-enol tautomerism, with the keto form predominating in acidic and neutral solutions and the stable enol form in alkaline medium. The aromatic groups confer hydrophobicity, the linker brings flexibility, and the tautomeric structures also influence hydrophobicity and polarity. However, curcumin exhibits several limitations, such as chemical instability, poor solubility in water, low bioavailability, and fast metabolism under physiological conditions, resulting in rapid systemic elimination and limiting its application as a drug candidate. In this sense, it is reasonable to design curcumin analogs that overcome these drawbacks. Several groups have tested curcumin derivatives in cell and mouse models of AD and reported that these derivatives have strong anti-Aβ-aggregation properties, are able to cross the blood-brain barrier (BBB), ameliorate cognitive decline, and improve synaptic function in a mouse model. Besides, curcumin itself also exerts MAO-B inhibitory activity, in addition to the ability to degrade Tau neurofibrillary tangles, although the mechanisms of these processes are not fully understood. Yan et al. reported the synthesis and biological activities of MTDLs based on chimeric structures consisting of donepezil-curcumin fused scaffolds. The most active compounds 1, 2, and 3 were evaluated for AChE and butyrylcholinesterase (BuChE) inhibition, BuChE/AChE selectivity, Aβ1-42 self-aggregation inhibition, and antioxidant effects (Fig.
). Compound 1 proved a potent AChE inhibitor (IC50 = 187 nM) compared to the rest of the series (donepezil (DPZ), AChE IC50 = 37 nM, as reference) and showed the highest selectivity ratio (BuChE/AChE: 66.3), significantly better than tacrine and galantamine (selectivity: 0.15 and 25.3, respectively), although still far from DPZ (selectivity: 85.4). Inhibitory activity against Aβ1-42 self-aggregation was evaluated with curcumin as reference (54.9% at 20 µM). Compounds 1, 2, and 3 displayed 45.3%, 30.4%, and 22.0% inhibition, respectively, evidencing the importance of the hydroxyl group for Aβ1-42 self-aggregation inhibition. The authors also conducted an oxygen radical absorbance capacity (ORAC) assay, evaluating antioxidant activity in vitro with Trolox as standard. All compounds exhibited good ORAC values of 1.01-3.07 Trolox equivalents (expressed as µM Trolox eq. per µM of tested compound). Curcumin (2.52 Trolox eq.) is known to display potent antioxidant activity, but compound 1, featuring a hydroxyl group at the meta position, displayed an even stronger one. Because of the poor solubility and oral bioavailability of curcumin, its structure has been modified to address these deficiencies. Along these lines, Wang et al. designed and synthesized L-lysine-functionalized curcumin derivatives to improve water solubility and inhibition of amyloid fibrillation in vitro, using hen egg-white lysozyme (HEWL) as a model protein (Fig. ). They used Nα-Fmoc-Nε-Boc-L-lysine as a novel water-soluble amino acid derivative. Compounds 4 and 5 exhibited enhanced solubility in water (3.32 × 10⁻² g/mL and 4.66 × 10⁻² g/mL, respectively) compared to curcumin (1-10 µg/mL). Moreover, these compounds inhibited HEWL amyloid fibrillation when the concentrations of 4 and 5 reached 20.14 mM and 49.62 mM, respectively. In addition, the lag-phase duration with 4 (e.g., 7.3 days) was longer than with 5 (e.g.
, 6.2 days); the authors attributed this to the phenolic hydroxyl group and the charged amino acid, concluding that this is an effective way to improve curcumin's solubility. In a recent work, Cui et al. studied and synthesized water-soluble curcumin derivatives based on Boc-L-isoleucine (Fig. ). The inhibitory potency of the monosubstituted derivative 6 on the formation of HEWL amyloid fibrils was superior to that of the disubstituted counterpart 7 at low concentration (20% vs. 3.5% at 0.1 mM; both reached 70-80% at 0.5 mM), suggesting the importance of the free hydroxyl group on the aromatic ring. Regarding the solubility profile, both derivatives exhibited enhanced solubility in water (3.05 mg/mL and 2.12 mg/mL) with respect to curcumin. It is worth mentioning that both derivatives displayed low cytotoxicity in the HeLa cell line, with above 70% viability at 10-50 µM. Many authors have proposed that the intractable nature of the Aβ plaques and tangles stimulates a chronic inflammatory reaction to clear this debris. These plaque depositions in the brain stimulate an inflammatory response involving the overexpression of proinflammatory mediators, such as the neuroinflammatory interleukins, which play a key role in inflammatory and anti-inflammatory processes in AD. Interleukin-6 (IL-6) is a soluble mediator with pleiotropic effects on inflammation, immune response, and hematopoiesis, and inhibition of IL-6 secretion is frequently used as a readout of anti-inflammatory activity. Along these lines, Lakey-Beitia et al. reported new curcumin derivatives, synthesized by etherification and esterification of curcumin with benzyl bromide, acetyl chloride, 4-(benzyloxy)-4-oxobutanoic acid, and 4-(cyclopentyloxy)-4-oxobutanoic acid, displaying anti-aggregation and anti-inflammatory activity (Fig. ). To evaluate IL-6 production, lipopolysaccharide-stimulated macrophages were used.
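The aqueous-solubility gains reported above for the amino-acid-functionalized derivatives can be put in fold terms. A minimal arithmetic sketch in Python, taking the upper literature bound of 10 µg/mL quoted for curcumin (the exact fold depends on which bound of the 1-10 µg/mL range is used):

```python
def fold_enhancement(derivative_mg_per_ml: float, parent_mg_per_ml: float) -> float:
    """Ratio of derivative to parent aqueous solubility."""
    return derivative_mg_per_ml / parent_mg_per_ml

# Upper literature bound for curcumin solubility quoted in the text: 10 ug/mL
CURCUMIN_MG_PER_ML = 0.010

# Compound 4 (Wang et al.): 3.32e-2 g/mL = 33.2 mg/mL; compound 6 (Cui et al.): 3.05 mg/mL
for name, solubility in [("4", 33.2), ("6", 3.05)]:
    fold = fold_enhancement(solubility, CURCUMIN_MG_PER_ML)
    print(f"compound {name}: at least {fold:.0f}-fold more soluble than curcumin")
```

Even against the most generous curcumin value, both derivatives come out hundreds to thousands of fold more soluble.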
Compounds 8, 9, and 11 exhibited more potent anti-inflammatory activity than curcumin (IC50 = 8.25 µM), while compound 10 displayed a similar effect. The introduction of a benzyl moiety linked through an ether bond to one of the curcumin rings (8) led to the most potent anti-inflammatory derivative, whereas the presence of a bulky diester group led to the less active derivatives 10 and 11. The authors concluded that the hydroxyl groups on the aromatic rings of curcumin were the pharmacophore required to diminish IL-6 production. Regarding the anti-aggregation profile in vitro, compounds 8, 9, and 11 inhibited Aβ aggregation in a concentration-dependent manner, with IC50 values ranging from 1.32 to 2.05 µM, showing an amyloid anti-aggregation effect of the same magnitude as the curcumin standard (IC50 = 1.4 µM); surprisingly, compound 10 did not present anti-aggregation activity. To delve into this concept, Okuda et al. designed a series of asymmetric curcumin derivatives by different strategies; the most active compounds are summarized in (Fig. ). First, a compound series was synthesized by changing the hydroxyl and methoxyl substitution pattern on one of the aromatic moieties (12), showing that the inhibitory activity on Aβ aggregation increased when these substituents were located meta to each other, giving higher inhibitory activity than curcumin (IC50 = 5.4 µM). Next, by exchanging only one aromatic ring for other cyclic structures, curcumin derivative 13 was obtained, with interesting results suggesting that a bicyclic structure may increase the inhibitory activity, especially against Tau aggregation. Combining these results, compound 14 was designed and synthesized.
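The percent-inhibition figures quoted throughout this section are typically derived from a fluorescence readout such as thioflavin-T (ThT), where fibril formation raises the signal and an inhibitor lowers it relative to the peptide-alone control. A minimal sketch of that arithmetic; the fluorescence readings below are purely hypothetical, not data from the cited studies:

```python
def pct_inhibition(f_sample: float, f_control: float, f_blank: float = 0.0) -> float:
    """% inhibition = (1 - (F_sample - F_blank) / (F_control - F_blank)) * 100,
    where F_control is peptide alone and F_blank is buffer/dye background."""
    return (1.0 - (f_sample - f_blank) / (f_control - f_blank)) * 100.0

# Hypothetical readings: Abeta alone gives 1000 a.u., buffer blank 50 a.u.,
# and with an inhibitor present the signal drops to 450 a.u.
print(round(pct_inhibition(450.0, 1000.0, f_blank=50.0), 1))  # 57.9
```

Reporting the concentration at which this value crosses 50% gives the aggregation IC50 figures quoted above.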
Taking into account that in animal models curcumin undergoes rapid metabolic reduction and conjugation, resulting in poor systemic bioavailability after oral administration, they introduced various residues to protect the remaining phenolic hydroxyl group of 14 from being metabolized, although only one compound (15) exerted inhibitory activity comparable to 14. Additional experiments addressed the pharmacokinetic profile: each compound was orally administered to rats at 50 mg/kg. The Cmax of 15 was found to be 20-fold lower than that of curcumin (5.7 ± 3.3 ng/mL at 30 min vs. 125 ± 65 ng/mL at 15 min), and its brain concentration was 13-fold lower than that of curcumin itself. To achieve a more convenient pharmacokinetic profile, further structural changes were necessary: the central diketone skeleton of 15 was modified by introducing a pyrazole ring (16). Although IC50 values of 1.2 µM for Aβ aggregation and 0.66 µM for Tau aggregation are not stunning results, the concentration of 16 in the brain was 300-fold higher than that of 15 and 20-fold higher than that of curcumin. Li et al. synthesized and evaluated MTDLs based on rivastigmine-curcumin hybrids: rivastigmine shows centrally selective AChE and BuChE inhibitory activity free of hepatic metabolism, while curcumin represents a neuroprotective agent with a variety of functions (Fig. ). Compound 18 was the most potent AChE inhibitor, 20-fold more potent than the reference compound (rivastigmine, IC50 = 2.07 µM). Regarding the AChE inhibitory profile, the position of the aminoalkyl group on the benzene ring proved crucial for inhibition potency: while derivative 17 displayed only moderate micromolar activity, shifting this group to the 4-position led to nanomolar IC50 values, as shown in (Fig. ) (compounds 18 and 19).
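The pharmacokinetic fold-differences quoted above can be cross-checked directly from the reported Cmax values. A quick arithmetic sketch, using the mean values only (the ± errors are ignored for simplicity):

```python
def fold_difference(reference_ng_ml: float, sample_ng_ml: float) -> float:
    """How many times higher the reference concentration is than the sample's."""
    return reference_ng_ml / sample_ng_ml

# Mean Cmax after 50 mg/kg oral dosing in rats: curcumin 125 ng/mL, compound 15 5.7 ng/mL
ratio = fold_difference(125.0, 5.7)
print(f"curcumin Cmax is {ratio:.1f}-fold higher than that of compound 15")  # ~21.9-fold
```

The computed ratio of about 21.9 is consistent with the roughly 20-fold difference stated in the text.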
On the other hand, all compounds exerted good BuChE inhibitory activity, with compounds 17 and 19 showing a 40-50-fold improvement with respect to rivastigmine (BuChE IC50 = 0.37 µM). The Aβ anti-aggregation profiles of compounds 17, 18, and 19 were qualitatively evaluated by transmission electron microscopy (TEM). After incubating Aβ1-40 with the selected molecules, the reduction in Aβ1-40 deposition was evident, as only a few fibers could be observed; this was similar to curcumin and indicated that all compounds were also endowed with potent Aβ anti-aggregation capabilities. Interestingly, the addition of rivastigmine had little effect on aggregation. In a further assay designed to shed light on initial metabolism, compound 18 was incubated with rat cortex homogenate (AChE) and the extract was analyzed by HPLC-PS after 24 h; the prodrug activation process was confirmed by detection of compound 20. This compound showed potent ABTS [2,2′-azinobis-(3-ethylbenzothiazoline-6-sulfonic acid)] radical cation scavenging capacity (IC50 = 2.91 µM) relative to the melatonin control (IC50 = 1.92 µM), and moderate copper ion chelating activity in vitro. In 2017, Liu et al. reported the synthesis and biological evaluation of tacrine-curcumin derivatives as MTDLs (Fig. ), along with a deep molecular modeling study to rationalize their results. They evaluated AChE and BuChE inhibition in vitro, and the most active compounds were selected for further investigation. The AChE inhibitory activity of 21 and 23 was higher than that of tacrine (IC50 = 0.10 µM). Compound 21 showed higher inhibitory activity against both enzymes compared to the other compounds. The authors attributed this result to the absence of the aromatic ring at the end of the side chain and the smaller structure, making it suitable for accommodation in the catalytic gorge of AChE, which is relatively narrow and sterically hindered.
Regarding composition, AChE and BuChE differ mainly in their amino acids at the mid-gorge level: while AChE has several aromatic residues, in BuChE these are replaced by smaller aliphatic counterparts, resulting in a larger pocket in the latter. The molecular modeling study of compound 21 showed interactions with Trp84 and Phe330 through π-π stacking; thanks to the relatively small side chain of this tacrine derivative, it could smoothly enter the catalytic active site (CAS) pocket. On the other hand, compound 23 has an aromatic ring at the end of the side chain, resulting in an interaction stabilized by hydrogen bonding between the carbonyl group and the Tyr121 residue, so that the ligand can be perfectly located in the gorge of AChE, with the benzene ring binding to the CAS and the tacrine moiety binding to the peripheral anionic site (PAS). It is, therefore, understandable why 23 presented the most potent activity in the AChE enzymatic assay. The antioxidant capabilities of 21, 22, and 23 were determined by ORAC, using curcumin and tacrine as positive and negative controls, respectively. Curcumin was a potent peroxyl radical scavenger (3.1 Trolox eq.). Compounds 22 and 23 showed a potent ability to scavenge reactive oxygen species (2.0 and 2.4, respectively), while compound 21 exhibited a weak ROS-scavenging profile (0.4), of the same order as tacrine (0.3). Taking advantage of the aforementioned properties of curcumin derivatives as MTDLs, Dias et al. designed a series of compounds combining the feruloyl subunit present in curcumin with N-benzylpiperidine (a pharmacophoric subunit of DPZ), with the aim of obtaining MTDLs as promising lead prototypes for AD (Fig. ). The compounds were evaluated as EeAChE inhibitors, resulting in two active compounds (24, 25) relative to curcumin as reference (EeAChE IC50 = 132.12 µM), although still far from DPZ (EeAChE IC50 = 0.026 µM).
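The ORAC values cited in this section (µM Trolox equivalents per µM of test compound) come from comparing net areas under the fluorescein-decay curve of sample and Trolox standard, normalized by concentration; this is the standard relative-AUC form of the assay. A minimal sketch with hypothetical AUC values, not data from the cited work:

```python
def trolox_equivalents(auc_sample: float, auc_trolox: float, auc_blank: float,
                       conc_sample_um: float, conc_trolox_um: float) -> float:
    """ORAC Trolox equivalents: net sample AUC relative to net Trolox AUC,
    scaled by the concentration ratio of standard to sample."""
    net_sample = auc_sample - auc_blank
    net_trolox = auc_trolox - auc_blank
    return (net_sample / net_trolox) * (conc_trolox_um / conc_sample_um)

# Hypothetical: the sample protects fluorescein 2.4x as well as Trolox
# at equal concentration, so it scores 2.4 Trolox eq.
print(trolox_equivalents(auc_sample=34.0, auc_trolox=20.0, auc_blank=10.0,
                         conc_sample_um=1.0, conc_trolox_um=1.0))  # 2.4
```

On this scale, a value of 1.0 means "as protective as Trolox", which is why curcumin's 3.1 and compound 21's 0.4 read as strong and weak scavenging, respectively.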
On the other hand, compound 24 was also active against equine serum butyrylcholinesterase (eqBuChE), exhibiting a modest value compared to the DPZ standard (eqBuChE IC50 = 4.69 µM for DPZ) but greater activity than curcumin (eqBuChE IC50 > 300 µM). In a substrate competition assay, the authors found that compound 24 followed a non-competitive inhibition mechanism (complemented by molecular docking studies); interestingly, they concluded that substitutions on the aromatic ring of the N-benzylpiperidine lead to a decrease in AChE activity regardless of their electron-donating or electron-withdrawing character or their size. The antioxidant activity of the compounds was evaluated in vitro using the DPPH radical scavenging assay: compounds 24, 25, and 26 displayed an antioxidant profile and were effective in scavenging free radicals compared to ferulic acid, iso-ferulic acid, and Trolox as standards (DPPH EC50 = 35.54 µM, > 100 µM, and 40.86 µM, respectively). It is worth mentioning that compounds bearing a ferulic acid moiety displayed 100-fold greater radical-scavenging potency than their iso-ferulic acid counterparts, reinforcing the importance of the curcuminoid framework as an antioxidant. The neurotoxic effects of compounds 24 and 25 were evaluated in SH-SY5Y cells, showing an absence of cytotoxic and pro-oxidant effects; the authors suggest that the ferulic acid subunit contributes to counteracting ROS formation. The ability of compounds 24-26 to chelate biometals was studied by UV-Vis spectroscopy: all compounds were able to chelate only Cu+2 and Fe+2, but not Fe+3 or Zn+2. Taking these results into consideration, selected compounds were evaluated in vivo, where compound 24 displayed significant anti-inflammatory activity in different animal models, highlighting it as a potential multifunctional lead for AD treatment.
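EC50 values like the DPPH figures above (the concentration giving a 50% drop in radical signal) are typically interpolated from a small dose-response series. A minimal sketch using linear interpolation between the two bracketing points; the dose-response data below are synthetic, not the published measurements:

```python
def ec50_interpolate(concs: list, pct_scavenged: list) -> float:
    """Return the concentration at which scavenging crosses 50%,
    by linear interpolation between the bracketing data points."""
    pairs = list(zip(concs, pct_scavenged))
    for (c_lo, s_lo), (c_hi, s_hi) in zip(pairs, pairs[1:]):
        if s_lo <= 50.0 <= s_hi:
            frac = (50.0 - s_lo) / (s_hi - s_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("50% scavenging not bracketed by the data")

concs = [10.0, 25.0, 50.0, 100.0]   # uM, synthetic dilution series
scav  = [18.0, 40.0, 62.0, 85.0]    # % DPPH scavenged at each concentration
print(round(ec50_interpolate(concs, scav), 1))  # 36.4
```

In practice a sigmoidal (four-parameter logistic) fit is preferred over linear interpolation, but the bracketing idea is the same.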
RESVERATROL HYBRIDS

Resveratrol is a stilbene containing two aromatic rings linked by an ethylene bridge, and it exists as two geometric isomers, cis-(Z) and trans-(E), as shown in (Fig. ). The compound is found in many plant foods such as peanuts, pistachios, grapes, red and white wine, blueberries, cranberries, and even cocoa and dark chocolate. Its studied biological properties include anti-cancer, anti-inflammatory, anti-aging, cardioprotective, and antioxidant activity, as well as chelating and scavenging capability towards reactive oxygen species. Multiple studies detail the ability of resveratrol and its derivatives to inhibit amyloid β aggregation, although the underlying mechanism of action is not well understood. The versatile function of these compounds in plant defense mechanisms, as phytoalexins fighting fungal infection, ultraviolet radiation, stress, and injury, confers on them promising potential as pharmaceutical agents, and this framework has attracted much interest aimed at understanding their biosynthetic pathways and biological properties. One major limitation in the use of resveratrol as a therapeutic agent is its inherently poor aqueous solubility and low bioavailability. Studies of resveratrol and several other stilbenes in AD models suggest that stilbenes may be very effective modulators of AD development and progression, depending on their bioavailability and activity in vivo. To address these solubility and bioavailability concerns, several drug delivery systems have recently been developed, such as encapsulation in liposomal formulations, the use of cyclodextrin complexes as drug carriers for enhanced protein binding, and solid lipid nanoparticles for matrix-based delivery, among others.
Resveratrol is also associated with activation of silent information regulator-1 (SIRT1), which plays a critical role in neuronal protection by regulating reactive oxygen species (ROS), nitric oxide (NO), proinflammatory cytokine production, and Aβ expression in the brains of AD patients. SIRT1 has been found to be essential for synaptic plasticity, cognitive function, and the modulation of learning and memory. A recent review discussed the importance of the neuroprotective role of resveratrol in activating SIRT1, even though the mechanisms of action are still unclear and the anti-inflammatory and antioxidant action of this molecule may be independent of SIRT1. The challenge of devising resveratrol derivatives is mainly focused on obtaining compounds with improved efficiency, low toxicity, and better bioavailability and solubility for clinical application. In a recent paper, Pan et al. described the synthesis and evaluation of resveratrol-based compounds as MTDLs. Inhibitory activities against AChE and BuChE were tested, with tacrine and galantamine as reference standards (Fig. ). Compounds 27, 28, and 29 displayed higher inhibitory activity against cholinesterases than resveratrol (AChE IC50 = 165.24 µM; BuChE IC50 = 752.46 µM), indicating that the introduction of amino-group side chains may increase the inhibitory capability of the target compounds. In their original contribution, the authors evaluated different chain lengths and found that a six-carbon linker between the trans-stilbene moiety and the amino group was optimal for biological activity. They also explored different terminal amines, with compound 28 proving the most potent (almost 8-fold more potent than 27), and concluded that the methylene group could increase lipophilicity, leading to a rise in AChE inhibitory potency.
Compound 28 was selected for kinetic measurements using Lineweaver-Burk plots; in the graphical representation of the steady state for AChE inhibition, both slopes and intercepts increased with increasing inhibitor concentration, indicating that compound 28 is a mixed-type inhibitor able to bind both the CAS and PAS sites of AChE. Inhibition of Aβ42 self-induced aggregation was compared with resveratrol (68.51% at 20 µM) and curcumin (52.21% at 20 µM) as reference compounds; 28 and 29 displayed a similar inhibition profile, with compound 29, bearing the terminal cyclic amine, showing stronger inhibitory activity than the open-chain amino derivative 28. The authors also pointed out that the MAO-A inhibitory ability of the compounds was not relevant and apparently lacked a structure-activity relationship, while their MAO-B inhibitory activity was relatively potent and could be related to the length of the alkyl chain, with an n = 3 carbon-spacer compound (data not shown) exerting the best MAO-B inhibition (IC50 = 5.01 µM). Compounds 27 and 28 did not display relevant activity against MAOs compared with iproniazid as reference (MAO-A IC50 = 6.58 µM; MAO-B IC50 = 7.82 µM), whereas compound 29 displayed significant inhibition of both MAO-A and MAO-B while showing no toxicity in the SH-SY5Y neuroblastoma cell line at 1-50 µM. Considering that biometal (Fe, Cu, and Zn) ions may be crucial participants in the pathological processes of AD, Lu et al. combined resveratrol with clioquinol, a well-known metal chelator, to obtain a novel series of derivatives expected to behave as biometal chelators, antioxidants, and inhibitors of Aβ aggregation (Fig. ). Compounds 30 (79.50% at 20 µM) and 31 (78.06% at 20 µM) exhibited stronger Aβ aggregation inhibition than curcumin (52.77% at 20 µM, IC50 = 12.35 µM) and resveratrol (69.73% at 20 µM, IC50 = 15.11 µM).
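The Lineweaver-Burk diagnosis described above for compound 28 (both slope and intercept rising with inhibitor concentration signals mixed-type inhibition) can be illustrated with the standard mixed-inhibition rate law, in which the double-reciprocal line is 1/v = (Km/Vmax)(1 + [I]/Ki)(1/[S]) + (1/Vmax)(1 + [I]/Ki'). All kinetic constants below are hypothetical, chosen only to show the trend:

```python
def lb_slope_intercept(km: float, vmax: float, i: float,
                       ki: float, ki_prime: float) -> tuple:
    """Slope and y-intercept of the 1/v vs 1/[S] line under mixed inhibition.
    Ki governs binding to free enzyme (slope), Ki' to the ES complex (intercept)."""
    slope = (km / vmax) * (1.0 + i / ki)
    intercept = (1.0 / vmax) * (1.0 + i / ki_prime)
    return slope, intercept

KM, VMAX, KI, KI_PRIME = 50.0, 100.0, 5.0, 20.0  # arbitrary, hypothetical units
for inhibitor in (0.0, 5.0, 10.0):
    s, b = lb_slope_intercept(KM, VMAX, inhibitor, KI, KI_PRIME)
    print(f"[I]={inhibitor:4.1f}  slope={s:.3f}  intercept={b:.4f}")
# Both slope and intercept grow with [I], the fingerprint of mixed-type inhibition;
# a purely competitive inhibitor would change only the slope.
```

A competitive inhibitor (Ki' effectively infinite) would leave the intercept fixed, which is why the simultaneous rise of both parameters points to binding at two sites (CAS and PAS).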
Regarding the antioxidant activity, compounds 30 and 31 exhibited strong, although lower, antioxidant capacity compared to resveratrol (5.92 Trolox eq.) as a reference compound. The metal-chelating properties of the compounds were studied by UV-vis spectroscopy, and the ability of 30 and 31 to complex biometals such as Cu(II), Fe(II), Fe(III), and Zn(II) was measured. The results indicated the formation of 30 -Cu(II) and 31 -Cu(II) complexes, with 3:1 and 1:1 stoichiometry, respectively. Moreover, the ability of these compounds to inhibit Cu(II)-induced Aβ aggregation was investigated by ThT fluorescence and TEM. In the presence of Cu(II), well-defined Aβ fibrils were observed, while fewer fibrils were present when compounds 30 and 31 were added to the samples, demonstrating their capability to disassemble the highly structured fibrils induced by Cu(II). The MAO inhibitory ability of the compounds was evaluated using ladostigil (MAO-B inhibitor, IC50 = 37.1 µM) and clorgyline (MAO-A irreversible and selective inhibitor, IC50 = 4.1 nM) as references, and both displayed balanced MAO-A/MAO-B inhibitory activity. Furthermore, compounds 30 and 31 exhibited moderate AChE inhibitory activity. Intracellular antioxidant activity was evaluated in the SH-SY5Y cell line, with 30 and 31 proving more potent than Trolox, indicating that resveratrol derivatives have the potential to be efficient multifunctional agents. Finally, compound 30 was able to cross the blood-brain barrier in vitro and did not exhibit any acute toxicity in mice at doses of up to 2000 mg/kg. Jeřábek et al. fused the cholinesterase inhibitor drug tacrine with resveratrol, designing a series of new MTDLs. All compounds carried a 6-chlorotacrine fragment connected to a resveratrol-derived moiety. Among the selected compounds (Fig.
) only 32 and 33 showed significant AChE inhibitory activity compared to tacrine as reference (tacrine AChE IC50 = 0.5 µM; 6-chlorotacrine AChE IC50 = 0.07 µM; data taken from ref .). Compound 34, whose double bond confers a higher degree of structural rigidity than the other derivatives, displayed weak AChE inhibition. Additionally, a docking investigation revealed that the chlorine in the 6-position allows the compounds to establish Van der Waals interactions with hydrophobic residues of the AChE active site. Since chlorine can decrease the electron density on the aromatic ring of the tacrine moiety, it favors π-electron interactions with nearby residues . All compounds evaluated against BuChE displayed no enzyme inhibition when tested at 10 μM. Compounds 33 and 34 were the most active inhibitors of Aβ self-aggregation; even though it was not possible to establish a correlation between the rigid fragment and the anti-amyloid properties, the presence of the resorcinol ring (1,3-dihydroxybenzene) seems to be important, allowing hydrogen-bond interactions. This is clearly seen in compound 32 , endowed with a 2,4-dimethoxy substituent on the phenyl ring, which exhibited lower inhibitory activity on Aβ self-aggregation, whereas compounds 33 and 34 , with a resorcinol moiety, displayed an Aβ42 inhibitory profile similar to that of resveratrol as reference (Aβ42 self-aggregation inhibition = 30.0%). The antioxidant activity of compounds 32 , 33 , and 34 was assessed using 2,2-diphenyl-1-picrylhydrazyl (DPPH), expressed as the concentration that causes a 50% decrease in DPPH activity (EC50 values), with Trolox as a reference compound. In compounds 33 and 34 , the free hydroxyl groups on the phenyl ring were detrimental to free-radical scavenging efficacy; nevertheless, derivative 32 , bearing two methoxy groups, showed reasonable antioxidant activity, although lower than that of resveratrol (EC50 = 25.6 µM).
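DPPH EC50 values like those above are derived from the drop in DPPH absorbance (typically read at 517 nm) across a concentration series. A minimal sketch of that arithmetic, using hypothetical absorbance readings rather than any of the published data:

```python
# Hypothetical DPPH readings (absorbance at 517 nm); none of these numbers
# come from the papers discussed above.

def dpph_scavenging(a_control: float, a_sample: float) -> float:
    """Percent decrease in DPPH absorbance caused by the test compound."""
    return (a_control - a_sample) / a_control * 100.0

def ec50(concs, activities):
    """Estimate the concentration giving 50% scavenging by linear
    interpolation between the two points bracketing 50%."""
    pairs = sorted(zip(concs, activities))
    for (c_lo, a_lo), (c_hi, a_hi) in zip(pairs, pairs[1:]):
        if a_lo <= 50.0 <= a_hi:
            frac = (50.0 - a_lo) / (a_hi - a_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("50% activity not bracketed by the tested range")

a_control = 0.80                       # DPPH alone
concs = [5, 10, 25, 50, 100]           # µM
readings = [0.72, 0.62, 0.46, 0.30, 0.18]
activities = [dpph_scavenging(a_control, a) for a in readings]
print([round(a, 1) for a in activities])   # [10.0, 22.5, 42.5, 62.5, 77.5]
print(round(ec50(concs, activities), 1))   # EC50 ≈ 34.4 µM
```

In practice a four-parameter logistic fit over the whole curve is preferred, but the bracketing interpolation above conveys where the reported EC50 numbers come from.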
A clear cytotoxic effect was evident for compound 32 when assessed in rat cerebellar granule neurons at 5 µM, whereas compound 33 was neurotoxic only at the highest tested concentrations (25 and 50 µM). Compound 34 showed no clear neurotoxicity at any tested concentration. Finally, the authors found general hepatotoxicity for all derivatives, attributing it to the presence of the hepatotoxic tacrine fragment. In 2018, a significant advance was reported by Cheng et al., describing the synthesis and in vitro evaluation of hybrids merging maltol and resveratrol as MTDLs (Fig. ) . The ABTS radical scavenging method was used to determine the antioxidant capacity. Compounds 35 and 36 exhibited excellent antioxidant activity, even higher than Trolox (IC50 = 3.89 µM), showing that modification of the substitution pattern of the benzene ring with fluoro, ethoxy, or methoxy groups decreased the antioxidant activity (compounds not included in this discussion). The Aβ1-42 self-aggregation inhibition profiles of 35 and 36 proved more potent than those of resveratrol and curcumin, used as positive controls (IC50 = 11.89 and 18.73 μM, respectively). Biometals (copper, iron, and zinc) are able to facilitate Aβ aggregation through binding to three histidines (H6, H13, and H14) of the Aβ1-42 peptide . The TEM experiment demonstrated disaggregation of Aβ fibrils, indicating that compounds 35 and 36 can efficiently chelate Fe3+ and Cu2+ and inhibit Fe3+/Cu2+-induced Aβ aggregation. The synthesis and evaluation of prenylated resveratrol derivatives were recently discussed by Puksasook et al. (Fig. ). Prenylation consists of the addition of a hydrophobic prenyl chain, a naturally active moiety of β-secretase (BACE1) inhibitors . Aβ1-42 aggregation inhibition was evaluated using curcumin as a positive control (IC50 = 0.77 µM, anti-Aβ aggregation 87.98% at 100 µM).
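Percent-inhibition figures for Aβ aggregation, such as the curcumin value just quoted, are commonly obtained from a thioflavin T (ThT) fluorescence readout, in which the aggregation-dependent signal of the treated sample is compared with untreated Aβ after background subtraction. A minimal sketch with invented fluorescence readings (not the published values):

```python
def tht_inhibition(f_control: float, f_sample: float, f_blank: float = 0.0) -> float:
    """Percent inhibition of Abeta aggregation from ThT fluorescence.

    f_control: Abeta + ThT (uninhibited aggregation)
    f_sample:  Abeta + test compound + ThT
    f_blank:   ThT background without Abeta
    """
    return (1.0 - (f_sample - f_blank) / (f_control - f_blank)) * 100.0

# Invented readings: a compound that removes half of the aggregation-dependent
# signal scores 50% inhibition.
print(round(tht_inhibition(f_control=1000.0, f_sample=550.0, f_blank=100.0), 1))  # 50.0
```

Exact background handling varies between labs, so treat the formula as a representative sketch rather than the protocol of any specific paper cited here.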
The best result was obtained with derivative 39 , bearing a geranyl group at the C-4 position of the resorcinol ring and showing an effect similar to curcumin, followed by 37 , bearing a prenyl group at the same position. The authors confirmed by molecular docking that a prenyl group at C-4 was less effective than a geranyl group at the same position, possibly because the shorter alkyl side chain leads to fewer hydrophobic interactions. The inhibition of BACE1 was assayed using β-secretase inhibitor IV (Calbiochem ® ) as a reference compound (IC50 = 0.015 µM, β-secretase inhibition 96.51% at 50 µM); compound 40 was the most potent BACE1 inhibitor. Free-radical scavenging activity was evaluated using DPPH according to a modified version of the Brand-Williams method . Compounds 38 , 39 , and 40 showed stronger activity than 37 . The authors attributed this result to the free -OH groups, which are essential for antioxidant activity because they can donate hydrogen atoms and stabilize electrons by conjugation . The IC50 values were compared to vitamin C (IC50 = 21.63 µM, antioxidant DPPH inhibition 95.78% at 100 µM) as a positive control, with compound 40 emerging as the most potent antioxidant. A neuronal viability assay was carried out using the P19-derived neuron cell line. Compound 37 (>100% neuron viability at 1 nM to 10 µM) promoted high viability of the cultured neurons, while the geranylated resveratrol derivatives 38 , 39 , and 40 showed stronger neurotoxicity at 1 nM (% viability 51.25 ± 13.12, 70.07 ± 36.33, and 34.17 ± 29.98, respectively). The prenyl substituent at the C-4 position of compound 37 might play an important role in neuronal viability. The neuroprotective ability of compound 37 was evaluated in a serum-deprivation model using P19-derived neurons at concentrations of 1 nM and 10 µM.
Compound 37 significantly protected the cultured neurons against serum deprivation, with 50.59 ± 3.98 and 53.19 ± 12.48% viability, respectively (ROS toxicity being assumed from serum-deprivation-induced oxidative stress); it was more effective than resveratrol (37.41 ± 4.40% viability) and comparable to the quercetin positive control (58.04 ± 9.20% viability). Finally, regarding neuritogenic activity, compound 37 induced more branching (9.33 branches) than the control (2.12) and longer neurites (109.74 µm) than the positive control quercetin (104.33 µm). Tang et al. designed and studied isoprenylated resveratrol dimers (Fig. ). The inhibitory activities against MAOs were evaluated in vitro using p -tyramine as a nonselective substrate of MAO-A and MAO-B. Compounds 41 and 42 displayed enhanced inhibition towards MAO-B with respect to the A isoform. In addition, to evaluate their antioxidant activities, three independent approaches were used: the DPPH and ABTS radical scavenging methods and the ferric ion reducing antioxidant power (FRAP) assay. DPPH radical scavenging revealed that compounds 41 and 42 are endowed with significant antioxidant activity relative to Trolox (IC50 = 49.77 µM). The ABTS and FRAP analyses showed a similar trend of free-radical scavenging activity. Potential toxic effects were evaluated in PC12 and BV2 cells. Compounds 41 and 42 were tested for their capacity to protect PC12 cells against oxidative stress-associated death induced by H2O2. The results showed that these compounds could significantly inhibit cell death at concentrations ranging from 6.25 to 25 μM, and both exhibited very low toxicity in the PC12 and BV2 cell lines. The neuroprotective effect was further evaluated against oxidative injuries in PC12 cells, using oligomycin-A and rotenone to simulate toxic lesions .
Both compounds exerted relatively poor neuroprotective activity against rotenone-induced cell damage, while they showed moderate to high neuroprotective activity against oligomycin-A. As depicted in (Fig. ), compound 41 displayed improved biological activities, and its BBB-crossing capability was enhanced with respect to 42 . Xu et al. integrated the resveratrol and deferiprone (a known iron chelator) scaffolds into a novel series, with the aim of developing new MTDLs for AD (Fig. ). The Aβ self-induced aggregation inhibition profile was tested using the ThT-based fluorometric assay. Compound 44 displayed stronger Aβ inhibitory activity than resveratrol and curcumin (64.08% and 56.44%, respectively), while compound 43 exhibited similar behavior. The antioxidant activity was determined by the ABTS radical scavenging method, employing Trolox as a positive control. Compound 43 showed higher antioxidant activity than the reference (IC50 = 3.89 µM), while compound 44 exerted a similar effect; as expected from the design, compound 43 demonstrated improved antioxidant properties. The pFe(III) values were determined by fluorescence spectroscopy, with deferiprone used as a reference compound . Compounds 43 and 44 displayed Fe(III) scavenging properties closely related to those of deferiprone (pFe(III) = 20.60). Yang et al. investigated a series of pyridoxine-resveratrol hybrids by introducing Mannich base moieties. According to the authors, hybrids containing phenolic Mannich base moieties may exhibit good antioxidant properties , AChE inhibitory activity , and metal-chelating properties . Vitamin B6 (pyridoxine) has a critical function in cellular metabolism and the stress response; furthermore, it also behaves as a potent antioxidant that effectively quenches reactive oxygen species . The inhibition of cholinesterases was evaluated in vitro using AChE from Electrophorus electricus ( Ee AChE) and BuChE from rat serum (Fig. ).
Compound 45 was inactive as an Ee AChE inhibitor, while compound 48 displayed the strongest Ee AChE inhibitory activity in the series, although lower than DPZ (IC50 = 23.0 nM). On the other hand, compound 46 , bearing a piperidine unit, showed stronger Ee AChE inhibition than the structurally related compound 47 , which differs only in the oxygen of the morpholine moiety. To explore the mechanism of action of these hybrids, a kinetic study was carried out for compound 46 , indicating mixed-type inhibition and supporting dual-site binding to both the CAS and PAS of AChE. All compounds were inactive or weak as BuChE inhibitors. MAO inhibitory activity was evaluated using clorgyline (MAO-B IC50 = 8.85 µM; MAO-A IC50 = 7.9 nM), rasagiline (MAO-B IC50 = 0.044 µM; MAO-A IC50 = 0.71 µM), and iproniazid (MAO-B IC50 = 4.32 µM; MAO-A IC50 = 1.37 µM) for comparative purposes. All tested compounds showed much stronger inhibitory activity towards MAO-B than MAO-A. The intermediate 45 showed the highest MAO-B inhibitory activity, followed by 47 , suggesting that the Mannich base moiety was detrimental to MAO-B inhibition (a reminder of the importance of extending biological assays to synthetic intermediates). The antioxidant activity was evaluated by the ORAC fluorescein method. All compounds exhibited good ORAC values, ranging from 1.76 to 2.56, compared to resveratrol (ORAC = 5.60 Trolox eq.); the isopropylidene-protected derivative 48 showed slightly weaker antioxidant activity than 46 , which could be related to the free hydroxyl group that the protected derivative lacks. CHROMONE DERIVATIVES Chromones are a group of oxygen-containing heterocyclic compounds (Fig. ), widespread and naturally occurring. They represent an unusual group of structurally diverse secondary metabolites, derived from the convergence of multiple biosynthetic pathways, that are widely distributed through the plant and animal kingdoms .
The chromone scaffold (4H-1-benzopyran-4-one) has also been extensively recognized as a key pharmacophore . The chromone ring is the core fragment of several flavonoid derivatives, such as flavones and isoflavones . The structural diversity of chromones in nature allows their division into simple and fused chromones. These heterocycles have attracted much attention because they show a variety of pharmacological properties, including anti-inflammatory , analgesic , metal-chelating , antioxidant , antimicrobial , antifungal , and neuroprotective effects , among others . In recent years, many research groups have optimized their chemical structure in order to develop new derivatives for potential AD therapy, the main focus being their neuroprotective capability, cholinesterase (ChE) inhibition, MAO inhibition, and amyloid-β aggregation inhibitory activities . Furthermore, Reis and coworkers showed that chromone is a privileged scaffold for the development of novel MAO-B inhibitors, highlighting the effect of the nature of the substituent located at the C3- and/or C6-positions of the benzopyrone ring. Chromone derivatives have also been applied to the preparation of fluorescent probes owing to their photochemical properties . Flavones and isoflavones, which contain the chromone core, are preferential scaffolds for the development of MAO inhibitors . Li et al. reported the synthesis of tacrine-flavonoid hybrids as multifunctional ChE inhibitors; their results showed that the chromone framework contributes to the bioactivities of the flavonoid hybrids. Based on such findings, in recent work, Wang et al. reported a series of chromone-donepezil hybrids (Fig. ) whose inhibitory activity against eq BuChE and Ee AChE was evaluated.
Compound 49 , carrying a 6-methoxy substituent on the chromone moiety, displayed the strongest inhibition, similar to DPZ against AChE but much stronger against BuChE ( eq BuChE IC50 = 2.47 μM; Ee AChE IC50 = 0.032 μM). Compound 50 exerted significant inhibitory activity against both ChEs, although weaker. Indeed, since the only difference between the two is the OBn group, the steric drawbacks of the latter become clear. Both compounds, endowed with an N-ethylcarboxamide linker between the benzylpiperidine and chromone moieties, exhibited higher inhibitory activity than the others lacking this spacer (data not shown). Regarding the MAOs, the position and nature of the substituents shifted the inhibitory profile. Compounds 49 and 50 showed weak inhibition of MAO-A; for MAO-B, however, the inhibitory strength was directly related to the length of the alkylene chain. Compound 50 displayed higher MAO-B inhibitory potency than iproniazid (IC50 = 6.93 µM) and potency similar to pargyline (IC50 = 0.12 µM), used as reference compounds. Compound 50 was selected for kinetic studies of ChE and MAO-B inhibition. Interestingly, mixed-type inhibitory behavior was found for AChE, while for BuChE a competitive mechanism was indicated; in addition, the kinetic profile of 50 towards MAO-B was compatible with competitive inhibition. Molecular modeling supported these outcomes. Moreover, compound 50 could penetrate the BBB to target the enzyme in the CNS and showed low cell toxicity in rat pheochromocytoma (PC12) cells in vitro . These results shed light on multifunctional agents that may contribute to the field of multitarget-directed ligands for potential AD therapy. Pachón-Angona et al. combined donepezil + chromone + melatonin scaffolds, prepared by a multicomponent reaction (MCR) synthetic strategy that transforms three or more starting materials into new products in a one-pot procedure (Fig. ) .
First, the antioxidant behavior of these compounds was assessed by the ORAC-FL method. Ferulic acid and melatonin were used as positive references (ORAC values of 3.74 and 2.45 Trolox eq., respectively). Compound 51 exhibited strong antioxidant power, higher than melatonin and similar to ferulic acid, while other compounds with a linker length of n = 1,2 displayed antioxidant capabilities more potent than ferulic acid (ORAC = 6.52 Trolox eq. for n = 2 and R = H). MAO inhibitory activity was evaluated in vitro using clorgyline and pargyline as references. Compound 51 showed moderate MAO-A inhibition, being less active than clorgyline (IC50 = 0.05 µM), and lower MAO-B inhibitory activity compared to pargyline (IC50 = 0.08 µM). ChE inhibitory activity was evaluated on Ee AChE and eq BuChE using DPZ and tacrine as references. Compound 51 showed strong eq BuChE inhibition, stronger than DPZ (IC50 = 840 nM) though diminished with respect to tacrine (IC50 = 5.1 nM). Regarding the structure-activity relationship, for the same substituent the most potent inhibitor was that with an n = 2 linker (IC50 = 6.29 nM, R = OCH(CH3)2), while those with n = 3 and n = 4 displayed lower potency. On the other hand, 51 proved to be a moderate AChE inhibitor; the most potent compounds were those bearing propoxy or isopropoxy substituents on the indole ring (IC50 = 0.08 µM and 0.09 µM, respectively). Finally, molecular docking simulations showed that, in the ChEs, the chain ending in the pyrrole and the chromone ring were crucial for binding to the active site of the enzyme, while the MAO analysis revealed that the N-benzylpiperidine chain was required to achieve good inhibitory profiles. In 2017, Li et al. described the synthesis of chromone derivatives combining the pharmacophore moiety L1 , previously reported to regulate metal-induced Aβ aggregation, ROS production, and neurotoxicity in vitro , with clioquinol (Fig. ).
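The ORAC-FL values quoted throughout this section are expressed in Trolox equivalents, i.e., the net protected area under the fluorescein-decay curve of the sample relative to that of Trolox at the same concentration. A minimal sketch with invented decay curves and simple trapezoidal integration (none of these numbers come from the papers discussed):

```python
def auc(times, signal):
    """Trapezoidal area under a fluorescence-decay curve."""
    return sum((signal[i] + signal[i + 1]) / 2.0 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

def trolox_equivalents(times, f_sample, f_trolox, f_blank):
    """Net protected AUC of the sample relative to Trolox at equal concentration."""
    net_sample = auc(times, f_sample) - auc(times, f_blank)
    net_trolox = auc(times, f_trolox) - auc(times, f_blank)
    return net_sample / net_trolox

times    = [0, 10, 20, 30]                  # minutes (hypothetical)
f_blank  = [1.0, 0.20, 0.05, 0.0]           # fluorescein + radical generator only
f_trolox = [1.0, 0.80, 0.40, 0.1]           # decay slowed by Trolox
f_sample = [1.0, 0.90, 0.70, 0.4]           # decay slowed by the test compound
print(round(trolox_equivalents(times, f_sample, f_trolox, f_blank), 2))  # 1.55
```

Real ORAC protocols normalize across a Trolox calibration series rather than a single curve, but the ratio of net areas is the quantity the "Trolox eq." figures summarize.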
The inhibitory activities against MAOs were measured and compared to those of rasagiline (MAO-A IC50 = 49.7 µM and MAO-B IC50 = 7.47 µM) and iproniazid (MAO-A IC50 = 6.46 µM and MAO-B IC50 = 7.98 µM). Compound 52 displayed strong MAO inhibition. The nature and position of the substituents shaped the structure-activity relationship: the most potent and selective MAO-A inhibitor was compound 53 (IC50 = 1.65 µM; R1 = Cl, R2 = H, R3 = H, R4 = H), while 54 (IC50 = 0.634 µM; R1 = H, R2 = H, R3 = Cl, R4 = CH3) displayed the most potent inhibitory activity towards MAO-B. Compound 52 exhibited moderate Aβ aggregation inhibition, although stronger than curcumin and resveratrol (46.5% and 57.2%, respectively). The strongest Aβ aggregation inhibitor was compound 55 , exhibiting 89.5% inhibition (R1 = H, R2 = H, R3 = CH3, R4 = Br), even though it lacks a multi-target profile. The ThT binding assay and TEM were used to assess the degree of Aβ aggregation . On the basis of the results, the authors concluded that compound 52 was capable of inhibiting Cu2+-induced Aβ aggregation, exhibited significant antioxidant activity, metal-chelation capability, and reduction of H2O2-induced intracellular ROS accumulation, and was able to cross the BBB (Pe values > 4.0). It is worth mentioning that it did not show significant toxicity in PC12 cells, suggesting that further investigation of this scaffold may yield advances in AD multitarget therapy. Reis et al. reported a series of chromone 2- and 3-phenylcarboxamide derivatives (Fig. ). Regarding ChE inhibition, compound 56 displayed submicromolar activity towards AChE and was inactive towards BuChE.
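Isoform preference in comparisons like the one above is often condensed into a selectivity index, the ratio IC50(MAO-A)/IC50(MAO-B), where values above 1 indicate MAO-B selectivity. A minimal sketch using the rasagiline reference values quoted in the text:

```python
def selectivity_index(ic50_mao_a: float, ic50_mao_b: float) -> float:
    """IC50(MAO-A) / IC50(MAO-B); > 1 means the compound prefers MAO-B."""
    return ic50_mao_a / ic50_mao_b

# Rasagiline reference values from the text: MAO-A IC50 = 49.7 µM, MAO-B IC50 = 7.47 µM.
print(round(selectivity_index(49.7, 7.47), 2))  # ~6.65, i.e. MAO-B selective
```

The same ratio, inverted, quantifies MAO-A selectivity, e.g. for compound 53 above; because IC50 values depend on assay conditions, the index is only meaningful when both IC50s come from the same assay.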
Furthermore, compound 57 displayed bifunctional ChE inhibitory activity in the low micromolar range, while compound 56 , bearing a methyl group in the para position of the chromone exocyclic phenyl ring and two methyl groups on the tertiary amine, also showed submicromolar MAO-A and micromolar MAO-B values, albeit still far from clorgyline (IC50 = 0.0045 μM) and rasagiline (IC50 = 0.050 μM). Compared with previous work of this group on similar structures, this result was less remarkable with regard to MAO-B inhibitory activity . Compound 57 , carrying no substituent on the exocyclic ring, was inactive towards MAO-A while acting as a selective MAO-B inhibitor. A kinetic study performed on both MAO-A and MAO-B showed that 56 and 57 behave as competitive MAO inhibitors, while evaluation of the AChE inhibition mechanism of 56 indicated a mixture of competitive and non-competitive mechanisms. The most promising chromones were screened against human BACE-1; however, none of the compounds displayed relevant potency (IC50 > 10 μM). The cytotoxicity profile was evaluated in differentiated human neuroblastoma (SH-SY5Y) and human hepatocarcinoma (HepG2) cell lines, both clinically relevant. Compound 57 presented a wider safety profile and a promising safety margin. Starowicz et al. studied the ability of various spices and herbs characteristic of European cuisine to inhibit the formation of advanced glycation end products (AGEs), along with their antioxidant capacity. Glycation is a reaction leading to the formation of irreversible structures called AGEs, high concentrations of which can initiate processes leading to various disorders such as AD, atherosclerosis, diabetes, kidney disease, and chronic heart failure . The research group of Singh and coworkers focused their efforts on the design and synthesis of chromen-4-one derivatives, making modifications at different positions of the “skeleton key” .
The inhibition of AChE was determined, and compound 58 (Fig. ) exhibited the strongest inhibitory profile, higher than the DPZ standard (IC50 = 12.7 nM). However, a further increase in the carbon spacer (n = 6, 8) reduced the activity about 3-fold (IC50 range 48.1 to 67.2 nM); thus, spacer chain lengths of n = 2 and n = 3 were optimal for cyclic aminoalkyl groups in terms of AChE inhibition. An anti-glycation assay was performed according to the method reported by Matsuura et al. with slight modifications; compound 58 displayed significant inhibitory activity compared to aminoguanidine as the reference drug (AGEs IC50 = 40.0 µM). With respect to in vitro antioxidant activity, compound 58 showed lower activity than ascorbic acid (EC50 = 20.0 nM); furthermore, the authors concluded that the conjugated system of the chromen-4-one moiety appears to be crucial to the radical scavenging behavior. The kinetic study of compound 58 indicated mixed-type inhibition, with binding to both the CAS and PAS of the enzyme. Likewise, docking studies revealed the dual binding property, as it interacted with both the CAS and the PAS via hydrogen bonds, π-π aromatic, and hydrophobic interactions, complementing the kinetic data. Coumarin and chromone are two structural isomers that exhibit relevant pharmacological activities . Fonseca and coworkers performed a comparative study of coumarin- and chromone-3-phenylcarboxamide scaffolds and their structure-activity relationships (SAR) as MAO inhibitors (Fig. ). First, the authors carried out a docking study of ligand-target recognition using the principal skeletons of both series of compounds; the binding-mode analysis did not reveal significant differences between the coumarin and chromone scaffolds.
Consequently, the design of new derivatives focused on i) the effects of different substituents on the benzopyrone ring; ii) substituent position and its electron-donating or -withdrawing character; and iii) whether the position of the carbonyl group in the isomeric structures has any impact. Compounds 59 and 60 , bearing a meta chlorine substituent on the benzamide portion, showed stronger MAO-B inhibition than the standard drugs deprenyl (IC50 = 16.73 nM) and safinamide (IC50 = 23.07 nM). SAR analysis of the remaining compounds (not presented here) showed that para substituents decreased activity, as did a hydroxyl group in either the meta or para position, while the position of the carbonyl group in the coumarin or chromone moiety was apparently not relevant. Both compounds were inactive towards MAO-A, and kinetic studies revealed a noncompetitive inhibition mechanism for both. In a recent article, Shaikh et al. designed a series of chromone-derived α-aminophosphonates in a one-pot reaction catalyzed by porcine pancreatic lipase under solvent-free conditions. The α-aminophosphonates are a class of compounds of promising biological and pharmacological importance as anti-AD agents . Compound 61 (Fig. ) was the most potent AChE inhibitor compared to tacrine (IC50 = 0.29 µM), galantamine (IC50 = 3.64 µM), and rivastigmine (IC50 = 5.21 µM), showing higher activity towards AChE than BuChE. As an important observation regarding ChE inhibition, aliphatic amines displayed a stronger inhibitory profile towards AChE, while aromatic ones performed better in BuChE inhibition. The kinetic study of the ChEs revealed mixed-type inhibition, in agreement with the molecular docking results . In addition, the antioxidant activity was evaluated using the DPPH and hydrogen peroxide scavenging methods.
Compound 61 exhibited the greatest radical elimination, with high scavenging activity comparable to ascorbic acid (DPPH IC50 = 42.28 µM; H2O2 IC50 = 51.45 µM). Finally, 61 showed significant DNA damage protection activity. INDOLE DERIVATIVES Indole is a planar heterocyclic molecule in which a benzene ring is fused to a pyrrole ring through the 2- and 3-positions of the latter (Fig. ). Owing to the delocalization of its π electrons, it undergoes electrophilic substitution reactions, making it a widely used chemical scaffold in medicinal chemistry. Its relevance in biological systems stems from being built into proteins through the indolic amino acid tryptophan ; thus, the indole moiety is considered a biologically accepted pharmacophore in medicinal compounds . Indole is a prominent phytoconstituent across various plant species and is produced by a variety of bacteria . The indole-derived phytoconstituents and bacterial metabolites result from biosynthesis via the coupling of tryptophan with other amino acids. For this reason, it is a constituent of flower perfumes, of pharmacologically active indole alkaloids, and of some animal hormones and neurotransmitters such as serotonin and melatonin . Some naturally occurring indole alkaloids have gained FDA approval, including vincristine, vinblastine, vinorelbine, and vindesine for anti-tumor activity , ajmaline for anti-arrhythmic activity , and physostigmine for glaucoma . Taking inspiration from these natural compounds, several synthetic drugs have reached the patient's bedside, such as indomethacin (NSAID) , ondansetron (chemotherapy-induced nausea and vomiting) , fluvastatin (hypercholesterolemia) , and zafirlukast (leukotriene receptor antagonist) . The success of the above-mentioned compounds indicates the importance of this ring system in multidisciplinary fields, including the pharmaceutical and agrochemical industries. Luo et al.
reported the synthesis of multifunctional hybrids based on melatonin-benzylpyridinium bromides (Fig. ), whose cholinergic activities were evaluated. The most promising derivative was compound 63 , showing significant inhibitory activity against AChE, although 10-fold lower than the DPZ reference (IC50 = 0.014 µM). Hybrid 62 , in turn, exhibited stronger inhibitory activity against BuChE, about 70-fold more potent than DPZ (IC50 = 5.6 µM). The authors highlighted the relevance of the substituents on the main moieties: substitutions with varied electronic properties produced little fluctuation in the inhibitory activity, except for the introduction of a cyano group at the para position of the benzylpyridinium moiety (AChE IC50 = 22.9 µM; BuChE IC50 > 100 µM). Regarding the indole moiety, the 5-methoxy substituent had no influence on ChE inhibition compared to the corresponding unsubstituted hybrid. The antioxidant activity was evaluated using the oxygen radical absorbance capacity by fluorescence (ORAC-FL) method . Melatonin, an endogenous neurohormone with strong antioxidant properties , was tested as reference (2.34 Trolox eq.), and compound 62 exhibited comparable activity; compound 63 , with an extra double bond within the spacer between the two moieties, showed the most potent antioxidant activity. Furthermore, derivatives bearing the 5-methoxy group displayed enhanced activity with respect to the unsubstituted one. A kinetic study was performed for compound 63 . For AChE, the Lineweaver-Burk plots indicated mixed-type inhibition, suggesting that compound 63 can interact with both the CAS and PAS of AChE. A different behavior was observed for BuChE, with different Km and Vmax at different concentrations; in this case, compound 63 might act as a competitive inhibitor of the BuChE isozyme. Cell viability and neuroprotection studies were assayed in the human neuroblastoma cell line SH-SY5Y.
The MTT assay was used to examine potential cytotoxic effects, with no toxicity displayed by 62 and 63 over the range of concentrations studied (1-50 μM). Furthermore, both compounds were tested for their capacity to protect human SH-SY5Y cells against oxidative stress-associated death induced by H2O2: compounds 62 and 63 showed neuroprotective effects at concentrations ranging from 1 to 10 µM, with compound 62 showing higher protective capability than the melatonin reference (at 10 µM). In addition to the aforementioned, another target that may play a significant role in the pathophysiology of neurological diseases is monoamine oxidase. MAO is the enzyme that catalyzes the oxidative deamination of a variety of biogenic and xenobiotic amines ; alterations in other neurotransmitter systems, especially the serotoninergic and dopaminergic systems, are also thought to be related to many of the behavioral disturbances observed in AD patients . In this line, Bautista-Aguilera et al. described the synthesis and pharmacological evaluation of novel hybrids designed by combining the previously reported N -[(5-benzyloxy-1-methyl-1 H -indol-2-yl)methyl]- N -methylprop-2-yn-1-amine with 1-benzylpiperidine, a fragment present in DPZ (Fig. ). Among the synthesized hybrids, the most promising derivative exhibited potent inhibition of both MAO enzymes and moderate inhibition of the ChEs. Compound 64 proved to be a stronger MAO-A and MAO-B inhibitor than DPZ (MAO-A IC50 = 850 µM; MAO-B IC50 = 15 µM), as well as a stronger BuChE inhibitor relative to the same reference ( Ee AChE IC50 = 0.013 µM; eq BuChE IC50 = 0.84 µM). The pharmacological evaluation indicated that the 1-benzylpiperidin-4-yl unit plays a key role in the AChE inhibitory activity, suggesting that this moiety mediates binding to the enzyme.
According to the design, the results showed that the linker length did not seem to be a decisive factor for the inhibitory potency against ChEs, whereas it had a relevant effect on MAO inhibition. In contrast, replacement of the piperidine with its bioisostere piperazine caused a drastic reduction in inhibitory activity, yielding compounds inactive against ChEs (data not shown). A number of dual-binding-site AChE inhibitors have been found to exhibit significant inhibitory activity on Aβ self-aggregation; accordingly, compound 64 exhibited a significant inhibitory effect on both Aβ self-induced aggregation and human AChE-dependent aggregation, being more potent against the latter than the parent compound DPZ. The Aβ aggregation inhibition values for compound 64 were 47.8% (self-induced) and 32.4% (AChE-induced). This behavior may be explained by the kinetic study, which indicated a mixed-type inhibition. Molecular modeling suggests that 64 mimics the binding mode of DPZ in the crystal structure of AChE. Several studies have documented the key activity of melatonin in scavenging a variety of reactive oxygen species and its moderate inhibition of Aβ aggregation by affecting the synthesis and maturation of APP, processes that play an important role in AD. In this line, Wang et al. described the synthesis and biological evaluation of donepezil-melatonin derivatives (Fig. ), aiming to take advantage of the potential neurogenic profile of melatonin-based hybrids endowed with additional anticholinergic properties. Compound 65 showed a significant inhibitory profile against EeAChE, higher than that of tacrine (IC50 = 0.23 µM), although lower than that of DPZ (IC50 = 0.04 µM). Furthermore, 65 showed strong eqBuChE inhibition relative to donepezil (IC50 = 3.36 µM) and similar to tacrine (IC50 = 0.05 µM).
Consistent with these results, modification of the indole ring with a methoxyl group conferred higher inhibitory potency than the corresponding unsubstituted compound (data not shown). In addition, the alkyl linker length (n in Fig. ) influenced the observed activities. Kinetic analysis and molecular modeling studies revealed that compound 65 acted as a mixed-type AChE inhibitor, binding simultaneously to the CAS and PAS of the enzyme. The inhibition of Aβ1-42 self-aggregation by 65 was improved with respect to curcumin (45.2% at 20 µM) and resveratrol (43.5% at 20 µM). For the remaining compounds (not considered in this discussion), an electron-donating group on the benzene ring (A) appeared unfavorable for Aβ1-42 aggregation inhibition. Likewise, compound 65 exhibited significant antioxidant activity in the ORAC assay relative to melatonin (2.3 trolox eq.); it can also chelate metal ions, reduce oxidative stress-induced PC12 cell death, and penetrate the BBB. Several studies have reported that phosphodiesterase (PDE) inhibitors, such as sildenafil, tadalafil, and icariin, also display potent anti-AD effects in different mouse models of AD, significantly reversing cognitive impairment and improving learning and memory. To illustrate this, Puzzo et al. reported that sildenafil was beneficial in a mouse model of amyloid deposition, producing amelioration of synaptic function and memory associated with a reduction of Aβ levels. In 2012, García-Osta et al. reviewed the properties of phosphodiesterase 5 (PDE5) inhibitors, noting that they can act via anti-amyloid mechanisms, exhibit good BBB penetration, and decrease p-Tau levels; they shed light on the inhibitors' pharmacokinetics, safety, and in vivo efficacy in animal models, but highlighted the lack of clinical trials in AD patients. Furthermore, Fiorito and coworkers proposed PDE5 inhibitors as promising therapeutic agents for the treatment of AD.
They synthesized quinoline derivatives with prominent outcomes in PDE5 inhibition and promising results in an in vivo mouse model of AD. In addition, a study of rats in the object recognition task by Prickaerts et al. suggested that PDE5 inhibitors improve processes of consolidation of object information, whereas AChE inhibitors improve processes of acquisition of object information. Therefore, dual AChE/PDE5 inhibitors could exert a synergistic anti-AD effect and may supply a new perspective and breakthrough for the treatment of AD. In line with this, Mao et al. described a series of novel tadalafil derivatives in the search for dual-target AChE/PDE5 inhibitors as candidate drugs for potential AD therapy. The design of these derivatives was based on the PDE5 inhibitory activity of the tadalafil scaffold, varying only the substituent attached to the N-atom of the piperazine-2,5-dione and incorporating moieties such as morpholine, benzylpyridine, dimethylamine, benzylamine, and benzylpiperidine. The results showed that the substituents in the R1 group (Fig. ) and the absolute configuration (R,R) remarkably affected the AChE inhibitory activities. Compounds 66 and 67 exhibited the strongest AChE inhibition, with nanomolar IC50 values. The chain length between the tadalafil and 1-benzylpiperidine moieties played a pivotal role in the AChE activities, and the optimal chain length was established as two methylenes (n = 2). Furthermore, the influence of stereochemistry on AChE inhibition was considered a key factor, although the diastereoisomers 66 and 67 showed almost the same AChE inhibitory activity, comparable in potency to DPZ and huperzine A (IC50 = 0.013 µM and IC50 = 0.084 µM, respectively). However, both derivatives exhibited weak BuChE inhibitory activity.
PDE5 inhibitory activity was determined by an IMAP-FP (immobilized metal ion affinity-based fluorescence polarization) assay. The results showed that most of the tested compounds presented values ranging from 0.032 to 23.20 µM. In this context, the chain length had no obvious influence on PDE5A1 inhibition. Moreover, compounds bearing aryl methyl and pyridyl substituents at the piperidine nitrogen exhibited higher inhibitory activity than unsubstituted ones. Finally, 66 and 67 exhibited good to moderate PDE5A1 inhibitory activity with respect to the other derivatives studied. Moreover, their BBB-crossing capabilities (Pe = 7.67 and 9.25 × 10-6 cm s-1, respectively) indicated that both compounds could be considered potential dual-target AChE/PDE5 inhibitors. The serotonergic system has been widely studied and well documented in relation to AD progression. Modulation of the 5-HT4 and 5-HT6 receptors has recently been shown to enhance cognition in AD models. 5-HT4 receptors (5-HT4R) control brain functions such as learning, memory, feeding, and mood behavior. In the AD context, activation of 5-HT4R can promote the nonamyloidogenic cleavage of APP, leading to the formation of a neurotrophic protein, sAPPα. On the other hand, 5-HT6 receptors (5-HT6R) play a role in functions such as motor control, cognition, and memory. A new proposal combining 5-HT4 affinity with nanomolar AChE inhibition was reported by Lecoutey et al. in 2014 with the design and synthesis of donecopride (Fig. ). RS67333 is a potent 5-HT4R partial agonist that had been investigated as a potential antidepressant, nootropic, and potential treatment of AD. Interestingly, RS67333 was also established as a low-micromolar AChE inhibitor by the aforementioned authors.
This finding led them to pharmacomodulate it in order to enhance the AChE inhibition profile without significantly affecting 5-HT4R binding, while micromolar BuChE inhibition was also achieved (AChE IC50 = 16.0 nM; BuChE IC50 = 3.5 µM; 5-HT4 Ki = 6.6 nM). Moreover, the capability of donecopride to increase sAPPα was also demonstrated (EC50 = 11.3 nM). According to the authors, donecopride is able to exert not only a symptomatic effect but also a disease-modifying effect against AD. Among a wide number of tested molecules, donecopride was selected for in vivo studies, showing no effect on spontaneous locomotor activity at the maximum dose of 10 mg/kg. At 0.3 and 1 mg/kg, a procognitive effect with an improvement in memory performance was observed, along with an antiamnesic effect against scopolamine-induced spontaneous alternation deficit. Moreover, a slight antidepressant effect was suggested by a decreased immobility time during a forced swimming test. Later on, donecopride was found to display potent anti-amnesic properties in AD animal models, preserving learning capabilities, including working and long-term spatial memories. Clinical trials will soon be undertaken to confirm these findings in a First in Human study. Lalut et al. designed a series of derivatives based on fine-tuning of donecopride (Fig. ). By replacing the benzene ring with an indole residue, they obtained MTDLs with enhanced biological activities. Compounds 68, 69, 70, and 71 were evaluated for their capacity to inhibit hAChE and to bind the guinea pig (gp) 5-HT4R. All compounds displayed decreased affinity for the 5-HT4R with respect to donecopride (Ki = 9.5 nM), with 71 showing the strongest binding profile. The SAR analysis revealed that a cycloalkyl or alkyl substituent on the piperidine ring improved the affinity for this receptor compared with an N-benzyl ring. The substituents (chloro and methoxy) present on the indole moiety did not significantly influence the activity.
For AChE inhibition, compounds 69, 70, and 71 displayed low IC50 values, of the same order as donecopride (IC50 = 16 nM) and DPZ (IC50 = 6.0 nM). In this case, the N-Bn substituent greatly increased AChE inhibition relative to cycloalkyl or alkyl substituents. While the substitution pattern or nature of substituents at the indole moiety seems to have little influence on activity, N-substituents can dramatically decrease it. Concerning kinetic studies, the compounds showed non-competitive inhibition, therefore interacting with the PAS and the anionic subsite of AChE. Finally, compound 69 displayed a protective effect against dizocilpine-induced impairment in the passive avoidance test in mice. Previous studies reported that C5-substituted indole compounds containing a propyl spacer connected to different moieties, such as piperazines and arylpiperazines, were endowed with serotoninergic activity. Likewise, other studies have also reported AChE inhibitory activity in compounds containing these skeletons. Intending to combine these discoveries, Rodriguez-Lavado et al. recently reported the synthesis and in vitro evaluation of a new series of indolylpropyl benzamidopiperazines as promising MTDLs with dual activity against hSERT and hAChE (Fig. ). Compounds 73 and 74 displayed an AChE inhibitory profile of the same order of magnitude as DPZ (IC50 = 2.17 nM). The substituents R1 and R2 remarkably affected the AChE inhibitory profile: i) the unsubstituted compound (R1 = R2 = H) showed no inhibitory response; ii) almost all compounds with a methoxyl group at the 5-indolic position were inactive; and iii) R1 = F or H resulted in moderately to very active compounds depending on the R2 substituent, as in the case of compounds 73 and 74. The authors thus showed how the appropriate substitution pattern can make the difference between inactive and very active compounds.
On the other hand, compounds 72 and 75 showed high affinity towards SERT, similar to citalopram (IC50 = 3.0 nM), both of them carrying R1 = F (a small, electron-withdrawing atom) and R2 = 2-Br or 4-Br (a bulky atom). As expected, C5-fluorinated indole derivatives displayed nanomolar SERT affinity, this being an extensively reported property of fluorinated indoles. Interestingly, such fluorinated derivatives were also among the most active towards hAChE. None of the most active dual compounds proved toxic over the studied concentration range in either HEK-293 or SH-SY5Y cells. Molecular docking studies for both targets strongly supported the experimental results. Unfortunately, only one compound of the series showed significant β-amyloid self-aggregation inhibition (data not shown). For clarity, the activities of selected compounds endowed with promising multitarget capabilities are summarized in Table . AD is still an incurable disorder, mainly due to its multifactorial nature and complex etiology. The more effort research groups and pharmaceutical companies devote to understanding the underlying mechanisms and finding a disease-modifying treatment, the more AD-related targets are discovered. Therefore, there is little reason to expect a solution from the 'one drug-one target' paradigm. Within the last years, the so-called multitarget paradigm has emerged and is here to stay. In order to shed some light on the recent advances within this field, four biologically active scaffolds (curcumin-, resveratrol-, chromone-, and indole-based) have been selected, pointing to the simultaneous interaction with many AD-related targets/functions, with emphasis on cholinesterases (AChE and BuChE), MAOs (MAO-A/B), 5-HT4, SERT, β-amyloid self-aggregation, and radical scavenging activity. While many of them are well-known AD-related targets, others have not yet been so deeply explored.
We sincerely hope that this review will help other researchers worldwide to develop future improvements within this exciting field, since much more effort is needed to make this multitarget approach evolve into new drugs that can eventually be used in clinical trials and finally reach the market to overcome such a devastating disease.
First report of kyphoscoliosis in the narrow‐ridged finless porpoises (
4db23981-c4aa-475a-81dd-bc416384ca77
10921363
Forensic Medicine[mh]
INTRODUCTION Spinal deformities in mammals can be categorized as congenital, degenerative, traumatic or neuromuscular. Congenital kyphoscoliosis is a spinal deformity that originates during embryonic development (DeLynn et al., ). The aetiology of degenerative or traumatic scoliosis remains poorly understood, but it is generally believed to result from diseases (Kompanje, ) or physical trauma (Robinson, ). Although diverse causes of kyphoscoliosis in cetaceans have been reported, including congenital, degenerative and traumatic factors, these cases are predominantly associated with bottlenose dolphins (Berghan & Visser, ; Cobarrubia‐Russo et al., ; DeLynn et al., ; Morton, ; Robinson, ). Occasional reports have mentioned kyphoscoliosis or associated skeletal deformities in other Delphinidae species, such as killer whales (Berghan & Visser, ), Risso's dolphins (Nutman & Kirk, ), white‐beaked dolphins (Bertulli et al., ), long‐finned pilot whales (Sweeny et al., ), common dolphins and Hector's dolphins (Berghan & Visser, ). To date, there is a lack of published data on kyphoscoliosis in Phocoenidae , particularly in narrow‐ridged finless porpoise (NFP) ( Neophocaena asiaeorientalis ). NFPs are distributed across temperate and subtropical waters from the Taiwan Strait north into the Yellow Sea and into southern Japan (Jefferson & Wang, ). This species has a recorded maximum lifespan of 23 years (Kasuya, ). Although dietary variation was detected in different colonies of NFPs, crustaceans, fish and cephalopods were identified as their common prey organisms (Lu et al., ; Shirakihara et al., ). Kyphoscoliosis in cetaceans is typically reported through in situ visual assessments during photo‐identification field studies (Bertulli et al., ), and more subtle abnormalities are discovered during necroscopic examinations (DeLynn et al., ). 
As part of the pioneering stranded cetacean imaging programme in the Republic of Korea, NFPs stranded in Korean waters have been routinely assessed using post-mortem computed tomography (PMCT) to provide initial screening and supplementary evidence for conventional necropsy (Yuen et al., , ). This case report presents the first documented cases of congenital, degenerative and traumatic kyphoscoliosis in two NFPs using PMCT. CASE REPORT In November 2021, one calf and one adult NFP, hereinafter referred to as CRI-11873 and CRI-11874, respectively, were discovered stranded on the coasts of Busan (Nam-gu, Busan) and Pohang (Nam-gu, Pohang, Gyeongsangbuk-do), Republic of Korea, respectively. Both NFP carcasses were in good condition [code 2 per Smithsonian Institution criteria (Geraci & Lounsbury, )]. The age of the carcasses was estimated based on their total body length with reference to Lee et al. ( ). To minimize autolysis, the carcasses were promptly transported to the National Cetacean Research Institute (CRI), Republic of Korea, and stored at −20°C upon their initial report. Ethical approval was waived for this case report as it did not involve living or foetal subjects. PMCT scans were conducted at the College of Veterinary Medicine, Kyungpook National University, using a Toshiba 16-row multislice spiral CT scanner Alexion (Canon Medical Systems). The PMCT examinations were performed at 120 kVp, 150 mAs, with a section thickness of 0.8 mm. To encompass the entire body girth within the field of view (FOV), the FOV was set at 350 and 430 mm for CRI-11873 and CRI-11874, respectively. The PMCT data set was reconstructed and assessed using the Digital Imaging and Communications in Medicine (DICOM) viewer Horos, version 3.3.6. PMCT images were then reconstructed into three-dimensional (3D) volume-rendered images by accumulating the sequential transaxial data using a high-spatial-resolution algorithm built into the Horos DICOM viewer.
Bone density of the affected vertebrae was measured on PMCT multiplanar reconstructed sagittal images using Hounsfield unit (HU) quantification. All PMCT procedures were executed under the guidance of a registered radiographer (AHLY). Subsequent necroscopic examinations were carried out at a research centre in the Republic of Korea by board-certified veterinarians (SWK and KL). All the specimens from the necropsy, including vertebral column and parasite specimens, were lodged at the research institute. CRI-11873 (FEMALE; 104.5 CM; 17.4 KG) (FIGURE ) The postcranial vertebral formula of CRI-11873 was C7 + T13 + L14 + Ca29. Using PMCT 3D volume-rendered imaging, kyphoscoliosis was observed in lumbar vertebrae (L) 5–8 (Figure ). A unilateral unsegmented bar in L5-7 resulted in a spinal curvature of 119 degrees (Figure ). Intervertebral discs were found between the vertebral bodies but were degenerated and partially absent. Deformation of the transverse processes of L5-8, directed ventrally, compressed the longissimus dorsi muscles, creating noticeable muscular imbalances (Figure ). No signs of osteophytosis were observed. Bone density measurements were within the normal range throughout the vertebral column; in particular, the bone density, measured in HU, in L5-8 was 982, 1005, 1023 and 1076, respectively. In addition to kyphoscoliosis, PMCT axial images revealed a thickened forestomach wall measuring 1.73 cm, suggestive of gastritis. Amorphous contents in the forestomach extending into the oesophagus were observed (Figure ), whereas bowel contents were absent, indicating potential obstruction of the passage of stomach contents. Gross necropsy revealed that the stomach contents consisted of plastic bag pieces (Figure ). The oesophageal mucosa was edematous, and the stomach wall exhibited significant thickening with severe ulceration. Peritoneal fibrosis was observed, along with mild ankylenteron and an enlarged mesenteric lymph node, indicative of peritonitis.
CRI-11874 (MALE; 188.3 CM; 59.6 KG) (FIGURE ) The postcranial vertebral formula of CRI-11874 was C7 + T12 + L15 + Ca35. External examination and necropsy of CRI-11874 revealed extensive scarring and thinning of the skin in the lumbar region (Figure ). PMCT 3D volume-rendered images and multiplanar reconstructed coronal images disclosed kyphoscoliosis with a curvature of 59 degrees, with severe osteophytosis and osteoporosis in L11-15 and caudal vertebra (Ca) 1 (Figure ). Multiplanar reconstructed axial images indicated grade 1 osteophytosis in thoracic vertebrae (T) 6–9 and grade 3 osteophytosis in L11-15 and Ca1, according to the lesion categorization by Nathan ( ). Hyperosteosis was noted at the pedicle and lamina of L14-15, potentially causing spinal canal compression (Figure ). Intervertebral discs were present but observably compressed in L11-Ca1. Hyperosteosis was also observed at the chevrons of Ca1-3 (Figure ). The bone density, measured in HU, in T6-9 was 576, 444, 750 and 429, respectively; in L11-15 it was 397, 693, 469, 373 and 441, respectively; and in Ca1 it was 382. PMCT examination revealed diffuse ground glass opacity patterns in both lungs, along with hyperattenuating focal lesions, suggesting pulmonary oedema and signs of infection. Gross necropsy confirmed the presence of pulmonary emphysema and gas embolism, particularly severe in the right lung. The presence of pulmonary oedema with haemorrhagic foamy fluid was consistent with findings associated with 'wet drowning' in cetaceans (IJsseldijk et al., ) (Figure ). Parasites found in both lungs were identified as Halocercus sunameri (Nematoda: Pseudaliidae) based on morphological analyses (Yamaguti, ). DISCUSSION To the best of the authors' knowledge, this study represents the first-ever assessment of kyphoscoliosis in cetaceans using PMCT.
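To make the bone-density contrast between the two animals explicit, the HU values reported above can be averaged per region. A minimal sketch; the region labels and the per-region comparison are ours, while the HU values are taken directly from the case descriptions:

```python
# HU values per vertebral region, as reported in the case descriptions.
hu_values = {
    "CRI-11873 L5-8 (calf)": [982, 1005, 1023, 1076],
    "CRI-11874 T6-9 (adult)": [576, 444, 750, 429],
    "CRI-11874 L11-15 (adult)": [397, 693, 469, 373, 441],
    "CRI-11874 Ca1 (adult)": [382],
}

def mean_hu(values):
    """Arithmetic mean of Hounsfield unit measurements."""
    return sum(values) / len(values)

for region, values in hu_values.items():
    # The calf's affected vertebrae remain near ~1000 HU, while the
    # adult's osteoporotic segments fall well below that.
    print(f"{region}: mean {mean_hu(values):.1f} HU")
```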
PMCT, due to its high spatial resolution, contrast resolution and signal‐to‐noise ratio, offers superior diagnostic accuracy for screening osteological anomalies like kyphoscoliosis. It eliminates the need for skeletal excarnation during conventional necroscopic assessment and provides immediate information for evaluating scoliotic spines in their natural position. In CRI‐11873, PMCT and necropsy identified unsegmented bars with hemivertebrae in L5‐7 (Figure ). The occurrence of kyphoscoliosis with unsegmented bar in marine mammals has received limited research attention. To date, DeLynn et al. ( ) have reported the only case of congenital kyphoscoliosis in a bottlenose dolphin, which has demonstrated unsegmented ribs and cervical vertebrae. Despite locational differences in the affected sites, both specimens from DeLynn et al. ( ) and CRI‐11873 have shown characteristic patterns of congenital skeletal deformity, including defects in formation and segmentation. The unsegmented bar is one of the representative anomalies caused by impaired embryonic development in the early stage of pregnancy (Giampietro et al., ). It is caused by unilateral disruption of the segmentation of the primary vertebral mesenchyme, resulting in the connection of two or more adjacent vertebrae. As the opposite side of the vertebrae, on which the growth zones and part of the intervertebral disc are preserved, develops normally, abnormal spinal curvature occurs, resulting in congenital spinal deformity. The presence of a unilateral unsegmented bar with characteristic conformation led to the classification of the kyphoscoliosis as congenital in CRI‐11873. Infanticidal events could be another cause of kyphoscoliosis in young individuals. This behaviour is considered an expression of conflict in cetaceans to protect prey resources and mating access (López et al., ). Robinson ( ) reported a case of potential infanticide‐inflicted kyphoscoliosis in a calf bottlenose dolphin. 
The calf was found stranded and dead approximately half a year after an attempted infanticidal event. Acute kyphoscoliosis was revealed during gross necropsy with observed remodelling of the lateral spinal processes and was believed to be acquired from trauma as a neonate. However, during PMCT scanning of CRI-11873, no bone remodelling or realignment, essential processes in the repair of damaged bone, were observed. No post-traumatic scars were found during necropsy. Given the presence of a unilateral unsegmented bar with hemivertebrae in CRI-11873, kyphoscoliosis as a result of an infanticidal event could be ruled out in this case. Although kyphoscoliosis may not be fatal in cetaceans, previous reports have documented cetaceans with kyphoscoliosis living up to 39 years (Galatius et al., ). However, it may impede effective movement, a fundamental ability for interacting with conspecifics and hunting prey, potentially reducing survival and reproductive success. In CRI-11873, the longissimus dorsi muscles were noticeably compressed by the transverse processes of kyphoscoliotic L5-8 (Figure ), potentially limiting the vertical oscillation of the caudal peduncle and flukes (Strickler et al., ; Viglino et al., ). Together with unbalanced and compressed epaxial muscles and reduced joint motion in the affected vertebrae, limited body movements due to spinal deformity could have affected this individual's ability to hunt active prey. The NFPs inhabit the east Asian coast, a geographic hotspot of anthropogenic threats (Walther et al., ; Xiong et al., ; Yuen et al., ). Rapidly growing anthropogenic activities are influencing NFPs in this area. The human-induced pressure of plastic litter has become a major threat to the marine environment (Villarrubia-Gómez et al., ). In recent years, scientific attention has turned to the discovery of plastics in cetaceans (Besseling et al., ; Curran et al., ; Lusher et al., ; Xiong et al., ).
Several studies have reported an increasing trend in cetaceans interacting with (Rodríguez et al., ), or even ingesting, plastic bags (Đuras et al., ; Sá et al., ). In the present report, our team also discovered that CRI-11873 may have mistakenly ingested the plastic bags. It is believed that the presence of plastics in cetaceans' digestive tracts is owing to misinterpretation as natural prey or accidental ingestion (Đuras et al., ; Sá et al., ). Nonetheless, ingested plastics can lead to gastrointestinal disorders such as stomach impaction and ulcerative gastritis, ultimately contributing to the animals' death. In CRI-11874, osteophytosis, osteoporosis and hyperosteosis were observed in thoracic, lumbar and caudal vertebrae through PMCT and necropsy (Figure ). Given the adult life stage of this individual (Lee et al., ) and the existence of osteoporosis, the kyphoscoliosis was possibly attributable to degeneration. In the early stages of spinal degeneration, the formation of osteophytes in vertebrae may provide stabilization for mildly wedged segments, counteracting spinal instability. As degeneration progresses, asymmetric osteophyte growth can distort spinal stability, increasing the likelihood of degenerative scoliosis and limiting bending movements (Zhu et al., ). Additionally, antemortem traumatic events were suggested as a contributing factor, as evidenced by the broad scarring and thinning of lumbar skin observed during necropsy. Altogether, while degenerative changes in the skeletal system, including osteoporosis, weakened its structural integrity, traumatic events occurred and possibly contributed to the kyphoscoliosis. Nonetheless, the cause of death in this individual was presumed to be aspiration pneumonitis secondary to drowning (Reijnen et al., ). The reduced spinal range of motion due to kyphoscoliosis potentially contributed to its demise.
NFPs are known to be hard to detect due to their small body size, small group sizes, tendency to avoid boats and unobtrusive surface behaviours. The diagnosis of kyphoscoliosis, as well as understanding the associated impact on survival and socio‐ecological behaviour, can be even more challenging and, therefore, remains poorly understood in NFPs. The long‐term survival of cetaceans with kyphoscoliosis may depend on various factors, including, but not limited to, the age and level of maternal dependence of the affected calf, the nature of the kyphoscoliosis (congenital, acquired, or degenerative), the severity of the condition and any resulting complications (Bertulli et al., ; DeLynn et al., ). Nonetheless, the most influential factor that affects a kyphoscoliotic individual's survival remains the degree of anomaly that impedes their mobility. In Indo‐Pacific and Atlantic humpback dolphins, Weir and Wang ( ) reported that impaired mobility not only affects an individual's ability to capture prey and avoid predators but also makes them more vulnerable to anthropogenic activities, such as vessel collisions, due to reduced swimming speed and longer surfacing time. In white‐beaked dolphins, individuals with kyphoscoliosis may be more prone to being caught in gillnets, longline fisheries and trawl nets (Bertulli et al., ). Interactions of kyphoscoliotic cetaceans with conspecifics have also been occasionally reported, although the available data is currently limited to bottlenose dolphins. Haskins and Robinson ( ) reported that a female bottlenose dolphin with lordosis gave birth to two calves over a span of seven years, as observed through photo‐identification. Additionally, Wilson and Krause ( ) documented a case in which a bottlenose dolphin with kyphoscoliosis repeatedly interacted, socialized, and milled with a group of sperm whales. 
These findings may suggest that, despite impaired mobility, the influences of kyphoscoliosis on cetaceans' socio-ecological behaviour may be less pronounced than expected. In human medicine, the Cobb angle assessment (Cobb, ) is the gold standard for evaluating kyphoscoliosis, measuring the angulation of the most tilted vertebrae in coronal plain radiographs or CT images. Cobb angle assessment provides important information supporting justified clinical decisions regarding progression monitoring, as well as physiotherapeutic and surgical interventions. Although standardized measurements for kyphoscoliosis assessment in cetaceans are currently lacking in the literature, our group applied an analogous method of Cobb angle assessment in sagittal CT images to measure spinal curvature in both NFPs. The Cobb angle can be measured after finding the most-tilted vertebral body above and below the main curvature apex of each NFP's vertebral column. The angle formed by the extension lines from each vertebral body, as marked in the figures, was measured. The results aligned with the diagnoses made by the board-certified veterinarians (SWK and KL). Future research with a larger sample size is warranted to establish standardized radiological measurements for diagnosing kyphoscoliosis in cetaceans. For assessing the degree and aetiology of spinal deformities in cetaceans, the use of PMCT is essential. When necropsy is used as the sole methodology for determining spinal deformities, examination of vertebrae can only occur after complete defleshing (Figure ), resulting in drawbacks such as time consumption, loss of in situ location information of the bones and potential loss of small bone fragments. PMCT enables quantitative analyses, including analogous Cobb angle assessment and bone density analysis, and allows CT data to be stored in a database, such as a picture archiving and communication system, and retrieved at will regardless of the passage of time.
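The analogous Cobb angle measurement described above can be sketched computationally: given the orientations of the extension lines of the most-tilted vertebral bodies above and below the curve apex, the Cobb angle is the angle between them. This is a minimal illustration, not the software used in this study; the endplate orientations are assumed to be supplied as 2D direction vectors digitized from the CT images:

```python
import math

def cobb_angle(upper_endplate, lower_endplate):
    """Angle (degrees) between the extension lines of two endplates.

    Each endplate is a 2D direction vector (dx, dy), oriented
    consistently (e.g. ventral to dorsal) so that severe curves
    can exceed 90 degrees, as in case CRI-11873 (119 degrees).
    """
    a_upper = math.degrees(math.atan2(upper_endplate[1], upper_endplate[0]))
    a_lower = math.degrees(math.atan2(lower_endplate[1], lower_endplate[0]))
    angle = abs(a_upper - a_lower)
    # Map the difference into the [0, 180) range.
    return 360.0 - angle if angle > 180.0 else angle

# A horizontal endplate against one tilted 45 degrees gives a 45-degree curve.
print(cobb_angle((1.0, 0.0), (1.0, 1.0)))
```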
This case report not only presents PMCT data on kyphoscoliosis in NFP for the first time but also describes and compares two cases of spinal deformities caused by congenital and acquired factors. It serves as a valuable reference for future studies investigating the aetiology of deformations based on PMCT. CONCLUSION This case report is the first documented instance of kyphoscoliosis in Phocoenidae , specifically in NFP, utilizing PMCT to provide additional information alongside gross necropsy. The causes of kyphoscoliosis in the calf and adult individuals were classified as congenital and degenerative, respectively. This report enhances our understanding of potential causes of kyphoscoliosis in NFP based on PMCT data analysis. Conceptualization : Adams Hei Long Yuen and Sang Wha Kim. Data curation : Sang Wha Kim, Kyunglee Lee, Adams Hei Long Yuen, Min Ju Kim, Cherry Tsz Ching Poon and Young Min Lee. Investigation : Sang Wha Kim, Kyunglee Lee, Sung Bin Lee, Won Joon Jung, Young Min Lee, Su Jin Jo, Mae Hyun Hwang and Jae Hong Park. Writing – original draft : Adams Hei Long Yuen and Sang Wha Kim. Writing – review and editing : Sang Wha Kim, Adams Hei Long Yuen and Sib Sankar Giri. Resources : Sang Wha Kim and Kyunglee Lee. Supervision : Seung Hyeok Seok and Se Chang Park. The authors declare no conflicts of interest. The authors confirm that the ethical policies of the journal, as noted on the journal's author guidelines page, have been adhered to and the appropriate ethical review committee approval has been received. The US National Research Council's guidelines for the Care and Use of Laboratory Animals were followed. The peer review history for this article is available at https://www.webofscience.com/api/gateway/wos/peer-review/10.1002/vms3.1386 .
RT-LAMP-Based Molecular Diagnostic Set-Up for Rapid Hepatitis C Virus Testing
According to the World Health Organization (WHO), more than 354 million people worldwide are infected with the Hepatitis C virus (HCV), of which 70 million are chronically infected. Each year, an estimated 1 million people die from this disease . According to the WHO, in 2019, 58 million people were reported with HCV infection worldwide . During the same period in the United States, 4136 cases of acute infection were reported; however, the estimated number of cases was 57,500 . The Centers for Disease Control and Prevention (CDC) estimates that, in the United States, about 50% of infected individuals are unaware of their infection . HCV is primarily transmitted by parenteral drug administration, blood/plasma transfusion, the reuse of medical equipment, sexual practices resulting in blood transfer, and needle sharing while injecting drugs . Patients are typically diagnosed with HCV after they exhibit symptoms arising from chronic infection; however, patients are often asymptomatic or exhibit only mild symptoms . Currently, a combination of interferon-free and direct-acting antiviral (DAA) therapy has increased the cure rate to over 95% with fewer side effects . Despite current therapies, only 20% of infected individuals are diagnosed with the disease, and only 7% have received treatment within developed countries . In developing or low-income countries, where 78–80% of worldwide cases reside, less than 1% receive a diagnosis and treatment . The WHO aims to reduce the HCV infection rate by 90% and the mortality rate by 65% by 2030, compared with a 2015 baseline . This aim can be achieved with increased education, testing, and treatment efforts. Early detection of the viral infection is key to a rapid and successful therapeutic outcome . According to the CDC guidelines, HCV testing should start with an antibody assay, followed by a nucleic acid test (NAT) for RNA detection to confirm the initial result .
Reverse transcriptase loop-mediated isothermal amplification (RT-LAMP), transcription-mediated amplification (TMA), and reverse transcriptase–quantitative polymerase chain reaction (RT-qPCR) are the preferred NATs . While immunoassays are cost-effective and easier to conduct than NATs, they are less sensitive and unable to detect early infection . By contrast, a NAT-based HCV RNA test detects the infection within 1–2 weeks . Although RT-qPCR remains the gold standard for HCV detection from patient samples, it is time-consuming, requires trained personnel and expensive equipment, and has a long turn-around time . Additionally, RNA needs to be purified before RT-qPCR amplification, as impurities and possible sample degradation might increase the rate of false-negative results. Thus, reliable early detection of the infection is limited for people living in low-to-middle-income and underdeveloped areas with scarce access to well-equipped medical facilities. In recent years, HCV point-of-care (POC) testing has improved the screening and management of the disease . However, these tests are expensive, less sensitive, or have a long turn-around time for the results. The most widely used POC test is OraQuick, an FDA-approved rapid antibody test for HCV, but it is fairly slow (20 to 40 min for results) and expensive (USD 500 for 25 tests) . In addition, a trained phlebotomist is required for blood collection . Other ELISA-based POC assays include the Elecsys Anti-HCV II assay, the InTec assay, and Well oral anti-HCV. These assays provide inconsistent results since they depend on the level of anti-HCV antibodies in the patient . NAT-based POC HCV tests include the Roche COBAS TaqMan HCV test, the Hologic Aptima HCV Quant Dx assay, the Cepheid GeneXpert (not FDA-approved in the US), the Veris (Beckman), the RealTime HCV (Abbott), and the Genedrive (WHO pre-qualified).
However, expensive equipment is needed to process the samples, and it can take anywhere from 90 min to 5–6 h to obtain the results . Overall, the available HCV RNA tests are not well-suited for POC settings and community health centers, and a low-cost, rapid NAT-based HCV test has yet to be developed. The nucleic acid isolation and purification for RT-PCR require several manual steps that make this assay impractical for POC settings. An alternative to RT-PCR is RT-LAMP, which provides an easy, scalable, low-cost, and accurate detection method in which amplification is achieved at a constant temperature with 4–6 primers . RT-LAMP eliminates the need for sophisticated equipment (i.e., a thermocycler), and the viral nucleic acid can be amplified with minimal purification. RT-LAMP-based POC devices have been developed previously; however, these devices require complicated chip assembly, manual processing steps, air-drying steps, and a smartphone to interpret the result . illustrates the comparison of the existing HCV detection methods in terms of the target used, detection time, and limit of detection (LOD), along with their limitations. Herein, we report an RT-LAMP-based, fully automated, sample-in–answer-out molecular diagnostic set-up for rapid HCV detection. Our device consists of a compact microfluidic chip that enables nucleic acid isolation, purification, amplification, and colorimetric detection of the amplified product. We utilize SYBR Green I dye, which changes from orange to green in the presence of double-stranded DNA, enabling easy analysis without fluorescent imaging. To test our chip, we utilized plasma spiked with HCV. The RT-LAMP-based microfluidic chip exhibited a LOD of 500 virion copies/mL within 45 min. The device is cost-effective, user-friendly, and portable, and provides a visual confirmation while utilizing a small amount of sample and few reagents in a time-efficient manner.
This device is, therefore, ideal for application at POC and in underdeveloped countries. summarizes the experimental workflow and the schematics of the microfluidic chip utilized in the assays.

2.1. HCV RT-LAMP Primer Design and Testing

The 5’ untranslated region (5’ UTR) of the JFH-1 isolate RNA genomic sequence (GenBank: AB047639.1) was used as a target. The RT-LAMP primers were designed with the freely available online software Primer Explorer V5 from Eiken Chemical Co., Ltd. (Tokyo, Japan). A BLAST search of the primers was carried out against the GenBank nucleotide database. The conserved target DNA sequence was synthesized (Integrated DNA Technologies) for the initial sensitivity and specificity testing of the designed primer set. The LavaLAMP RNA Master Mix kit from Lucigen (Middleton, WI, USA) was used for all the RT-LAMP reactions (both benchtop and on-chip testing). The LAMP reaction set-up included Master Mix (12.5 μL), green fluorescent dye (1 μL), HCV RT-LAMP primers (2.5 μL), template (HCV, ZIKA, SARS-CoV-2, HIV, or RNase/DNase-free water; 1 μL), and RNase/DNase-free water (8 μL). The AriaMx Real-Time PCR system (Agilent) was utilized to maintain 70 °C for 40 min for the isothermal amplification. The LAMP-amplified products were further confirmed with 1.5% agarose gel electrophoresis, run at 90 V for 90 min.

2.2. Analysis of the HCV-Spiked Samples by RT-LAMP

The HCV cultured samples were obtained from the University of Miami. The HCV strain was propagated in the Huh 7.5.1 human hepatoma cell line. The virus titer was quantified by qPCR. Human blood samples from deidentified healthy individuals were obtained from Continental Services Group, Inc. (Fort Lauderdale, FL, USA). Under the institutional review board (IRB)-approved protocol, the plasma was separated from the blood by centrifugation for 15 min at 2000× g . The plasma was spiked with HCV at concentrations varying from 2.8 × 10⁷ to 28 virion copies/mL.
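The 10-fold spiking series just described (2.8 × 10⁷ down to 28 virion copies/mL) is straightforward to generate programmatically; a minimal sketch:

```python
# Ten-fold serial dilution series used for the spiking experiments
# (2.8 × 10^7 down to 28 virion copies/mL, as described in the text).
def dilution_series(start_copies_per_ml: float, fold: int = 10, steps: int = 7):
    """Return [start, start/fold, ..., start/fold**(steps - 1)]."""
    return [start_copies_per_ml / fold**i for i in range(steps)]

series = dilution_series(2.8e7)  # 2.8e7, 2.8e6, ..., down to 28.0
```

Seven 10-fold steps span the six orders of magnitude between the highest and lowest spiked concentrations.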
The viral RNA was isolated from the plasma samples utilizing the Dynabeads SILANE Viral Nucleic Acid (NA) kit (Invitrogen). A total of 100 μL of plasma sample was used for the RNA isolation, following the manufacturer’s instructions. The RNA was then eluted in 50 μL of elution buffer, and the RT-LAMP assay was carried out in a thermocycler, as described above. Fluorescence data were collected for the endpoint analysis. In addition, the amplification products of the LAMP reaction were analyzed by 1.5% gel electrophoresis. A total of 1 μL of SYBR Green I nucleic acid dye (Invitrogen) was added to the reaction solution after the isothermal amplification for the colorimetric visualization.

2.3. Microfluidic Chip Design

Microfluidic chips were fabricated from three-layered poly(methyl methacrylate) (PMMA) sheets. The layers of the chip were attached together using double-sided adhesive (DSA) tape. show the thickness and dimensions of each layer and the chambers of the microfluidic chip. a,b show the complete chip (top and vertical views). The chip layout was designed in AutoCAD, and a CO₂ laser cutter was utilized to cut the chambers, as previously reported . Each microfluidic chip consists of 4 independent diamond-shaped aqueous chambers: one sample inlet chamber, two washing buffer chambers, and one reaction chamber, separated by three elliptical valving chambers containing mineral oil (viscosity: 15 cSt). An unconnected oval-shaped sensor chamber adjacent to the reaction chamber is imprinted for sensor attachment. The assembled microfluidic chip was subjected to UV light for 30 min, after which all the inlets were sealed with Scotch tape; the tape was removed before filling the reagents.

2.4. Diagnostic Platform Set-Up

The automated diagnostic platform designed and optimized by our lab is depicted in c . The diagnostic platform moves the magnetic beads automatically.
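The platform's automated bead handling is scripted as G-code generated from Python. A hypothetical sketch of such a program is shown below: the magnet carriage is moved under each chamber in turn and then dwells for the incubation time. The chamber X positions, feed rate, and dwell times are invented placeholders, not values from the paper.

```python
# Hypothetical G-code generator for the magnet carriage: move under each
# chamber, then dwell (G4) for the incubation time. Positions (mm) and
# dwell times (s) are illustrative placeholders only.
CHAMBERS = [("inlet", 0.0, 300), ("wash I", 15.0, 60),
            ("wash II", 30.0, 60), ("reaction", 45.0, 600)]

def bead_program(feed_mm_min: int = 200) -> str:
    lines = ["G21 ; units: mm", "G90 ; absolute positioning"]
    for name, x_mm, dwell_s in CHAMBERS:
        lines.append(f"G1 X{x_mm:.1f} F{feed_mm_min} ; move magnet to {name}")
        lines.append(f"G4 S{dwell_s} ; incubate {dwell_s} s in {name}")
    return "\n".join(lines)

program = bead_program()
```

Emitting plain-text G-code keeps the motion schedule editable without reflashing the Arduino firmware that executes it.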
The beads’ movement is directed by two small magnets (5 mm-diameter neodymium) fitted in a 3-D-printed inclusion. This 3-D-printed inclusion can move bidirectionally, and its movement is driven by a stepper motor. The stepper motor is controlled by an Arduino printed circuit board (PCB). The movement of the magnetic beads from one chamber to another and the incubation times were controlled by a G-code scripted in Python. We utilized an Arduino temperature controller to maintain the 70 °C temperature required for on-chip amplification . The sensor is placed in the sensor chamber.

2.5. On-Chip Detection of HCV from Human Plasma

The microfluidic chip was loaded with reagents and buffers, as described in . The inlet chamber (a in ) contained the lysis/binding buffer + proteinase K + magnetic beads + isopropanol. Washing buffers I and II were added to the washing chambers (b and c in ), and their viscosity was adjusted by adding RNase/DNase-free water in a 1:1 ratio. The reaction chamber and sensor chamber (d and e in ) contained LavaLAMP RNA Master Mix, HCV-specific primers, and elution buffer. The MgSO₄ concentration of the LavaLAMP RNA Master Mix was increased from 5 mM to 9.8 mM for on-chip testing. A total of 100 μL of plasma sample was loaded in the inlet chamber of the pre-filled chip, which was then placed on the diagnostic set-up. Next, the heater and magnetic actuation were started concurrently, and the automated set-up was incubated for 45 min. SYBR Green I dye was added to the reaction chamber after the isothermal incubation and mixed using the oscillatory movement of the magnetic beads to detect the reaction amplification products. All the on-chip experiments were repeated at least three times.

2.6. Results and Discussion

HCV comprises seven distinct genotypes, which can differ by as much as 30% at the nucleotide level . Though sequence variations are evenly distributed throughout the genome, the 5’ UTR is conserved .
The HCV RT-LAMP primers we designed map to the highly conserved 5’ UTR region. The designed primer set was BLASTed against all the genomes present in the National Center for Biotechnology Information (NCBI) database. The results showed high divergence from other genomes and identity to the HCV genome 5’ UTR region. We performed an initial RT-LAMP assay to test the sensitivity and specificity of the primer set we designed ( A), utilizing a synthetic target HCV DNA sequence at different target concentrations (10⁹ to 0 copies per reaction). The lowest LOD observed was 5 HCV copies/reaction within 40 min. A no-template control (NTC) confirmed the absence of primer dimer formation. The R² value (0.967) of the time to amplification relative to the concentration of the target substrate indicates that the amplification time is inversely proportional to the target concentration in the reaction. These results were confirmed by analyzing the LAMP products by agarose gel electrophoresis. Amplification products can be observed in the reactions that amplify the target HCV sequences, forming long sharp bands ( B). The HCV primer set was also tested against the cDNA obtained from the RNA genomes of ZIKA virus, SARS-CoV-2, and HIV ( and ). The HCV primer sets did not amplify viral sequences from these viruses, confirming the specificity of the RT-LAMP primers. As a proof-of-principle, human plasma was spiked with a cultured HCV strain to replicate clinical samples. HCV preparations were added to the plasma samples in 10-fold dilutions, obtaining final concentrations ranging between 2.8 × 10⁷ and 28 virions/mL. The viral RNA was extracted from the spiked plasma samples utilizing magnetic Dynabeads. The RT-LAMP of the samples yielded a LOD of 28 virions/mL within 60 min ( a). From this assay, a fitted linear trendline was observed with R² = 0.865, indicating the high efficiency of the RNA extraction/RT-LAMP analysis with limited amounts of virus.
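The R² values quoted above come from fitting time-to-amplification against the logarithm of the target concentration. The sketch below shows that fit with ordinary least squares; the data points are invented for demonstration and are not the paper's measurements.

```python
import math

# Illustrative least-squares fit of time-to-amplification against
# log10(target copies). Data are made up for demonstration only.
copies = [1e1, 1e3, 1e5, 1e7, 1e9]
t_amp_min = [38.0, 30.5, 23.2, 16.1, 8.9]  # minutes (illustrative)

x = [math.log10(c) for c in copies]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(t_amp_min) / n
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, t_amp_min))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x
ss_res = sum((yi - (slope * xi + intercept)) ** 2
             for xi, yi in zip(x, t_amp_min))
ss_tot = sum((yi - mean_y) ** 2 for yi in t_amp_min)
r2 = 1 - ss_res / ss_tot  # time falls as concentration rises, so slope < 0
```

A negative slope is the expected signature: more starting template means earlier amplification.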
These results validate the efficiency of the magnetic beads’ RNA extraction and show that the RNA maintains its integrity throughout the purification steps. represents the gel results. A colorimetric test was performed utilizing SYBR Green I dye, which is initially orange but turns green in the presence of DNA, to enable a fast and simple readout . The SYBR Green I was added to the reaction tubes after the isothermal incubation and confirmed the LOD of the assay: the samples containing 28 virion copies/mL showed an attenuated shift to the green coloration compared with the control reactions, which retained the original orange color ( b). In recent years, molecular detection microfluidic platforms have transformed the healthcare landscape, as they offer the rapid detection of viruses and other pathogens . Herein, we have designed a microfluidic platform that integrates, on a single platform, steps that are usually performed by trained personnel in sophisticated lab settings. The microchip is designed with chambers of distinct shapes so that the solutions are retained during the entire execution process. The diamond-shaped chambers contain different buffers performing different tasks for optimal RNA purification. The inlet and reaction chambers are dual-purpose chambers, making the plasma processing steps less complex. These diamond chambers are separated by elliptical chambers containing mineral oil. The oil reduces the interfacial energy barrier, facilitating the smooth passage of the beads from one buffer to another. The geometry of the chambers is designed such that the magnetic beads align with the magnets situated on the diagnostic platform. This keeps the beads within an accessible magnetic field for flexible RNA isolation processing. Using a 750 µm PMMA sheet at the bottom also contributed to maximizing the magnetic force, as the distance between the magnetic beads and the magnet was minimized.
The magnetic beads used for the isolation process are composed of cross-linked polystyrene and magnetic material. Their silica-like surface chemistry offers excellent binding of RNA, and the magnetic property of the beads aids fast mobility under the magnetic field. Furthermore, given the beads’ low sedimentation rate, no adhesion was observed at the bottom. The incubation of the beads in each chamber is facilitated by oscillatory movement directed by a stepper motor on the platform, which is controlled by a G-code. represents the beads’ incubation time in each chamber, directed by the automated magnets located on the platform. For the on-chip isothermal temperature, an automated Arduino-based temperature controller is used, which consists of a K-type thermocouple sensor and an ultrathin nano-carbon flexible heater. The K-type thermocouple sensor reads the real-time temperature, and the heater heats the reagents in the sensor chamber and reaction chamber. To maintain the set isothermal temperature, the sensor reads the temperature every 2 s: once the temperature is higher than the set temperature, the controller turns the heater “off”, and if the temperature is lower than the set temperature, it turns the heater “on”. To avoid direct contact between the sensor and the reaction chamber solution, the sensor was inserted into the sensor chamber. The sensor chamber contains the same reagents as the reaction chamber, and one surface heater was attached on top of both chambers. RNA degradation could lead to false-negative results in an RNA-based detection device. To address this potential problem, we exposed the RT-LAMP microfluidic chip to UV for 30 min, which eliminated traces of RNases from solid surfaces . On the microfluidic chip, the key role of the inlet chamber is to lyse the viral particles and to allow binding of the viral RNA to the magnetic beads in the presence of the lysis and binding buffer.
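The on/off heater logic described above (a sensor read every 2 s, with the heater toggled around the 70 °C setpoint) is a classic bang-bang controller. A minimal simulation is sketched below; the heating and cooling constants are invented for illustration, not measured device parameters.

```python
# Minimal simulation of the on/off (bang-bang) heater logic described in
# the text. One tick = one 2 s sensor read. Thermal constants are assumed.
SETPOINT_C = 70.0
AMBIENT_C = 25.0

def step(temp_c: float, heater_on: bool) -> float:
    gain = 1.2 if heater_on else 0.0       # °C added per tick (assumed)
    loss = 0.02 * (temp_c - AMBIENT_C)     # Newtonian cooling (assumed)
    return temp_c + gain - loss

def simulate(start_c: float = 25.0, ticks: int = 300):
    temp, trace = start_c, []
    for _ in range(ticks):
        heater_on = temp < SETPOINT_C      # bang-bang decision each read
        temp = step(temp, heater_on)
        trace.append(temp)
    return trace

trace = simulate()  # warms up, then oscillates in a narrow band near 70 °C
```

The simulation shows the expected behaviour of such a controller: a warm-up ramp followed by a small sawtooth oscillation about the setpoint, which is adequate for an isothermal amplification that tolerates a degree or so of ripple.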
Once the RNA adheres to the surface of the magnetic beads, they are directed toward the reaction chamber under the influence of a magnetic field. The two washing chambers contain two different washing buffers to ensure that reagents and other potential inhibitory compounds are not transferred from the inlet chamber to the reaction chamber. Once the beads reach the reaction chamber, which is already maintained at 70 °C with the help of the sensor chamber and surface heater, the beads elute the RNA and move back to the washing chamber. The RT-LAMP HCV primers amplify the HCV RNA, if present, and change the color of the chamber to green; if HCV RNA is not present initially, the color of the dye remains unchanged. For the on-chip assay, plasma samples were spiked with different HCV concentrations (from 2.8 × 10⁶ to 282 virions/mL). A no-template control assay was carried out with non-spiked plasma samples. Since PMMA adsorbs polar molecules, such as DNA and polymerase enzymes, which inhibits amplification , the addition of ions such as Mg²⁺, Na⁺, and K⁺ enhances the enzyme activity and provides DNA stability . To enhance the on-chip amplification efficiency of the target, the MgSO₄ concentration of the Master Mix was increased from 5 mM to 9.8 mM in the reaction chamber. a shows the color comparison of the reaction chambers of the microfluidic chip subjected to a non-spiked plasma sample (orange reaction chamber) and an HCV-spiked sample (green reaction chamber) after 45 min. b demonstrates the on-chip sensitivity of the set-up: 500 virions/mL within 45 min. At the same time, no color change was observed in the negative control (non-spiked plasma) sample or at 280 virions/mL. These results validate that the RT-LAMP-based microfluidic chip we developed can efficiently isolate and amplify HCV RNA. The molecular diagnostic set-up described here provides qualitative colorimetric results from plasma samples spiked with HCV.
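The MgSO₄ adjustment mentioned above (raising the reaction from 5 mM to 9.8 mM) is a simple mass-balance calculation. The sketch below assumes a 100 mM stock and a 25 µL reaction volume as placeholders; the paper does not specify the stock used.

```python
# Mass balance c_now*V + c_stock*v = c_target*(V + v) for topping up MgSO4
# from 5 mM to 9.8 mM. Stock concentration (100 mM) and reaction volume
# (25 µL) are assumed placeholders, not values from the paper.
def mgso4_topup_ul(rxn_ul: float = 25.0, c_now_mm: float = 5.0,
                   c_target_mm: float = 9.8,
                   c_stock_mm: float = 100.0) -> float:
    """Volume of stock (µL) to add so the final mix reaches c_target_mm."""
    vol = rxn_ul * (c_target_mm - c_now_mm) / (c_stock_mm - c_target_mm)
    return round(vol, 2)
```

With these assumed numbers, a few microlitres of stock suffice, and a more concentrated stock proportionally reduces the added volume (and the dilution of the other reagents).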
The data presented demonstrate that this compact microfluidic device could be utilized as a “sample-in–answer-out” system for HCV screening in low-to-middle-income areas. POC microfluidic platforms offer results comparable to gold-standard methods using a small quantity of reagents, and the sample can be tested outside the laboratory setting. A commercial device derived from the one described here, if utilized broadly, could play an essential role in managing HCV infection worldwide, thereby controlling the spread and enabling the timely treatment and monitoring of the disease. The 3-D-printed platform used in this research consists of a microprocessor controlled by an Arduino, a stepper motor, and a circuit for the power supply. The molecular diagnostic platform is fully programmed and reusable for repetitive testing. It can also be operated using batteries to direct the movement of the magnetic beads present in the chambers of the disposable microfluidic chip. The operational cost of the diagnostic set-up would be minimal, as it uses relatively inexpensive and sustainable equipment (roughly USD 50) for sample processing and disease detection. This microfluidic chip also offers shorter times for a reliable diagnosis of HCV infection. The disposable microfluidic chip we describe is affordable (roughly USD 2) and can be utilized in POC settings, as the assay is performed in an automated fashion. With this molecular diagnostic set-up, the user can run multiple tests, since attention is required only initially and at the end for the colorimetric analysis. contains the full list of materials and costs for the molecular diagnostic microfluidic chip set-up and fabrication. In this study, we developed a low-cost RNA-based POC molecular diagnostic set-up for HCV with a colorimetric result readout. The developed diagnostic method will enable timely HCV screening and prevention in high-risk populations.
It is an accurate method that can be implemented in low-income areas, making it accessible to people. The 5’ untranslated region (5’ UTR) of the JFH-1 isolate RNA genomic sequence (GenBank: AB047639.1) [M1] was used as a target. The RT-LAMP primers were designed with the freely available online software Primer Explorer V5 from Eiken Chemical Co. Ltd (Tokyo, Japan). A BLAST search was carried out in the GenBank nucleotide of the primers. The conserved target DNA sequence was synthesized (Integrated DNA Technologies) for the initial sensitivity and specificity testing of the designed primer set. The LavaLAMP RNA Master Mix kit from Lucigen (Middleton, WI, USA) was used for all the RT-LAMP reactions (both benchtop and on-chip testing). The LAMP reactions set-up included Master Mix (12.5 μL), green fluorescent dye (1 μL), HCV RT-LAMP primers (2.5 μL), HCV, ZIKA, SARS-CoV-2, HIV or RNase/DNase-free water (1 μL), and RNase/DNase-free water (8 μL). The AriaMx Real-Time PCR system (Agilent) was utilized to maintain 70 °C for 40 min for the isothermal amplification. The LAMP-amplified products were further confirmed with 1.5% agarose gel electrophoresis. The gel was subjected to 90 volts for 90 min. The HCV cultured samples were obtained from the University of Miami. The HCV strain was propagated in Huh 7.5.1 human hepatoma cell line. The virus titer was quantified by qPCR. Human blood samples from deidentified healthy individuals were obtained from Continental Services Group, Inc. (Fort Lauderdale, FL, USA). Under the institutional review board (IRB)-approved protocol, the plasma was separated from the blood by centrifugation for 15 min at 2000× g . The plasma was spiked with HCV varying from 2.8 × 10 7 to 28 virion copies/mL. The viral RNA was isolated from the plasma samples, utilizing the Dynabeads SILANE viral Nucleic Acid (NA) kit (Invitrogen). A total of 100 μL of plasma sample was used for the RNA isolation, following the manufacturer’s instruction. 
The RNA was then eluted in 50 μL of elution buffer and the RT-LAMP assay was carried out in a thermocycler, as described above. Fluorescence data was collected for the endpoint analysis. In addition, the amplification products of the LAMP reaction were analyzed by 1.5% gel electrophoresis. A total of 1 μL of SYBR green I nucleic A dye (Invitrogen) was added to the reaction solution after the isothermal amplification for the colorimetric visualization. Microfluidic chips were fabricated from three-layered poly(methyl methacrylate) (PMMA) sheets. The layers of the chip were attached together using double-sided adhesive (DSA) tape. show the thickness and dimension of each layer and the chambers of the microfluidic chip. a,b show the complete chip (top and vertical view). The chip layout was designed in AutoCAD, and a CO 2 laser cutter was utilized to obtain the chambers, as previously reported . Each microfluidic chip consists of 4 independent diamond-shaped aqueous chambers: one sample inlet chamber, two washing buffer chambers, and one reaction chamber separated by three elliptical-shaped valving chambers containing mineral oil (viscosity- 15 cSt). An unconnected oval-shaped sensor chamber adjacent to the reaction chamber is imprinted for sensor attachment. The assembled microfluidic chip was subjected to UV for 30 min. After 30 min, all the inlets were blocked with scotch tape. The scotch tape was removed from the microfluidic chip for the filling of reagents. The automated diagnostic platform designed and optimized by our lab is depicted in c . The diagnostic platform moves the magnetic beads automatically. The bead’s movement is directed by two small magnets (5 mm-diameter neodymium) fitted in a 3-D-printed inclusion. This 3-D-printed inclusion can move bidirectionally, and its movement is coordinated by a stepper motor. The stepper motor is guided by a printed circuit board (PCB) Arduino. 
The movement of the magnetic beads from one chamber to another and the incubation time were controlled by a G-code scripted in Python. We utilized an Arduino temperature controller to maintain the 70 °C temperature required for on-chip amplification . The sensor is placed in the sensor chamber. The microfluidic chip was loaded with reagents and buffers, as described in . The inlet chamber (a in ) contained the lysis/binding buffer + proteinase K + magnetic beads + isopropanol. Washing buffers I and II were added to the washing chambers (b and c in ) and their viscosity was increased by adding RNase/DNase-free water in a 1:1 ratio. The reaction chamber and sensor chamber (d and e in ) solution contained LavaLAMP RNA Master Mix, HCV-specific primers, and elution buffer. The MgSO 4 concentration of LavaLAMP RNA Master Mix was increased from 5 mM to 9.8 mM for on-chip testing. A total of 100 μL of plasma sample was loaded in the inlet chamber of the pre-filled chip and placed on the diagnostic set-up. Next, the heater and magnetic actuation were started concurrently, and the automated set-up was incubated for 45 min. SYBR green 1 dye was added to the reaction chamber after the isothermal incubation and mixed using the oscillatory movement of the magnetic beads to detect the reaction amplification products. All the on-chip experiments were repeated at least three times. HCV is comprised of seven distinct genotypes, which can differ by as much as 30% at the nucleotide level . Though sequence variations are evenly distributed throughout the genome, the 5’ UTR is conserved . The HCV RT-LAMP primers we designed are mapped on the highly conserved 5’ UTR region. The designed primer set was BLAST, against all the genomes present in the National Center of Biotechnology Information (NCBI). The results showed high diversity against other genomes and identify to the HCV genome 5’ UTR region. 
We performed an initial RT-LAMP assay to test the sensitivity and specificity of the primer set we designed ( A), utilizing a synthetic target HCV DNA sequence with different target concentrations (10 9 to 0 copies per reaction). The lowest LOD observed was 5 HCV copies/reaction within 40 min. A no-template control (NTC) confirmed the absence of primer dimer formation. The R 2 value (0.967) of the time to amplification relative to the concentration of the target substrate indicates that the amplification time of the target is inversely proportional to the target concentration in the reaction. These results were confirmed by analyzing the LAMP products by agarose gel electrophoresis. Amplification products can be observed in the reactions that amplify the target HCV sequences, forming long sharp bands ( B). The HCV primer set was tested by amplifying the cDNA obtained from the RNA genome of ZIKA virus, SARS-CoV-2, and HIV ( and ). The HCV primer sets did not amplify viral sequences from these viruses, confirming the specificity of the RT-LAMP primers. As a proof-of-principle, human plasma was spiked with a cultured HCV strain to replicate clinical samples. HCV preparations were added to the plasma samples with 10-fold dilutions, obtaining final concentrations ranging between 2.8 × 10 7 and 28 virions/mL. The viral RNA was extracted from the spiked plasma samples utilizing magnetic Dynabeads. The RT-LAMP of the samples yielded a LOD of 28 virions/mL within 60 min ( a). From this assay, a fitted linear trendline was observed with R 2 = 0.865, indicating the high efficiency of the RNA extraction/RT-LAMP analysis with limited amounts of virus. These results validate the efficiency of the magnetic beads’ RNA extraction and the fact that the RNA maintains its integrity throughout the purification steps. represents the gel results. 
A colorimetric test was performed utilizing SYBR green I dye, which is initially orange but turns to green in the presence of DNA, to enable a fast and simpler readout . The SYBR green I was added to the reaction tubes after the isothermal incubation and confirmed the LOD for the assay for the samples containing 28 virion copies/mL, which showed an attenuated shift to the green coloration when compared with the control reactions that retained the original orange color ( b). In recent years, molecular detection microfluidic platforms have transformed the healthcare landscape, as they offer the rapid detection of viruses and other pathogens . Herein, we have designed a microfluidic platform that incorporates different steps that are usually performed by trained personnel with sophisticated lab settings on a single platform. The microchip is designed with distinct shapes of chambers so that the solutions can be retained during the entire execution process. The diamond-shape chambers contain different buffers performing different tasks for optimal RNA purification. The inlet and reaction chambers are dual-purpose chambers, making the plasma processing steps less complex. These diamond chambers are separated by elliptical chambers which contain mineral oil. The oil reduces the interfacial energy barrier that facilitates the smooth passage of the beads from one buffer to another. The geometry of the chambers is designed in such a way that the magnetic beads align with magnets situated on the diagnostic platform. This provides an approachable magnetic field to the beads for flexible RNA isolation processing. Using a 750 µm PMMA sheet at the bottom also contributed to the maximum magnetic force, as the distance between the magnetic beads and the magnet was minimized. The magnetic beads used for the isolation process are composed of cross-linked polystyrene and magnetic material. 
Silica-like surface chemistry offers excellent binding of RNA, and the magnetic property of the beads aids with fast mobility under the magnetic field. Furthermore, given the beads’ low sediment rate, no adhesion was observed at the bottom. The incubation of the beads in each chamber is facilitated by oscillatory movement directed by a stepper motor on the platform, which is controlled by a G-code. represents the beads’ incubation time in each chamber, directed by the automated magnets located on the platform. For on-chip isothermal temperature, an automated Arduino-based temperature controller is used, which consists of a K-type thermocouple sensor and an ultrathin nano-carbon flexible heater. A K-type thermocouple sensor reads the real-time temperature. The heater heats the reagents in the sensor chamber and reaction chamber. To maintain the set isothermal temperature, the sensor reads the temperature every 2 s. Once the temperature is higher than the set temperature, the sensor turns “off” the heater, and if the temperature is lower than the set temperature, it turns “on” the heater. To avoid direct contact between the sensor and the reaction chamber solution, the sensor was inserted into the sensor chamber. The sensor chamber contains the same reagents as the reaction chamber, and one surface heater was attached on the top of both chambers. RNA degradation could lead to false-negative results in an RNA-based detection device. To address this potential problem, we exposed the RT-LAMP microfluidic chip to UV for 30 min, which eliminated traces of RNases from solid surfaces . On the microfluidic chip, the key role of the inlet chamber is to lyse the viral particles and to allow binding of the viral RNA to the magnetic beads in the presence of lysis and binding buffer. Once the RNA adheres to the surface of the magnetic beads, they are directed toward the reaction chamber under the influence of a magnetic field. 
The two washing chambers contain two different washing buffers to ensure that reagents and other potential inhibitory compounds are not carried over from the inlet chamber to the reaction chamber. Once the beads reach the reaction chamber, which is already maintained at 70 °C with the help of the sensor chamber and surface heater, they elute the RNA and move back to the washing chamber. The RT-LAMP HCV primers amplify the HCV RNA if it is present, changing the color of the chamber to green; if HCV RNA is absent, the color of the dye remains unchanged. For the on-chip assay, plasma samples were spiked with different amounts of HCV (from 2.8 × 10^6 to 282 virions/mL). A no-template control assay was carried out with non-spiked plasma samples. Since PMMA adsorbs polar molecules such as DNA and polymerase enzymes, which inhibits amplification, the addition of ions such as Mg2+, Na+, and K+ enhances enzyme activity and provides DNA stability. To enhance the on-chip amplification efficiency of the target, the MgSO4 concentration of the Master Mix in the reaction chamber was increased from 5 mM to 9.8 mM. a shows the color comparison of the reaction chambers of the microfluidic chip subjected to a non-spiked plasma sample (orange reaction chamber) and an HCV-spiked sample (green reaction chamber) after 45 min. b demonstrates the on-chip sensitivity of the set-up: 500 virions/mL within 45 min. At the same time, no color change was observed for the negative control (non-spiked plasma) or the 280 virions/mL sample. These results validate that the RT-LAMP-based microfluidic chip we developed can efficiently isolate and amplify HCV RNA. The molecular diagnostic set-up described here provides qualitative colorimetric results from plasma samples spiked with HCV. The data presented demonstrate that this compact microfluidic device could be utilized as a “sample-in–answer-out” system for HCV screening in low-to-middle-income areas.
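The spiked-plasma series and the colorimetric readout described above reduce to a simple threshold model. The sketch below is illustrative only: the 500 virions/mL cutoff is the observed on-chip sensitivity, and mapping concentration directly to a color is a simplification of the SYBR green chemistry.

```python
ON_CHIP_LOD = 500  # virions/mL, the on-chip sensitivity reported above

def dilution_series(start=2.8e6, steps=5, factor=10):
    """Ten-fold serial dilutions of HCV-spiked plasma (virions/mL)."""
    return [start / factor ** i for i in range(steps)]

def expected_readout(conc):
    """'green' when amplification is expected (>= LOD), else 'orange'."""
    return "green" if conc >= ON_CHIP_LOD else "orange"

series = dilution_series()                    # 2.8e6 down to 280 virions/mL
colors = [expected_readout(c) for c in series]
```

Under this model the lowest dilution (~280 virions/mL) stays orange while everything at or above 500 virions/mL turns green, matching the reported results.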
POC microfluidic platforms offer results comparable to gold-standard methods while using small quantities of reagents, and the sample can be tested outside the laboratory setting. A commercial device derived from the one described here, if utilized broadly, could play an essential role in managing HCV infection worldwide, controlling its spread and enabling timely treatment and monitoring of the disease. The 3-D-printed platform used in this research consists of an Arduino microcontroller, a stepper motor, and a power-supply circuit. The molecular diagnostic platform is fully programmed and reusable for repetitive testing. It can also be operated using batteries to direct the movement of the magnetic beads in the chambers of the disposable microfluidic chip. The operational cost of the diagnostic set-up would be minimal, as it uses relatively inexpensive and sustainable equipment (roughly USD 50) for sample processing and disease detection. This microfluidic chip also offers a shorter time to a reliable diagnosis of HCV infection. The disposable microfluidic chip we describe is affordable (roughly USD 2) and can be utilized in POC settings, as the assay is performed in an automated fashion. With this molecular diagnostic set-up, the user can run multiple tests, since attention is required only initially and at the end of the colorimetric analysis. contains the full list of materials and costs for the molecular diagnostic microfluidic chip set-up and fabrication. In this study, we developed a low-cost RNA-based POC molecular diagnostic set-up for HCV with a colorimetric result readout. The developed diagnostic method will enable timely HCV screening and prevention in high-risk populations. It is an accurate method that can be implemented in low-income areas, making it broadly accessible. A lack of overt symptoms in HCV patients often results in a lack of diagnosis, which subsequently leads to increased morbidity and mortality.
Therefore, it is important to implement regular screening in high-risk populations. Molecular testing is considered the most accurate test for diagnosing HCV infection. Our research describes a single-step procedure for the molecular detection of HCV RNA from a patient’s samples with clinical applicability. The microfluidic chip utilizes an automated system for the hands-off processing of plasma samples. The testing of human plasma samples spiked with HCV particles showed that this set-up offers a sensitivity of 500 viral copies/mL and high specificity, without the need for trained technicians, expensive equipment, or facilities. The hands-free microfluidic chip used for the testing is easy to assemble, low-cost, and provides a practical approach for large-scale testing outside the laboratory. The operating procedure of the chip is straightforward; once the plasma samples are introduced to the inlet chamber, the automated system will self-operate. The cost reduction, need for less equipment, and short time (45 min) required for viral RNA detection offered by this approach would substantially reduce the financial and operational burden of large-scale testing in low-to-middle-income countries. Overall, the HCV diagnostic test approach we propose can be clinically implemented for routine HCV screening and can help in the timely execution of preventive and therapeutic steps to limit HCV spread.
Multicentric Study on the Clinical Mycology Capacity and Access to Antifungal Treatment in Portugal
7863dafb-b1ee-455a-beaa-cb16fbbb5b66
10808446
Microbiology[mh]
Invasive fungal diseases (IFD) are a global burden affecting more than 150 million individuals worldwide and lead to more than 1.7 million deaths every year [ – ]. Furthermore, IFD are associated with a significant socioeconomic burden, due to the elevated number of hospitalizations and the extreme healthcare costs, estimated to surpass $7 billion annually in the US , with similar figures in Europe . With the advent of modern medicine and advances in medical care, the number of susceptible hosts, namely hematological and oncological patients, recipients of hematopoietic stem-cell and solid organ transplantation, and immunosuppressed patients receiving chronic steroids and biologic therapies, is increasing [ – ]. Other vulnerable settings include advanced age and intensive care, and severe debilitating conditions, namely respiratory viral infections, such as COVID-19 and influenza, which underlie the development of COVID-19-associated aspergillosis (CAPA) and COVID-19-associated mucormycosis , and influenza-associated aspergillosis (IAPA) , respectively. Traveling to endemic regions further potentiates the dissemination of endemic mycoses to non-endemic areas [ – ], which in the case of Portugal, may be the reflection of the high number of individuals of South American and African origin . Diagnosing IFD poses a significant challenge. Not only are available tools limited, but affected patients typically present non-specific symptoms or have underlying conditions masking the disease . Individuals with chronic obstructive pulmonary disease (COPD), or bronchiectasis, for instance, not only experience respiratory symptoms that overlap with those of fungal infections but also exhibit challenging radiological presentations since they may be difficult to distinguish from the inherent clinical presentations of the underlying condition itself . 
In some instances, such as the diagnosis of Candida infections, correctly distinguishing between colonization and active infection is also difficult. Early diagnosis, however, can improve treatment outcomes and potentially alleviate the financial burden associated with IFD. A major determinant of the accessibility of these techniques has been reported to lie in the economic status of the country, with a previous study on the diagnostic capacity of European institutions describing considerable differences among countries according to their gross domestic product (GDP). On the other hand, in some cases, such as Italy and Argentina, the resources seemed to be homogeneously distributed, with no significant differences among the evaluated national institutions. Portugal, a Western European country with an estimated population of approximately 10.3 million inhabitants, was categorized as a European median-income country in our previous nationwide survey, with a GDP within the US$30,000–$45,000 range. The Portuguese healthcare system includes both public and private medical care providers. The public healthcare system, known as the National Healthcare System (NHS), is a free healthcare service comprising all the public entities designated for the provision of medical care services. The NHS was created to provide equitable treatment among patients regardless of their socioeconomic status, being financially supported mainly through taxation. This medical system has shown good performance outcomes: for example, rates of avoidable hospitalizations for asthma and COPD, conditions well recognized for their established connection to an elevated risk of IFD development, ranked among the top performers within the Organization for Economic Cooperation and Development.
Our study aimed to describe the diagnostic capacity of IFD and the global access to antifungal treatments in public Portuguese institutions and identify the most critical aspects for improvement. A comprehensive questionnaire-based approach was implemented at multiple centers between November 1, 2021, to May 31, 2023. The data was gathered using an electronic case report form hosted on the website www.clinicalsurveys.net/uc/IFI_management_capacity/ (EFS Summer 2021, TIVIAN GmbH, Cologne, Germany). Rigorous validation of responses was undertaken to ensure the accuracy, coherence, and completeness of the data. The primary objective of the survey was to evaluate various critical aspects related to the diagnosis and treatment of IFD. These aspects encompassed the examination of institutional profiles, the assessment of perceptions regarding IFD incidence and significance at each institution, the exploration of microscopy techniques, culture, and fungal identification methods, analysis of serology approaches, antigen detection capabilities, molecular assays, and the availability of therapeutic drug monitoring (TDM). Participants were required to provide binary responses indicating the accessibility of specific methods at their respective locations. Additionally, for serology, antigen detection, molecular testing, and TDM, laboratories were asked to specify whether these services were available onsite or outsourced to external institutions. The prevalence of IFD was estimated using a Likert scale, allowing participants to rate the incidence on a scale from 1 (extremely low) to 5 (very high) (Table ). 
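The binary availability responses and Likert ratings described above reduce to simple frequency/percentage summaries (the analysis reported in the Methods). A minimal sketch, with illustrative counts mirroring those reported in the Results:

```python
def summarize(responses):
    """Frequency and percentage of positive (True) answers."""
    n = len(responses)
    freq = sum(responses)
    return freq, round(100 * freq / n, 1)

# e.g. 14 of the 16 institutions reporting a given capability
freq, pct = summarize([True] * 14 + [False] * 2)  # → (14, 87.5)
```

The same tally applied to 11 of 16 institutions yields 68.8%, matching the percentages quoted in the text.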
To ensure a diverse and representative group of participants, researchers reached out to individuals from various regions of Portugal through mass e-mails, targeting both close collaborators of the authors and members of key scientific organizations such as the International Society for Human and Animal Mycology (ISHAM; www.isham.org ) and the European Confederation of Medical Mycology (ECMM; www.ecmm.info ). The collected data were analyzed and summarized using frequencies and percentages. For all statistical analyses, SPSS v27.0 (SPSS, IBM Corp., Chicago, IL, United States) was utilized. During the study period, a total of 16 Portuguese institutions self-assessed their capability to manage invasive fungal infections (Fig. ). Table describes the baseline characteristics of the participating institutions in Portugal. Of the 16 participating institutions, 14 (87.5%) admitted patients with diabetes mellitus or in need of parenteral nutrition, 13 (81.3%) patients with solid or hematological cancer, and 12 (75%) patients with COVID-19, human immunodeficiency virus (HIV) infection, or blood or bone-marrow-related disorders. Participants were asked about their perception of the IFD incidence within their respective institutions. Of the 16 participating institutions, 11 (68.8%) reported a low to very low incidence of IFD. A high incidence of IFD was reported in 2 (12.5%) institutions, whereas a very high incidence was not perceived at any site. Participants also listed the most relevant pathogens at each institution, with Candida spp. (n = 16, 100%) being identified by all institutions as the most significant pathogen, followed by Aspergillus spp. (n = 12, 75%), Cryptococcus spp. (n = 5, 31.3%), Fusarium spp. (n = 2, 12.5%), and Mucorales (n = 1, 6.3%). Microscopy techniques were available in all 16 (100%) institutions (Table ). When cryptococcosis was suspected, microscopy was the method of choice for the direct examination of body fluids in most institutions (n = 14, 87.5%).
However, only 3 (18.8%) employed direct microscopy or silver staining for suspected cases of mucormycosis and pneumocystosis, respectively. Of note, less than half of the participating institutions (n = 7, 43.8%) had access to fluorescent dyes. China or India ink were the most widely available staining dyes (n = 15, 93.8%), followed by Giemsa staining (n = 10, 62.5%), potassium hydroxide (n = 8, 50%), silver staining (n = 4, 25%), and calcofluor white staining (n = 1, 6.3%). Access to culture media was also widespread (100%), with all 16 institutions offering tests for specific identification. The most prevalent type of test was based on automated identification through VITEK 2® and other commercial tests (n = 16, 100%), followed by classical biochemical tests (n = 11, 68.8%), matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) (n = 10, 62.5%), deoxyribonucleic acid (DNA) sequencing (n = 6, 37.5%) and macro and microscopic observation of the colonies (n = 5, 31.3%). Antifungal susceptibility testing was available in 13 (81.3%) institutions. The VITEK 2® system was the most popular technology employed in these settings (n = 10, 62.5%), followed by the E-test® (n = 9, 56.3%), and broth microdilution published by the European Committee on Antimicrobial Susceptibility Testing (EUCAST; n = 7, 43.8%) and Clinical and Laboratory Standards Institute (CLSI; n = 4, 25%). Remarkably, antifungal susceptibility tests were not available in 2 (12.5%) of the institutions. Molecular diagnosis is prevalent in Portugal with only 1 institution lacking this type of resource. Polymerase chain reaction (PCR) for Pneumocystis detection emerged as the most frequent among Portuguese institutions (n = 14, 87.5%), followed by Aspergillus (n = 12, 75%), Candida (n = 11, 68.8%), and Mucorales (n = 8, 50%). 
Of note, outsourcing of antibody and antigen detection and molecular techniques to external entities is a highly frequent practice, often prevailing over onsite procedures. Serological antibody detection was available in 14 (87.5%) institutions, of which 12 had the capacity to detect Aspergillus spp. (75%) antibodies (Table ). In contrast, less than half of the institutions were equipped to detect antibodies against Candida spp. and Histoplasma spp. (n = 7, 43.8%). Antigen-detection assays were available in nearly all institutions (n = 15, 93.8%) except for one. Aspergillus and Cryptococcus antigen assays were available in 12 institutions (75%), whereas Candida and Histoplasma assays were limited to only 7 (43.8%) and 6 (37.5%), respectively. Tests to detect galactomannan were available in half of the institutions. Among these, 5 (31.3%) resorted to enzyme-linked immunosorbent assay (ELISA), and 6 (37.5%) to both lateral flow assay (LFA) and lateral flow device (LFD). Regarding Cryptococcus , the most commonly used detection assay was the latex agglutination assay (n = 9, 56.3%), followed by LFA (n = 6, 37.5%). Beta-glucan was also targeted for detection in 7 (43.8%) institutions. All 13 institutions that operate as treating centers and are capable of providing antifungal therapy had access to at least one antifungal (Table ). The most frequently available antifungal agents in Portuguese institutions belonged to the triazole, polyene (n = 13, 100.0%), and echinocandin (n = 11, 84.6%) classes. Among these classes, the most common drugs available included liposomal amphotericin B (n = 11, 84.6%) within the polyene formulations, voriconazole and fluconazole (n = 13, 100.0%) within the triazoles, and micafungin (n = 11, 84.6%) within the echinocandins. The availability of other drugs, namely the allylamine terbinafine (n = 7, 53.8%) or the pyrimidine analog flucytosine (n = 5, 38.5%), was less frequent. 
According to the obtained data, TDM remains a very limited practice in Portugal, with only 3 institutions (18.8%) engaging in this type of patient follow-up. Three (18.8%) institutions were able to monitor patients receiving voriconazole, whereas only 2 (12.5%) and 1 (6.3%) sites, respectively, had flucytosine, and itraconazole and posaconazole monitoring available. However, this monitoring was conducted mainly through outsourcing, with only 1 institution performing flucytosine and voriconazole onsite. We conducted a systematic analysis of the capacity of Portuguese institutions to diagnose and treat IFD. Overall, over half of the participating institutions documented a low IFD incidence. However, it is important to note the subjective nature of the reported perceptions. Candida spp. was identified as the most prominent pathogen, followed by Aspergillus spp. and Cryptococcus spp., which collectively occupy the top positions on the World Health Organization (WHO) fungal priority pathogen list . All institutions were prepared for fungal identification using culture and microscopy techniques, with the majority also endowed with antibody detection by serology, molecular tests, and antigen-detection assays. Antifungal drugs were generally accessible, with triazoles and amphotericin B being more widely available than echinocandins. TDM, however, stood out as one of the most important limitations, showing very restricted implementation. Portugal follows the same pathogen prevalence pattern observed in European settings . Notably, while other European countries, such as Italy and Austria , identified Aspergillus spp. as their most important pathogen, Portugal aligns with the overall trends in Asian and African countries, where Candida spp. stands out as the predominant fungal pathogen. This mirrors older reports on the epidemiology of IFD in which Candida species were the main etiological agent . 
However, the epidemiological landscape has markedly changed, with not only an increase in the prevalence of Aspergillus species but also the emergence of rare fungi, such as Fusarium and Mucorales . The relevance given to Aspergillus spp. also follows a previous report from a Portuguese multicentric surveillance program, in which Aspergillus spp. was identified as the most predominant etiological agent of IFD . This change in epidemiology is, at least in part, aligned with the widespread introduction of broad-spectrum antifungal prophylaxis. These preventive measures have been adopted to circumvent the challenges associated with diagnosis. Nonetheless, they often lead to the positive selection of resistant strains, possibly contributing to the emergence of difficult-to-treat fungi . Although Portugal follows the same overall epidemiological pattern observed in Europe, Cryptococcus spp. exhibited a higher prevalence. A previous study had already established the higher incidence of cryptococcal meningitis among human immunodeficiency virus (HIV)-infected patients in Portugal compared to other European countries [ , – ], a difference that might be attributed to the high number of patients without antiretroviral treatment. In contrast to the European landscape, and despite being less frequent, Fusarium was more prevalent in the Portuguese territory than fungi from the Mucorales order. Indeed, the incidence of fusariosis in Portugal has been increasing since the 90s. Although this survey did not include data from Madeira Island, previous reports state the introduction and settling of Fusarium in this geographical area . Importantly, the geoclimatic conditions of Madeira can be suitable for the development and dispersion of these plant-pathogenic fungi . However, no genotyping studies are available to allow the comparison of the involved fungal strains and confirm this hypothesis. 
Although Portugal is not considered an endemic area for dimorphic fungi, the Portuguese multicentric surveillance program demonstrated a relatively high frequency of infections caused by these pathogens. Case reports have documented IFD caused by other endemic fungi, such as Paracoccidioides brasiliensis and Histoplasma spp., which could be explained by the significant influx of immigrants from Africa and Brazil. While our study did not report these endemic fungi, not all eligible institutions were surveyed, and there is also the possibility of misdiagnosis or underdiagnosis at the participating centers. With this in mind, it is crucial to consider endemic mycoses in the clinical diagnosis of immunocompromised patients who were born in or have traveled to endemic areas. Direct microscopy is the method of choice in Portugal and Europe for fungal identification. Among the stains used, China/India ink and Giemsa are the most commonly accessible. Although China/India ink is mostly used for Cryptococcus detection and is associated with low sensitivity, its accessible price puts it at the top of the available stains. The popularity of Giemsa stain might be explained not only by its efficient staining but also by its cost-effectiveness. Conversely, the fluorescent dye calcofluor white is a very scarce resource across institutions due to its comparatively higher cost, compromising the identification of Aspergillus and Mucorales, for which it is strongly recommended [ – ]. This limitation is in line with what has been reported among European institutions, where countries with a GDP below $45,000, such as Portugal, had limited or no access to this fluorescent dye. Besides microscopy, the identification of fungal species relied predominantly on automated identification, biochemical tests, and mounting medium.
Of note, while in countries with a GDP exceeding $45,000, advanced and expensive diagnostic techniques were generally more accessible, countries with lower GDPs relied more frequently on more cost-effective techniques, considered obsolete, and surpassed by superior methods in high-income countries. These observations illustrate how the effectiveness of IFD diagnosis is influenced by the economic power of a country and how patient outcomes are contingent upon this factor. Of interest, MALDI-TOF MS was the most commonly available resource after automated identification and biochemical tests, being the most effective platform for microorganism identification . Strong performance and notable investment-to-return ratios have rendered Portugal well-prepared in the most recent Euro Health Consumer Index Report (2018), an annual classification of national healthcare systems in Europe . This philosophy adopted by the NHS might explain the high availability of MALDI-TOF MS, despite its price. Antibody and antigen detection assays are readily available, with over half of the institutions outsourcing these techniques. The exception is the detection of Cryptococcus spp. antigen, which is primarily conducted onsite. Molecular diagnosis is also often performed by external entities, although Pneumocystis spp. PCR is widely accessible and frequently performed onsite. It is important to note that while outsourcing widens the pool of resources available, it also comes with significant bureaucratic processes, including requests and approvals that can cause delays in responses, hindering timely clinical decisions. Hence, prioritizing the implementation of onsite resources should be a primary focus when aiming to improve the diagnosis and treatment of IFD. The frequency of antifungal susceptibility testing in Portugal is lower than the overall European frequency . 
Antifungal susceptibility testing is a resource in need of improvement, as its availability is not only below the European average but also the number of institutions conducting it onsite is limited. However, it is important to mention that Portugal is still ahead of other non-European countries, namely from the African continent , and Latin America and the Caribbean , which have significantly larger populations and face critical situations regarding fungal infections. In recent years, the emergence of azole-resistant Aspergillus strains has substantially risen in Portugal, with the use of antifungal drugs per capita increasing more than in other European countries [ – ]. The release of over-the-counter antifungals and the increased industry marketing have encouraged and promoted self-medication, enabling uncontrolled, inadequate, and prolonged azole utilization. Moreover, azoles are also widely used in the agriculture and wood industry, contributing to prolonged exposure and resistance development [ – ]. To tackle this concern and prevent therapeutic failures, it is crucial to prioritize antifungal susceptibility testing of clinical isolates. This practice ensures the accurate identification of fungal species to tailor antifungal treatment strategies, avoiding the emergence of resistant strains. Despite Portugal being well-prepared in comparison to other European countries, this is still an area requiring improvement, as susceptibility testing should be performed routinely and include both yeast and mold-directed tests. A straightforward approach to help mitigate this challenge could involve, for instance, the use of multiplex real-time PCR assays, which would allow for the direct differentiation between susceptible and resistant strains. 
According to the WHO, the most essential systemic antifungals encompass amphotericin B, in deoxycholate and liposomal formulations, anidulafungin, caspofungin, fluconazole, flucytosine, itraconazole, micafungin, and voriconazole . Amphotericin B deoxycholate and lipid complex, anidulafungin, and flucytosine presented the lowest availability among Portuguese institutions. These numbers might be explained in part by the high anidulafungin costs when compared to other interchangeable echinocandins and the difficulties in acquiring amphotericin B deoxycholate and flucytosine in Portugal . It is worth noting, however, that not all participating institutions have the ability to administer antifungal treatment, as they do not operate as treating centers. Despite the remaining drugs presenting better availability, Portugal still falls behind the European average for median-income countries in terms of access to echinocandins and triazoles, suggesting it might not be adequately prepared compared to counterparts with similar economic profiles. For instance, despite Candida spp. being perceived as the most significant fungal pathogen, echinocandins are the least frequently used antifungal. Our study thus highlights the need to improve the accessibility and distribution of antifungal drugs in Portugal to align with the country's specific fungal pathogen profile. Portugal is still very much behind concerning TDM, with only a few institutions with the capacity to conduct such analyses. This process holds significant importance as it provides insight into appropriate drug dosages and potential adverse effects, enabling treatment optimizations . Besides flucytosine, only the prescription of certain triazoles is monitored in these institutions. This represents a critical limitation since not all the antifungal classes are being covered, with a specific focus on echinocandins, which should be of special interest due to the high incidence of invasive Candida infections. 
This situation is particularly worrisome as most of the limited TDM procedures conducted are outsourced, and it is often not possible to rely on the same provider for this follow-up service. Our study has several limitations that should be acknowledged. The available information and data do not include all the institutions that manage IFD in Portugal. Also important is the lack of information concerning the turnaround time of the available tests, particularly when outsourced, as this could impact their clinical effectiveness. The absence of data regarding the outsourcing of critical techniques, namely DNA sequencing and antifungal susceptibility testing, is also a limitation. Finally, data collection for this survey took place during the COVID-19 pandemic. This context may have imposed time restrictions and an increased workload on laboratory professionals, microbiologists, and infectious disease specialists, potentially affecting the accuracy of the survey. Furthermore, the data obtained from institutions with greater experience and capacity in diagnosing and treating IFD may not fully represent smaller institutions. Variations in resources, expertise, and infrastructure across different types of healthcare facilities can significantly impact the management of these infections. In summary, Portugal is well prepared to manage IFD, but there are limitations to be addressed. These include ensuring widespread access to specific diagnostic tools, reducing the high rates of outsourcing, improving the availability and suitability of antifungal drugs, and prioritizing the implementation of TDM across institutions. Addressing these limitations is crucial for facilitating earlier diagnosis and effective treatment of patients, ultimately improving outcomes in the management of IFD in Portugal.
Response of soil microecology to different cropping practice under
4e6f68a5-121a-4d7e-b17b-5b29b73f1934
9494904
Microbiology[mh]
Bupleurum chinense (Apiaceae) is an important medicinal plant that has been used in China and other Asian countries for thousands of years. The plant has many important properties, such as anti-inflammatory, liver-protecting, anti-depressant, anti-tumor, and immunomodulatory activities, and is widely used in the clinical treatment of fever, influenza, malaria, distending pain in the chest, menstrual disorders, and other symptoms. The active components of Bupleuri radix mainly include saponins, polysaccharides, essential oil, flavones, and coumarin. These compounds are not only related to the Bupleurum germplasm, but are also influenced by the production environment and cropping practices. In recent decades, as a result of rapid development and large-scale cultivation, the planting area of B. chinense has expanded substantially. However, due to the limited available land and the pursuit of maximum economic benefit, the continuous cultivation of B. chinense is becoming increasingly common. Studies have shown that long-term continuous cropping results in a decreased abundance of beneficial microorganisms in the soil, an increase in pathogenic microorganisms, and a decrease in the yield and quality of medicinal materials. For example, the continuous planting of American ginseng and Sophora flavescens not only weakened soil microbial diversity and amassed fungal root pathogens, but also changed soil physical properties, resulting in decreased crop yield and quality. Studies have shown that multiple cropping systems [characterized by more than one crop grown together, either mixed in space (intercropping) or in time (crop rotation)] can effectively alleviate the problems associated with monocropping. Intercropping, in which two or more crops are planted in the same field, can increase the absorption of trace elements, improve soil fertility, and reduce the risk of pests and diseases.
For example, the intercropping of turmeric, ginger, and patchouli not only changed the soil physical properties and the microbial community structure, but also improved the quality of patchouli. Crop rotation involves the systematic rotation of different types of crops in the same field. Crop rotation can balance soil nutrients, improve soil chemical properties, increase the abundance of beneficial microorganisms, and enhance disease resistance. For example, the Pinellia ternata–wheat rotation improved the soil microecological environment, enriched beneficial microorganisms, and diminished pathogenic microorganisms. However, the effects of cropping practices on the rhizosphere soil microecology of B. chinense have not been studied in detail, especially the dynamic changes in rhizosphere soil microorganisms and soil physical and chemical properties after the continuous planting of B. chinense. This lack of knowledge hinders the development of the B. chinense planting industry. The objective of this study was to investigate the effect of cropping practices on the rhizosphere soil microecology of B. chinense. A high-throughput Illumina MiSeq sequencing platform was used to determine the microbial community structures in the B. chinense rhizosphere soil under different cropping practices. The chemical properties of the rhizosphere soil were determined by previously reported methods. Our study could provide a new basis for overcoming continuous-cropping obstacles and promote the development of the B. chinense planting industry.

Field experiment

The experimental site was a trial plot of Shandong University of Chinese Medicine, Shandong Province, China (117°22′54″ E 36°35′27″ N, altitude 524 m). The annual average sunshine was 2647.6 h, and the sunshine rate was 60%. The annual average temperature was 12.8 °C, and the annual average precipitation was 600.8 mm. The soil type was brown soil. The field experiment was conducted from June 2016 to October 2020.
The field trial area was divided into three plots of 5 m × 5 m each. Three treatments were implemented: B. chinense continuous cultivation (BCC), B. chinense intercropped with corn (BIC) and growing corn after B. chinense (BCR); each treatment had three replicates. The cultivation time and sowing of Bupleurum seeds and corn are shown in Table . All the experimental plots were subjected to the same field management practices, including manual weeding, no fertilizer and no watering. During the experiment, the soil microbial and chemical characteristics were analyzed for three consecutive years to assess temporal variation. After the flowering of B. chinense in September of the second year, soil samples of the three cropping treatments were taken for comparative analysis. Collection of soil samples Rhizosphere soil samples were collected in October 2020. Rhizosphere soil samples from 30 plants were collected from five different sites in each experimental plot using the Z-type method. Then, the 30 rhizosphere samples were combined into a composite sample. There were triplicate rhizosphere soil samples for each treatment. First, the loose soil was shaken off the roots (root depth was about 10 cm), and the soil closely adhering to the root system was sampled as rhizosphere soil by brushing it off. The collected soil was then placed in a sealed sterile bag and taken back to the laboratory. Each soil sample was divided into two subsamples: one for chemical analysis, and the other was stored at − 20 °C for microbial analysis. Chemical properties After air-drying, the pH value of the soil was measured using a pH meter (pHS-3S) at a 2.5:1 water:soil ratio, and the contents of soil organic matter (SOM) (SOM = SOC × 1.724), available phosphorus (Ava-P), and available potassium (Ava-K) were determined by the methods reported by Qu et al. Determination of NO 3 − -N and NH 4 + -N in soil was done by UV spectrometry as reported by Xing et al.
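The SOM conversion noted above (SOM = SOC × 1.724, the conventional Van Bemmelen factor) is simple enough to sketch; the SOC values below are hypothetical and serve only to illustrate the arithmetic:

```python
# Convert soil organic carbon (SOC) to soil organic matter (SOM) with the
# Van Bemmelen factor used in the Methods (SOM = SOC * 1.724).
VAN_BEMMELEN = 1.724

def soc_to_som(soc_g_per_kg):
    """Return SOM (g/kg) for a given SOC content (g/kg)."""
    return soc_g_per_kg * VAN_BEMMELEN

# Hypothetical SOC measurements (g/kg) for three replicate plots
for soc in (8.0, 10.5, 12.3):
    print(f"SOC {soc:5.1f} g/kg -> SOM {soc_to_som(soc):6.2f} g/kg")
```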
DNA extraction A PowerSoil DNA Isolation Kit (MoBio Laboratories, Carlsbad, CA, USA) was used to extract DNA from the soil samples according to the manufacturer's protocol. The extracted DNA was eluted in 100 μL sterile water, quantified using a NanoDrop 2000 spectrophotometer (Thermo Scientific, Canada) and stored at − 20 °C for further use. PCR amplification and Illumina MiSeq sequencing The V4-V5 region of bacterial 16S rDNA was amplified using the primers 515F 5′-GTGCCAGCMGCCGCGGTAA-3′ and 926R 5′-CCGTCAATTCMTTTGAGTTT-3′, whereas the fungal ITS1 region was amplified using the forward primer 5′-CTTGGTCATTTAGAGGAAGTAA-3′ and the reverse primer 5′-GCTGCGTTCTTCATCGATGC-3′. The primers also contained the Illumina 5′-overhang adapter sequences for two-step amplicon library construction, following the manufacturer's instructions. The initial PCR reactions were carried out in 50 μL reaction volumes with 1–2 μL of DNA template, 200 μM dNTPs, 0.2 μM of each primer, 10 μL of 5× reaction buffer, and 1 U of Phusion DNA Polymerase (New England Biolabs, USA). PCR conditions consisted of initial denaturation at 94 °C for 2 min, followed by 25 cycles of denaturation at 94 °C for 30 s, annealing at 56 °C for 30 s and extension at 72 °C for 30 s, with a final extension at 72 °C for 5 min. The barcoded PCR products were purified using a DNA gel extraction kit (Axygen, USA) and quantified using an FTC-3000 real-time PCR system (Funglyn, Shanghai). The PCR products from different samples were mixed at equal ratios. A second-step PCR with dual 8 bp barcodes was used for multiplexing: eight-cycle PCR reactions were used to incorporate two unique barcodes at either end of the amplicons. Cycling conditions consisted of 1 cycle at 94 °C for 3 min, followed by 8 cycles at 94 °C for 30 s, 56 °C for 30 s and 72 °C for 30 s, and a final extension at 72 °C for 5 min.
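As a quick sanity check on primers such as those listed above, the GC content can be computed directly from the sequence; degenerate IUPAC bases (here M = A or C) make the GC fraction a range rather than a single value. A minimal sketch:

```python
# GC-content range for a primer containing IUPAC degenerate bases.
# Bases that are always G or C count toward the lower bound; a degenerate
# base that *may* be G or C (e.g. M = A or C) widens the upper bound only.
ALWAYS_GC = set("GCS")          # S = G or C
MAYBE_GC = set("MRYKVHDBN")     # degenerate codes that can resolve to G or C

def gc_range(seq):
    seq = seq.upper()
    strong = sum(b in ALWAYS_GC for b in seq)
    maybe = sum(b in MAYBE_GC for b in seq)
    n = len(seq)
    return strong / n, (strong + maybe) / n

# 515F primer from the Methods (contains one M = A or C)
lo, hi = gc_range("GTGCCAGCMGCCGCGGTAA")
print(f"515F GC content: {lo:.1%} - {hi:.1%}")
```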
The library was purified using a DNA gel extraction kit (Axygen, USA) and sequenced by 2 × 250 bp paired-end sequencing on a NovaSeq platform using a NovaSeq 6000 SP 500 Cycle Reagent Kit (Illumina, USA) at TinyGen Bio-Tech (Shanghai) Co., Ltd. Illumina data analysis The raw fastq files were demultiplexed based on the barcodes. The paired-end reads for all samples were run through Trimmomatic (version 0.35) to remove low-quality base pairs using the parameters SLIDINGWINDOW 50:20 and MINLEN 50. Adapter sequences were then removed from the trimmed reads using Cutadapt (version 1.16), and the reads were merged using the FLASH program (version 1.2.11) with default parameters. The sequences were analyzed using a combination of the software Mothur (version 1.33.3), UPARSE (usearch version v8.1.1756, http://drive5.com/uparse/ ), and R (version 3.6.3). The demultiplexed reads were clustered at 97% sequence identity into operational taxonomic units (OTUs). Singleton OTUs were deleted using the UPARSE pipeline ( http://drive5.com/usearch/manual/uparse_cmds.html ). The representative bacterial OTU sequences were assigned taxonomically against the Silva 128 database (fungal ITS sequences against the Unite database) with a confidence score ≥ 0.6 using the classify.seqs command in Mothur. The indices of alpha diversity were calculated with Mothur. For the beta diversity analysis, the Weighted UniFrac distance algorithm was used to calculate the distances between samples. In the LEfSe analysis, the linear discriminant analysis (LDA) score was computed for taxa differentially abundant among the treatments. A taxon with P < 0.05 (Kruskal–Wallis test) and log10[LDA] ≥ 2.0 (or ≤ − 2.0) was considered significant. Statistical and visual analysis of rarefaction curves, community structure histograms, NMDS and RDA was performed in R (version 3.6.3). PICRUSt and FUNGuild were used to predict the functions of bacterial and fungal gene sequences, respectively.
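The biomarker criterion described above (Kruskal–Wallis P < 0.05 combined with |log10 LDA| ≥ 2.0) can be sketched outside the LEfSe pipeline; the per-treatment abundances and the LDA score below are hypothetical, since real LDA scores come from LEfSe itself:

```python
# Sketch of the LEfSe-style significance filter: a taxon is kept as a
# biomarker only if the Kruskal-Wallis test across treatment groups is
# significant (P < 0.05) AND the absolute log10 LDA score is >= 2.0.
from scipy.stats import kruskal

def is_biomarker(groups, lda_log10, alpha=0.05, lda_cut=2.0):
    """groups: per-treatment abundance replicates for one taxon."""
    _, p = kruskal(*groups)
    return bool(p < alpha) and abs(lda_log10) >= lda_cut

# Hypothetical relative abundances (%) in BCC, BIC and BCR replicates
bcc, bic, bcr = [5.0, 5.1, 5.3], [1.1, 1.2, 1.3], [0.7, 0.8, 0.9]
print(is_biomarker((bcc, bic, bcr), lda_log10=3.1))  # clearly enriched in BCC: True
print(is_biomarker((bcc, bic, bcr), lda_log10=1.2))  # effect size too small: False
```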
All statistical analyses were performed using SPSS Statistics 21.0. The data on the chemical properties and microbial diversity of rhizosphere soil were analyzed by Duncan’s multiple range test. Differences in the relative abundances of microbial taxa among treatments were analyzed using one-way analysis of variance (ANOVA) at the 0.05 probability level. The effect of cropping practices on the rhizosphere soil chemical properties The chemical properties of B. chinense rhizosphere soil in different treatments are shown in Table . Compared with intercropping and crop rotation, soil pH and the contents of NO 3 − -N and Ava-K decreased after continuous planting of B. chinense , but the Ava-P content increased. The chemical parameters of rhizosphere soil differed significantly among the treatments , except for the NO 3 − -N content. Amplicon sequencing and rarefaction curves To characterize the microbiome in the B. chinense rhizosphere soil in different cropping practices, nine samples were sequenced by Illumina MiSeq.
The amplicon sequencing resulted in 450,038 effective reads of bacterial 16S rRNA genes and 437,141 effective reads of the fungal ITS region. Based on 97% similarity, the OTUs of the microbial community in the rhizosphere soil were obtained. The results are shown in supplementary Table . To construct rarefaction curves, the dataset was rarefied to the minimum number of sequences per sample. The rarefaction curves of the nine rhizosphere soil samples were constructed based on the number of OTUs observed (Supplementary Fig. ). The rarefaction curves showed that the number of OTUs rose sharply and then gradually flattened out, indicating that the sequencing library reached saturation. Therefore, it could be used for analyzing the diversity of microorganisms in the rhizosphere soil of B. chinense . Alpha diversity of bacterial and fungal communities The alpha diversity represents the measurement of within-community microbial diversity (Table ). Theoretically, the larger the Shannon index or the smaller the Simpson index, the higher the community diversity. According to the Shannon index, bacterial diversity was highest (6.513) in the rhizosphere soil of the rotation of B. chinense and corn, followed by continuous monocropping (6.421) and intercropping of B. chinense and corn (6.328). The Simpson index analysis confirmed this ranking. The Shannon and Simpson index values for fungal communities in the rhizosphere of B. chinense -corn intercropping were 4.401 and 0.029, respectively, followed by those of rotation with corn (4.250 and 0.033, respectively), and the lowest diversity values were in B. chinense monocropping (4.201 and 0.049, respectively). Thus, rotation with corn had the strongest effect on rhizosphere bacterial diversity, whereas intercropping with corn had the strongest effect on fungal diversity. In summary, the cropping practices had an important effect on the diversity of rhizosphere microorganisms.
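The two indices reported here move in opposite directions, as noted above. A minimal sketch of both, using natural logarithms for Shannon and Simpson's dominance D = Σ p_i² (Mothur's exact estimators may differ slightly, and the OTU counts are hypothetical):

```python
import math

def shannon(counts):
    """Shannon index H' = -sum(p_i * ln p_i); larger means more diverse."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def simpson(counts):
    """Simpson dominance D = sum(p_i^2); smaller means more diverse."""
    n = sum(counts)
    return sum((c / n) ** 2 for c in counts)

even = [25, 25, 25, 25]   # perfectly even community of 4 OTUs
skewed = [85, 5, 5, 5]    # one dominant OTU
print(f"even:   H'={shannon(even):.3f}  D={simpson(even):.3f}")
print(f"skewed: H'={shannon(skewed):.3f}  D={simpson(skewed):.3f}")
```

The even community scores a higher Shannon index and a lower Simpson index than the skewed one, matching the interpretation given in the text.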
Beta diversity of bacterial and fungal communities In order to shed more light on the differences in microbial community structure, NMDS analysis was performed based on the Weighted UniFrac distance (Fig. ), and the samples could be divided into three groups according to the species composition in the B. chinense rhizosphere. There were similarities in the structure of microbial communities within the treatments and significant differences among the treatments, which indicated that the cropping practices in the same field strongly influenced the composition of microbial communities in the B. chinense rhizosphere. The composition and structure of the bacterial community In order to clarify the microbial community structure in the B. chinense rhizosphere, two taxonomic levels (phylum and genus) were analyzed. As shown in Fig. A, 13 bacterial phyla were detected in the soil from the different cropping practices. The dominant bacterial phylum in the B. chinense rhizosphere soil was Proteobacteria, followed by Actinobacteria, Acidobacteria, and Chloroflexi. Compared with BIC and BCR, continuous cropping of B. chinense for 3 years resulted in higher abundance of Proteobacteria and Actinobacteria, but lower abundance of Acidobacteria. At the genus level (Fig. A), 79 bacterial genera were detected in the rhizosphere soil from the different cropping practices. The dominant genera were Pseudarthrobacter , Microvirga , Gaiella , Nitrospira , and Pirellula . Compared with the intercropping and rotation of B. chinense and corn, the relative abundance of Pseudarthrobacter and Gaiella increased after continuous cropping. However, the relative abundance of Microvirga and Nitrospira showed a downward trend after continuous cropping and intercropping (Fig. A). The composition and structure of the fungal community As shown in Fig. B, five fungal phyla were detected in the soil from the different cropping practices.
The dominant fungal phyla were Ascomycota, Basidiomycota and Zygomycota. The relative abundance of Ascomycota decreased after continuous cropping and intercropping, but increased after rotation with corn. The relative abundance of Basidiomycota increased after continuous cropping, but decreased after intercropping and rotation. At the genus level (Fig. B), 60 fungal genera were detected in the soil from the different cropping practices. The dominant fungal genera in the rhizosphere soil were Gibberella , Cercophora , Fusarium , Chaetomium , Mortierella , Preussia , Cryptococcus , Alternaria , unclassified _ Ascobolaceae , Cladorrhinum , Paraphoma , Knufia , and Cladosporium . After 3 years of continuous cultivation of B. chinense , the relative abundance of Cercophora , Cryptococcus , Alternaria , Paraphoma and Cladosporium increased, but the relative abundance of Chaetomium , Mortierella , Preussia and Cladorrhinum significantly decreased (Fig. B). Correlation analysis of dominant microorganisms and soil properties Soil chemical properties were important explanatory factors that determined the clustering patterns of soil microbial communities in the different cropping treatments. The chemical properties of the B. chinense rhizosphere soil were significantly different under the different cropping practices (Table ). Therefore, redundancy analysis (RDA) was conducted on the relative abundance of the dominant bacterial and fungal genera and the soil chemical factors (Fig. ). The results showed that the cumulative variation explained by the soil chemical properties was 87.84 and 59.31% for bacteria and fungi, respectively, indicating that these explanatory variables had a significant influence on the structure of the microbial communities. The effects of the soil chemical properties on bacteria and fungi were in the order NH 4 + -N > SOM > Ava-K > pH > NO 3 − -N > Ava-P and NH 4 + -N > SOM > Ava-K > Ava-P > pH > NO 3 − -N, respectively (Fig. ).
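The "variance explained" figures above come from redundancy analysis. At its core, RDA regresses the centered community matrix on the soil variables and measures how much of the total community variance the fitted values capture; a PCA of those fitted values then yields the RDA axes. A bare-bones sketch with hypothetical data (this illustrates only the constrained-variance idea, not the full vegan/Canoco procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 9 samples x 5 dominant genera (abundances) and
# 9 samples x 3 soil variables standing in for NH4+-N, SOM and Ava-K.
Y = rng.random((9, 5))   # community matrix
X = rng.random((9, 3))   # explanatory (soil chemistry) matrix

# Center both matrices column-wise
Yc = Y - Y.mean(axis=0)
Xc = X - X.mean(axis=0)

# Step 1 of RDA: multivariate least-squares regression of Y on X
B, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
Y_fit = Xc @ B

# Constrained (explained) fraction of the total community variance
explained = (Y_fit ** 2).sum() / (Yc ** 2).sum()
print(f"variance explained by soil variables: {explained:.1%}")
```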
In conclusion, NH 4 + -N, SOM and Ava-K were the main chemical properties that affected the microbial abundance and composition in the Bupleurum rhizosphere soil. Biomarker analysis In order to identify the dominant microbial biomarkers in the B. chinense rhizosphere soil under the different cropping practices, linear discriminant analysis (LDA) effect size (LEfSe) analysis was carried out (Fig. ). The LDA results identified 30, 33 and 55 bacterial biomarkers in continuous monocropping, intercropping and rotation with corn, respectively (Fig. A). The most abundant bacterial family in the B. chinense continuous monocropping soil was Comamonadaceae. Rhizobium giardinii , Desulfurellaceae and Burkholderiales were abundant in the rhizosphere of B. chinense intercropped with corn, whereas Methylobacteriaceae and Microvirga were significantly enriched in the rhizosphere of B. chinense in rotation with corn. For the fungal community, we identified 92, 57 and 34 fungal biomarkers in continuous monocropping, intercropping and rotation with corn, respectively (Fig. B). The relatively abundant biomarker fungal taxa included Dothideomycetes and Pleosporales in B. chinense continuous monocropping, Chaetomiaceae, Mortierellaceae and Zygomycota in B. chinense intercropped with corn, and Nectriaceae, Chytridiomycetes and Rhizophlyctidales in B. chinense in rotation with corn. Functional analysis In order to explore the functional changes in soil bacteria in the different cropping treatments, six categories of biological metabolic pathways (main functional levels) were identified by comparison with the KEGG database. These included metabolism, genetic information processing, environmental information processing, cellular processes, human diseases, and organismal systems, accounting for 66, 11, 8, 7, 5, and 3%, respectively.
In addition, 24 sub-functions such as amino acid metabolism, energy metabolism, metabolism of cofactors and vitamins, and translation were found by analyzing the secondary functional layers of the predicted genes (Fig. A). The secondary pathways of B. chinense under the different cropping treatments were similar; that is, carbohydrate metabolism and amino acid metabolism were significantly higher than the other metabolic pathways, and both were higher in continuous cropping of B. chinense than in the other cropping patterns. According to the FUNGuild database, at least eight trophic modes were detected in this study, whereby saprotrophs were most abundant, followed by the pathotroph-saprotroph-symbiotroph and pathotroph-saprotroph modes. The relative abundance of fungal functions varied significantly among the treatments. Compared with intercropping and rotation, pathotrophs, pathotroph-symbiotrophs, and pathotroph-saprotrophs were most abundant in continuous cropping (Fig. B). Soil chemical characteristics are important indices for evaluating soil quality. Cropping practices influence not only the chemical properties of soil, but also the composition of rhizosphere microorganisms. Therefore, elucidating the changes in soil chemical properties can provide a basis for characterizing soil productivity under different cropping practices. A decrease in soil nutrients was associated with a decrease in the diversity of the rhizosphere microbial community, which is one of the main causes of continuous-cropping problems. However, intercropping and rotation increased soil nutrient contents, thereby increasing the diversity of the rhizosphere microbial community and alleviating continuous-cropping problems. In our study, soil pH and the contents of NO 3 − -N and Ava-K decreased after continuous cropping of B. chinense , but increased after intercropping and rotation with corn. Studies have shown that continuous monocropping systems have a negative impact on soil function and sustainability. Soil nutrient contents such as SOM, Ava-P, Ava-K, NO 3 − -N and NH 4 + -N showed a decreasing trend after continuous cropping. However, rotation or intercropping with corn effectively alleviated this decline and imbalance in soil nutrients caused by continuous monocropping. Our experiment, confirming the above findings, can contribute to the sustainable development of the B. chinense planting industry through rotation of B. chinense with corn.
Studies have shown that soil organic matter is a key factor affecting soil microbial community diversity, and that a high soil organic matter content is conducive to improving soil bacterial community diversity . In our study, rhizosphere soil bacterial diversity increased after B. chinense rotation but decreased after B. chinense intercropping with corn, which might be related to the decrease in organic matter content after intercropping and the increase after rotation. Higher soil microbial community diversity is indicative of higher soil health and plant productivity . Soil microbial diversity not only has an important impact on soil quality, function and sustainability , but is also a key factor in the control of pathogenic microorganisms . Therefore, the loss of soil microbial diversity and function is one of the reasons for poor crop growth under continuous monocropping. To ensure the accuracy and reliability of the results, the soil microbial and chemical properties were analyzed at three consecutive time points during the experiment. Our results showed that the cropping practice of B. chinense significantly affected the structure and composition of the soil microbial community. Under B. chinense continuous monocropping, alpha diversity decreased, but this change could be alleviated by rotation or intercropping with corn. Similar results were also obtained in continuous monocropping of sugar beet and Coptis chinensis Franch., intercropping of potato with onion and tomato, intercropping of black pepper and vanilla , and rotation of Brassica vegetables with eggplant and of Pinellia ternata with wheat . Beta diversity showed that cropping practices had a strong influence on the soil microbial community; in other words, the use of different cropping systems may lead to significant differences in the structure of soil microbial communities .
Importantly, changes in microbial community structure and composition are usually associated with changes in plant metabolic capacity, biodegradation, disease inhibition, and other functions . In our study, continuous monocropping of B. chinense strongly reduced the abundance of beneficial microorganisms, such as Microvirga, Haliangium, Chaetomium, Mortierella, Preussia, and Cladorrhinum. These rhizosphere microorganisms play an important role in plant growth and the inhibition of pathogenic microorganisms [ – ]. By contrast, some potentially pathogenic microorganisms, such as Cercophora, Alternaria, Paraphoma, Cladosporium, Monographella, Hydropisphaera, and Colletotrichum, were significantly amplified. For example, Alternaria and Paraphoma can cause root rot of B. chinense , and Colletotrichum can infect leaves to produce leaf spots . In this study, ecological functions in the rhizosphere soil of B. chinense under different cultivation modes were predicted. Among fungi, the abundance of pathotrophs, pathotroph-symbiotrophs, and pathotroph-saprotrophs, which may cause plant diseases, increased significantly in continuous cropping compared with the other groups. Overall, continuous cropping of B. chinense resulted in decreases in pH, NO3−-N and Ava-K in the rhizosphere soil and in rhizosphere bacterial and fungal α-diversity, and the relative abundance of beneficial microorganisms was reduced; intercropping and rotation could alleviate these problems. Soil chemical properties, especially the contents of NH4+-N, SOM and Ava-K, influenced the microbial structure and composition of the B. chinense rhizosphere soil. These findings could provide a new basis for overcoming the problems associated with continuous cropping and promoting the development of the B. chinense planting industry by improving soil microbial communities. Additional file 1: Supplement Table 1.
High-throughput sequencing results for bacteria and fungi in the rhizosphere soil of Bupleurum chinense under different cropping practices. Additional file 2: Supplement Figure 1. Rarefaction curves of Bupleurum chinense samples under different cropping practices. (A) Bacteria, (B) Fungi.
Renewed calls for abortion-related research in the post-Roe era
Nearly 50 years after Roe versus Wade, the United States Supreme Court’s decision in Dobbs versus Jackson Women’s Health Organization unraveled the constitutional right to abortion, allowing individual states to severely restrict or ban the procedure. In response, leading medical, public health, and community organizations have renewed calls for research to elucidate and address the burgeoning social and medical consequences of new abortion restrictions ( ). Abortion research not only includes studies that establish the safety, quality, and efficacy of evidence-based abortion care protocols, but also encompasses studies on the availability of abortion care, the consequences of being denied an abortion, and the legal and social burdens surrounding abortion ( , ). The urgency of these calls for new evidence underscores the importance of ensuring that research in this area is conducted in an ethical and respectful manner, cognizant of the social, political, and structural conditions that shape reproductive health inequities and impact each stage of research, from protocol design to dissemination of findings. Research ethics relates to the moral principles undergirding the design and execution of research projects, and concerns itself with the technicalities of ethical questions related to the research process, such as informed consent, power relations, and confidentiality ( ). Critical insights and reflections from reproductive justice, community engagement, and applied ethics frameworks have bolstered existing research ethics scholarship and discourse by underscoring the importance of meaningful engagement with community stakeholders, bringing attention to overlapping structures of oppression, including racism and sexism, and the ways these structures are perpetuated in the research process ( ). Scholars have critiqued traditional research ethics models for being too narrowly focused on investigator expertise and conventional measures of scientific validity.
While helpful in some scenarios, this narrow focus can obscure the needs of minoritized communities with structural vulnerabilities and silence their voices across the research continuum. In essence, research can only be ethical when it prioritizes equity, justice, and respect for groups burdened with the potential to be most harmed during the research process. Considering the heightened challenges posed by the post- Roe era, the commentary that follows is a call for researchers, research institutions, funding agencies, Institutional Review Boards (IRBs) and other regulatory bodies to safeguard against potential research-related harms by (1) prioritizing the needs, concerns, and preferences of populations burdened by social and structural vulnerabilities, (2) promoting reproductive justice-oriented, community-engaged scholarship, and (3) providing evidence-based training and robust support for researchers. Given the history of medical exploitation and reproductive violence in communities with structural vulnerabilities, ethical and respectful research in the post- Roe environment requires prioritizing the voices of the most marginalized to mitigate iatrogenic research harms and promote reproductive health equity ( ). Early research on abortion focused on instances in which pregnancy terminations went horribly awry. Physicians published case reports detailing the management of septic, critically ill patients who risked their lives procuring illegal abortions ( ). As some states liberalized their abortion laws, other researchers focused their work on the public health impacts of safe and legal abortions enabled by better policies, techniques, and antibiotics ( , ). Their combined efforts eventually pushed professional medical and public health organizations to support abortion rights through advocacy and amicus curiae briefs filed in the United States Supreme Court cases Roe and Casey.
Legalized abortion opened new research avenues and sparked ethical debates regarding the social and legal complexities of biomedical research during pregnancy. Notably, concerns about the outcome of Roe and pressure from anti-abortion groups shaped the first federal “protections” governing research on pregnant patients—regulations first established in the 1970s that excluded pregnant women from clinical trials and created gaps in knowledge about prescription drug use during pregnancy and the postpartum period ( , ). In recent years, leading research and federal organizations have discussed the need to address these knowledge gaps and have called for a range of studies on reproductive and maternal health needs with an increased emphasis on the social, behavioral, biological, and environmental forces that shape health outcomes at the individual, local, state, and national levels ( , ). In response to these calls, equity-focused scholars have conducted a range of important studies that prioritize community perspectives and values ( ). Research on maternal and reproductive health requires considerable sensitivity, as it often involves meeting people in especially vulnerable moments. For example, studies on stillbirth may require clinicians to approach grieving parents after a pregnancy loss to obtain consent for fetal tissue sampling. Research on maternal morbidity and mortality often necessitates conversations with women after near-death experiences or with families who have lost loved ones in cases of maternal death ( ). Abortion research similarly involves these weighty social and emotional considerations, in addition to heightened ethical and legal concerns about stigma, confidentiality, trauma, and criminalization. In environments where abortion is criminalized and stigmatized, contemporary research ethics guidelines call for population-sensitive research practices to protect participants and communities that may face threats of persecution or harm ( ). 
Thus, examining how intersectional structures of oppression, stigma, and vulnerability influence abortion research is critical for advancing and informing research ethics practices and protocols in the context of reproductive and maternal health. Intersecting structures of oppression and research “vulnerability” Research ethics guidelines predicated on the assumption of participant autonomy obscure how structural issues threaten reproductive autonomy, perpetuate trauma and stigmatization, and give rise to significant moral distress in groups already burdened by poverty, stigma, and inequitable access to healthcare. Respectful and compassionate research requires an understanding of the ways in which intersecting, multidimensional structures of oppression shape participant-level vulnerability in research settings. Even in instances where research participants have given informed consent and assumed the individual risks associated with research involving sensitive information, researchers in the post- Roe environment have a moral and professional responsibility to grapple with the systems and structures that sharpen participant vulnerability and research risks. When individuals occupy multiple marginalized identities, they may be rendered more vulnerable in settings where social and structural forces collide to limit their agency, visibility, and voice ( ). However, the traditional approach to categorical research protections outlined in the Belmont Report classifies certain groups as vulnerable based on singularly defined identities, namely, incarcerated individuals, children, and people with disabilities. Recent scholarship has expanded the concept of vulnerability to include the intersectional experiences of communities burdened by excessive research risks.
Pregnant women were officially removed as a vulnerable population under the Revised Common Rule in 2017, a shift to ensure that they were justly represented in biomedical research and development and were able to reap the benefits of scientific advancement ( ). However, this adjustment preceded the complications posed by the end of the constitutional right to abortion, including threats of bodily harm, stigma, and criminalization. These threats are particularly salient for Black women living in the United States, who are three times more likely to die from preventable pregnancy complications than white women. Racial disparities in maternal health outcomes are amplified by other forms of oppression, such as lack of access to reproductive healthcare, structural racism, and lack of social support, which make women more vulnerable to harm during pregnancy ( ). Furthermore, recent estimates indicate that abortion bans have the potential to increase maternal mortality by 21% overall and up to 33% among Black Americans. Additionally, women who are denied abortions experience a cascade of economic hardships and serious health complications associated with carrying a pregnancy to term ( ). Before Dobbs, Texas Senate Bill 8 offered a glimpse into the dangerous future of abortion bans and raised questions about which communities were disproportionately harmed by abortion restrictions and increasingly made vulnerable by the research process ( ). Previous scholarship reveals that women in minoritized communities may experience excessive research risks and barriers to meaningful research participation because of preexisting comorbidities, environmental factors, and structural inequities ( , , ). These concerns are heightened in states and territories that restrict or ban abortion. 
Notably, eroding access to abortion care has the most profound and pernicious ramifications for Black families, as Black people are disproportionately burdened by various forms of economic and social inequalities that diminish birth equity and just access to all forms of reproductive healthcare ( , ). As an interdisciplinary group of scholars and practitioners with a focus on reproductive health equity, we raise important questions related to power asymmetries between those conducting research and the individuals volunteering as participants. Our concerns include: how might data intended to better understand various birth control methods be safeguarded from surveillance and criminalization? How might vulnerable populations be prioritized in the current political climate? And how might the conceptual frameworks, underlying assumptions, and language used by researchers perpetuate harmful narratives about sexuality, pregnancy, birth control, and abortion? In light of these questions, we understand research as a powerful tool to advance social justice. We argue that the inclusion of vulnerable groups in research can be a pathway to affirming the rights of all people to partake in social life, public expression, and bodily freedom. Individuals can share invaluable insights derived from navigating their marginalized social positionality, which otherwise may be undervalued, misunderstood, or concealed. Most evidently, research findings can mobilize healthcare systems to better meet the needs of populations who stand to benefit most from new understandings and health innovations. It is in the spirit of balancing these potential benefits and risks that the authors offer these considerations. Considerations for ethically responsible abortion research Abortion restrictions heighten risks for all parties involved in scientific research. However, it is imperative to recognize that research participants are especially vulnerable to research-related harms in the post- Roe era. 
Conducting ethical and respectful abortion research requires investigators to focus on the needs and preferences of marginalized communities across the research continuum, starting with the development of research questions and continuing through the study development, implementation, and dissemination of research findings. In the absence of formal guidance on abortion-related research ethics, the recommendations that follow have been shaped by the authors’ collective experiences working with structurally vulnerable and disadvantaged populations. The considerations presented in the following sections are intended to highlight the value of meaningful community engagement, dialogue, and collaboration when engaging participants burdened by social and structural vulnerabilities. Community and stakeholder engagement The equitable and just engagement of individuals and communities in abortion research requires working with community leaders and local organizations to improve ethical decision-making. Sophisticated engagement strategies, especially those that elevate the lived experiences of community members, are critical for understanding and mitigating barriers to reproductive health research participation ( ). Community-engaged research prioritizes an iterative, dynamic research process with heightened attention to the needs (i.e., perceived and actual), realities, and experiences of local stakeholders who ultimately shape the research design, implementation, and dissemination of findings ( , ). Notably, community-engaged frameworks shift the emphasis of research away from the benefits received by the research team and instead prioritize the needs and preferences of study participants ( ). Scott, Bray, McLemore, and other scholars highlight the urgent need for collaborative, community-engaged research marked by “radical curiosity and courage” to advance health equity and reproductive justice ( ). 
We follow their lead, embracing cultural humility and meaningful community partnerships, to advocate for a braver, bolder approach to abortion research and reproductive ethics. While traditional research ethics models focus heavily on institutional- and investigator-driven values, we advocate for an expanded understanding of scholarship that accurately reflects and elevates the voices and values of research participants. Risks to participants with social and structural vulnerabilities Research with communities burdened with social and structural vulnerabilities has given rise to unique ethical challenges that require context-specific research protection and stakeholder engagement. Psychological, legal, social, and economic harms are among the many risks relevant to research in post-Roe environments ( , ). Volunteers in abortion research may face stigma, criminalization, discrimination, health surveillance, and iatrogenic harms. These considerations are especially applicable to abortion research that employs wastewater metabolite testing, health apps for tracking, and interview and focus group research to understand the experiences of people trying to access abortion ( , ). In light of these risks, researchers should seek guidance from trustworthy stakeholders and local organizations to ensure that their involvement and visibility in the community does not exacerbate risks for already vulnerable groups. Abortion research participants may be hesitant to disclose the location and state of abortion access because of the potential consequences. Indeed, researchers should evaluate relevant legal risks when working with communities living in areas with restricted abortion access and plan to anonymize or minimize location data collection accordingly. Future research is needed to elicit feedback from community stakeholders to understand how various research settings and social contexts influence the experiences and safety of research participants ( ). 
It is especially important to engage in discourse with community stakeholders to understand their interpretation of the current political landscape as it relates to reproductive healthcare so that researchers can avoid perpetuating harm. Privacy and confidentiality Prior studies involving individuals with substance use disorders and people who use drugs remind us that privacy and confidentiality concerns are critically important to take into account when data can be used to criminalize and stigmatize individuals and communities ( ). Strategies that have been used to enhance privacy and confidentiality include: (1) Certificates of Confidentiality (CoC) which protect the privacy of research participants by restricting access to identifiable, sensitive study information so that it may only be accessed by members of the research team ( ); (2) Protocols that require the anonymization and minimization of nonessential sensitive personal health information; (3) Generation of synthetic datasets that mimic the structure and statistical distribution of organically obtained study data while protecting the identity and private health information of the research participants ( ); (4) “Shield laws” that protect abortion seekers and their helpers from state interference and other forms of legal harm ( ). Notably, the Department of Health and Human Services (HHS) recently proposed rule changes intended to strengthen the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule to shield private health information related to pregnancy and reproductive health from law enforcement officials ( ). Legislators in some states are discussing broader information privacy laws to protect commercially obtained data such as those collected in period-tracking apps. Some states have passed “shield laws” intended to protect abortion providers, patients, and their helpers, but these laws do not include specific protections for persons involved in abortion research ( ). 
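Two of the strategies listed above, anonymization/minimization (2) and synthetic datasets (3), can be illustrated in a few lines. This is a hedged sketch, not a prescribed protocol: the field names, the region generalization, and the independent-marginals synthesis are illustrative assumptions, and a real study would pair such steps with a Certificate of Confidentiality and a formal disclosure-risk review.

```python
import hashlib
import random

# Illustrative record format -- all field names and values are hypothetical.
records = [
    {"name": "A. Smith", "zip": "30301", "state": "GA", "age": 27, "sought_care": True},
    {"name": "B. Jones", "zip": "73301", "state": "TX", "age": 34, "sought_care": False},
]

SALT = "rotate-and-store-separately"  # in practice, a secret kept apart from the data


def minimize(record):
    """Strategy (2): drop direct identifiers, generalize location and age."""
    return {
        # salted pseudonym instead of a name; the salt never travels with the data
        "pid": hashlib.sha256((SALT + record["name"]).encode()).hexdigest()[:12],
        "region": "South",  # coarsen state/zip to a broad region (illustrative rule)
        "age_band": f"{(record['age'] // 10) * 10}s",
        "sought_care": record["sought_care"],
    }


minimized = [minimize(r) for r in records]


def synthesize(rows, n, seed=0):
    """Strategy (3): draw each field independently from its observed marginal
    distribution, so no synthetic row corresponds to a real participant."""
    rng = random.Random(seed)
    fields = ["region", "age_band", "sought_care"]
    return [{f: rng.choice([row[f] for row in rows]) for f in fields} for _ in range(n)]


synthetic = synthesize(minimized, n=5)
```

Sampling each field independently destroys cross-field correlations, which is exactly what makes the synthetic rows safer to share but also limits their analytic value; more faithful generators trade some of that protection back.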
Ultimately, researchers and funding agencies must not only consider how to protect private health information, but also how data generated in abortion research will be communicated and disseminated to the public. Communication and dissemination Ethical scientific research requires effective communication and timely dissemination of findings to individuals and communities most affected by a particular health issue. Disseminating data to communities is critical for strengthening public trust in clinicians, public health workers, and healthcare systems ( , ). A thorough, evidence-based understanding of health issues is also integral to advocating for policy changes and interventions that promote reproductive and maternal health equity. This is especially true when a health issue is highly stigmatized or politically charged, as in the case of abortion. In the current political context, in which abortion research generates partisan divides and purposeful disinformation is rampant, it is critically important to consider how study data are communicated and presented to the public. Ethical attention to abortion research involves engaging trusted community leaders and stakeholders to inform equity-centered research communication. This can be accomplished by developing and committing to communication strategies that outline a plan for if and when research findings are misinterpreted or weaponized against marginalized communities. Research ethics guidelines predicated on the assumption of participant autonomy obscure how structural issues threaten reproductive autonomy, perpetuate trauma and stigmatization, and give rise to significant moral distress in groups already burdened by poverty, stigma, and inequitable access to healthcare. Respectful and compassionate research requires an understanding ways in which intersecting, multidimensional structures of oppression shape participant-level vulnerability in research settings. 
Even in instances where research participants have given informed consent and assumed the individual risks associated with research involving sensitive information, researchers in the post- Roe environment have a moral and professional responsibility to grapple with the systems and structures that sharpen participant vulnerability and research risks. When individuals occupy multiple marginalized identities, they may be rendered more vulnerable in settings where social and structural forces collide to limit their agency, visibility, and voice ( ). However, the traditional approach to categorical research protections outlined in the Belmont Report classifies certain groups as vulnerable based on singularly defined identities, namely, incarcerated individuals, children, and people with disabilities. Recent scholarship has expanded the concept of vulnerability to include the intersectional experiences of communities burdened by excessive research risks. Pregnant women were officially removed as a vulnerable population under the Revised Common Rule in 2017, a shift to ensure that they were justly represented in biomedical research and development and were able to reap the benefits of scientific advancement ( ). However, this adjustment preceded the complications posed by the end of the constitutional right to abortion, including threats of bodily harm, stigma, and criminalization. These threats are particularly salient for Black women living in the United States, who are three times more likely to die from preventable pregnancy complications than white women. Racial disparities in maternal health outcomes are amplified by other forms of oppression, such as lack of access to reproductive healthcare, structural racism, and lack of social support, which make women more vulnerable to harm during pregnancy ( ). Furthermore, recent estimates indicate that abortion bans have the potential to increase maternal mortality by 21% overall and up to 33% among Black Americans. 
Additionally, women who are denied abortions experience a cascade of economic hardships and serious health complications associated with carrying a pregnancy to term ( ). Before Dobbs, Texas Senate Bill 8 offered a glimpse into the dangerous future of abortion bans and raised questions about which communities were disproportionately harmed by abortion restrictions and increasingly made vulnerable by the research process ( ). Previous scholarship reveals that women in minoritized communities may experience excessive research risks and barriers to meaningful research participation because of preexisting comorbidities, environmental factors, and structural inequities ( , , ). These concerns are heightened in states and territories that restrict or ban abortion. Notably, eroding access to abortion care has the most profound and pernicious ramifications for Black families, as Black people are disproportionately burdened by various forms of economic and social inequalities that diminish birth equity and just access to all forms of reproductive healthcare ( , ). As an interdisciplinary group of scholars and practitioners with a focus on reproductive health equity, we raise important questions related to power asymmetries between those conducting research and the individuals volunteering as participants. Our concerns include: how might data intended to better understand various birth control methods be safeguarded from surveillance and criminalization? How might vulnerable populations be prioritized in the current political climate? And how might the conceptual frameworks, underlying assumptions, and language used by researchers perpetuate harmful narratives about sexuality, pregnancy, birth control, and abortion? In light of these questions, we understand research as a powerful tool to advance social justice. 
We argue that the inclusion of vulnerable groups in research can be a pathway to affirming the rights of all people to partake in social life, public expression, and bodily freedom. Individuals can share invaluable insights derived from navigating their marginalized social positionality, which otherwise may be undervalued, misunderstood, or concealed. Most evidently, research findings can mobilize healthcare systems to better meet the needs of populations who stand to benefit most from new understandings and health innovations. It is in the spirit of balancing these potential benefits and risks that the authors offer these considerations. Abortion restrictions heighten risks for all parties involved in scientific research. However, it is imperative to recognize that research participants are especially vulnerable to research-related harms in the post- Roe era. Conducting ethical and respectful abortion research requires investigators to focus on the needs and preferences of marginalized communities across the research continuum, starting with the development of research questions and continuing through the study development, implementation, and dissemination of research findings. In the absence of formal guidance on abortion-related research ethics, the recommendations that follow have been shaped by the authors’ collective experiences working with structurally vulnerable and disadvantaged populations. The considerations presented in the following sections are intended to highlight the value of meaningful community engagement, dialogue, and collaboration when engaging participants burdened by social and structural vulnerabilities. Community and stakeholder engagement The equitable and just engagement of individuals and communities in abortion research requires working with community leaders and local organizations to improve ethical decision-making. 
Sophisticated engagement strategies, especially those that elevate the lived experiences of community members, are critical for understanding and mitigating barriers to reproductive health research participation ( ). Community-engaged research prioritizes an iterative, dynamic research process with heightened attention to the needs (i.e., perceived and actual), realities, and experiences of local stakeholders who ultimately shape the research design, implementation, and dissemination of findings ( , ). Notably, community-engaged frameworks shift the emphasis of research away from the benefits received by the research team and instead prioritize the needs and preferences of study participants ( ). Scott, Bray, McLemore, and other scholars highlight the urgent need for collaborative, community-engaged research marked by “radical curiosity and courage” to advance health equity and reproductive justice ( ). We follow their lead, embracing cultural humility and meaningful community partnerships, to advocate for a braver, bolder approach to abortion research and reproductive ethics. While traditional research ethics models focus heavily on institutional- and investigator-driven values, we advocate for an expanded understanding of scholarship that accurately reflects and elevates the voices and values of research participants. Risks to participants with social and structural vulnerabilities Research with communities burdened with social and structural vulnerabilities has given rise to unique ethical challenges that require context-specific research protection and stakeholder engagement. Psychological, legal, social, and economic harms are among the many risks relevant to research in post-Roe environments ( , ). Volunteers in abortion research may face stigma, criminalization, discrimination, health surveillance, and iatrogenic harms. 
These considerations are especially applicable to abortion research that employs wastewater metabolite testing, health apps for tracking, and interview and focus group research to understand the experiences of people trying to access abortion ( , ). In light of these risks, researchers should seek guidance from trustworthy stakeholders and local organizations to ensure that their involvement and visibility in the community does not exacerbate risks for already vulnerable groups. Abortion research participants may be hesitant to disclose the location and state of abortion access because of the potential consequences. Indeed, researchers should evaluate relevant legal risks when working with communities living in areas with restricted abortion access and plan to anonymize or minimize location data collection accordingly. Future research is needed to elicit feedback from community stakeholders to understand how various research settings and social contexts influence the experiences and safety of research participants ( ). It is especially important to engage in discourse with community stakeholders to understand their interpretation of the current political landscape as it relates to reproductive healthcare so that researchers can avoid perpetuating harm. Privacy and confidentiality Prior studies involving individuals with substance use disorders and people who use drugs remind us that privacy and confidentiality concerns are critically important to take into account when data can be used to criminalize and stigmatize individuals and communities ( ). 
Strategies that have been used to enhance privacy and confidentiality include: (1) Certificates of Confidentiality (CoC) which protect the privacy of research participants by restricting access to identifiable, sensitive study information so that it may only be accessed by members of the research team ( ); (2) Protocols that require the anonymization and minimization of nonessential sensitive personal health information; (3) Generation of synthetic datasets that mimic the structure and statistical distribution of organically obtained study data while protecting the identity and private health information of the research participants ( ); (4) “Shield laws” that protect abortion seekers and their helpers from state interference and other forms of legal harm ( ). Notably, the Department of Health and Human Services (HHS) recently proposed rule changes intended to strengthen the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule to shield private health information related to pregnancy and reproductive health from law enforcement officials ( ). Legislators in some states are discussing broader information privacy laws to protect commercially obtained data such as those collected in period-tracking apps. Some states have passed “shield laws” intended to protect abortion providers, patients, and their helpers, but these laws do not include specific protections for persons involved in abortion research ( ). Ultimately, researchers and funding agencies must not only consider how to protect private health information, but also how data generated in abortion research will be communicated and disseminated to the public. Communication and dissemination Ethical scientific research requires effective communication and timely dissemination of findings to individuals and communities most affected by a particular health issue. 
Disseminating data to communities is critical for strengthening public trust in clinicians, public health workers, and healthcare systems ( , ). A thorough, evidence-based understanding of health issues is also integral to advocating for policy changes and interventions that promote reproductive and maternal health equity. This is especially true when a health issue is highly stigmatized or politically charged, as in the case of abortion. In the current political context, in which abortion research generates partisan divides and purposeful disinformation is rampant, it is critically important to consider how study data are communicated and presented to the public. Ethical attention to abortion research involves engaging trusted community leaders and stakeholders to inform equity-centered research communication. This can be accomplished by developing and committing to communication strategies that outline a plan for if and when research findings are misinterpreted or weaponized against marginalized communities. The equitable and just engagement of individuals and communities in abortion research requires working with community leaders and local organizations to improve ethical decision-making.

Developing, implementing, and translating ethically sound abortion research policies and procedures calls for concrete and tailored strategies to advance equitable access to scientific discovery and translation.
Promoting the ethical inclusion of minoritized groups in reproductive and maternal health research requires specific attention to a myriad of issues, including privacy and fairness in the use of abortion information, informed consent, and the return of results to participants. Further, dedicated attention to the historical realities, contextual challenges, and concerns of diverse research communities is critical to promoting equity in research. Fostering research justice also involves demonstrating optimal respect for reproductive preferences, lived experiences, overlapping social identities, and the moral agency of minority women ( , ). Conceptually aligning research with reproductive justice, birth justice, and respectful maternity care frameworks fosters analytic liberation and bolsters scientific rigor ( ). Centering equity and respect in research also has salient implications for equipping future scientists, investigators, and clinician scholars with the knowledge, skills, and structural competency to disrupt longstanding oppression in the research enterprise that prevents certain topics from being prioritized, namely those affecting the health and well-being of Black women and other populations made vulnerable by overlapping systems of oppression. Furthermore, respectful and ethical research highlights the importance of bioethicists with empirical and normative training leading robust discourse around abortion-related research and the healthcare needs of Black women. To safeguard against research-related harms in the post-Roe era, it is essential that funding agencies, research institutions, IRBs, and investigators elucidate the needs, values, and preferences of marginalized communities across the research continuum.
Insights from existing training programs, funding mechanisms, and organizations are foundational for informing broader research ethics frameworks that responsibly address the complexities that arise in maternal and reproductive health research, especially related to abortion ( , , ). Ethically responsible research in the post-Roe era, especially research with minoritized communities, demands equity, justice, and respect. The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. SS: Conceptualization, Writing – original draft, Writing – review & editing. AA: Writing – review & editing. RD: Writing – review & editing. TM: Writing – review & editing. FL: Writing – review & editing. FF: Conceptualization, Writing – original draft, Writing – review & editing.
Biochar-based organic fertilizer application promotes the alleviation of tobacco (
28d5f941-ae13-4a25-8b67-4f94c3528391
11871607
Microbiology[mh]
Intensive crop monoculture has become a widespread cropping regime in agricultural production because of the global shortage of arable land, the drive of commercial interests, and a lack of rational cropping concepts . However, a variety of crops, such as cotton, soybean, Panax notoginseng , tobacco ( Nicotiana tabacum L.), peanut ( Arachis hypogaea L.), and cucumber ( Cucumis sativus L.), are sensitive and poorly adapted to the continuous cropping (CC) pattern: their yield and quality decline drastically, accompanied by outbreaks of soil-borne diseases, sometimes to disastrous levels after only a few years of CC . This phenomenon is also known as “continuous cropping obstacles” (CCO) or “replant diseases” , and the issues associated with CCO have attracted extensive attention because they seriously jeopardize soil ecosystem health and agricultural sustainability. It is well documented that over 20% of arable land resources in China have been adversely affected by crop CCO, leading to low agricultural yields, heavy economic losses, and poor soil quality . Deterioration of rhizosphere soil microecology is considered the dominant cause of crop CCO, including the degradation of soil physicochemical properties, the deficiency and imbalance of nutrient elements, the aggravation of plant allelopathic autotoxicity, and the disruption of rhizosphere microbial community composition and structure . The diversity and complexity of these causes make it highly challenging to solve the problem thoroughly, given the range of crop varieties and their variable habitats. Although most plant diseases can be quickly and effectively controlled with chemical fumigants and pesticides, which remain popular in current agricultural practice, their excessive use has led to a series of environmental problems .
Therefore, environmentally friendly and sustainable measures are urgently needed, for instance, the adoption of crop rotation regimes and the application of biochar, biocontrol agents, and (bio-)organic fertilizers, among others . China ranks first in producing and consuming tobacco products, accounting for one-third of the global tobacco leaf yield (over 2.2 million tons per year). Additionally, Yunnan Province accounts for around 45% of tobacco leaf production in China and has historically been one of the core tobacco production regions . The tobacco business contributes substantially to national revenue and supports millions of tobacco growers in China, for whom the sale of flue-cured tobacco leaves is the main income source . As a member of the plant family Solanaceae, tobacco is sensitive to intensive monoculture, and long-term tobacco CC results in considerable yield and economic losses every year . Rotation is currently the most direct and effective approach to managing tobacco CCO; for example, in a two-year tobacco–rice ( Oryza sativa L.) rotation, tobacco is cultivated in the first year and rice in the second. However, this approach is only applicable in large-scale intensive production regions with convenient irrigation systems, and it usually requires greater economic input because the field tillage pattern undergoes a major shift. Therefore, there is a dire need to explore sustainable and environmentally friendly solutions to alleviate tobacco CCO within the limits placed on chemical pesticide usage by environmental concerns. Organic fertilizers (OFs) are favored by researchers and tobacco farmers because they offer excellent plant growth-promotion functions, low environmental risk, and the recycling of resources on farmland.
It has been well reported across many studies that OF amendment promotes the alleviation of crop CCO by improving soil fertility, enzyme activities, organic matter (OM) content, carbon (C) and nitrogen (N) stocks, macro-aggregate formation, and microbial community structure . Specifically, our previous study demonstrated the benefits of biochar and vermicompost application in ameliorating tobacco CCO, which could be at least partly explained by the improvement of soil physicochemical properties and bacterial community structure . Research on the application of biochar-based organic fertilizers (BFs), with biochar and vermicompost as the main components, remains scarce in the tobacco cultivation field. Although many studies have preliminarily revealed the associated action mechanisms, most merely focus on changes in soil characteristics, elemental concentrations, and the composition and diversity of microbial communities. The relationships between soil properties, tobacco yield and economic value, and the key factors triggering the alleviation of tobacco CCO remain poorly understood. On this basis, we explored the effects of different novel types of BFs on tobacco CCO, elucidated more detailed and targeted potential mechanisms, and further identified the key contributors through correlation and explanatory power analyses. Meanwhile, two common local rotation regimes involving tobacco–broad bean ( Vicia faba L.) and tobacco–oilseed rape ( Brassica napus L.) were included alongside the fertilizer treatments in a two-year field experiment, which allowed us to verify the effects of BF application on tobacco CCO across study years and cropping regimes. Compared with tobacco–rice rotation, the rotation of tobacco with broad bean and oilseed rape is completed within one year.
The objectives of the current study were to disentangle the changes in soil chemical properties and in the composition and structure of rhizosphere microbial communities under CF and BF application. Once these change patterns were characterized, the key factors contributing to the alleviation of tobacco CCO were identified. We hypothesized that BF addition can efficiently improve tobacco yield and economic value under CC conditions, and that these positive effects can be largely ascribed to the amelioration of soil chemical characteristics and shifts in the rhizosphere microbial communities. Field experiment design A two-year field trial was conducted from 2021 to 2022 in Yuxi City, Yunnan Province of Southwest China (24°38′29″N, 102°52′23″E), with an altitude of 1740 m, an annual mean temperature of 16 ℃, and a precipitation of 890 mm. The study field was red soil, which is the predominant soil type for tobacco plantations in Yunnan. Tobacco cultivar “Zhusha No.2” had been continuously monocultured for three years in the study field before the start of our experiment. Tobacco rotations with broad bean and oilseed rape were executed separately in the field. Each rotation regime had four fertilizer treatments in a split-plot design: wood, rice straw, and compound biochar-based organic fertilizers (WBF, RBF, and CBF) and chemical fertilizer (CF). There were four replicates of each treatment, and each plot had a size of 66 m 2 . BFs and CF were applied into planting holes as base manure at rates of 1650 and 225 kg ha –1 , respectively, before tobacco seedling transplantation at a spacing of 1.20 × 0.55 m in April 2021 and 2022. Tobacco seeds of cultivar “Zhusha No.2” were provided by Yuxi Zhongyan Seed Co., Ltd, and then sown and cultivated in a tobacco floating-seedling system at the Yunnan Yuxi Tobacco Science Institute. Topdressing was applied about four times during the entire tobacco growth period according to the local optimal tobacco production standard.
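For orientation, the per-plot and per-hole amounts implied by these application rates follow directly from the plot size and plant spacing given above. A small sketch (the function and variable names are illustrative, not part of the study protocol):

```python
def per_plot_and_per_hole(rate_kg_per_ha, plot_m2=66.0, row_m=1.20, plant_m=0.55):
    """Convert a per-hectare base-manure rate into per-plot and per-hole amounts.

    One hectare is 10,000 m^2; each plant occupies row_m * plant_m of ground.
    """
    plot_kg = rate_kg_per_ha * plot_m2 / 10_000
    plants_per_ha = 10_000 / (row_m * plant_m)
    hole_kg = rate_kg_per_ha / plants_per_ha
    return plot_kg, hole_kg

bf_plot, bf_hole = per_plot_and_per_hole(1650)  # biochar-based organic fertilizer
cf_plot, cf_hole = per_plot_and_per_hole(225)   # chemical fertilizer
```

At 1650 kg ha⁻¹, each 66 m² plot receives about 10.9 kg, or roughly 0.11 kg per planting hole at the 1.20 × 0.55 m spacing.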
In October, we planted broad bean and oilseed rape in the corresponding rotation fields after the tobacco harvest. Broad bean and oilseed rape seeds were provided by Yuxi Sannong Plateau-Characteristic Modern Agriculture Co., Ltd. The field soil and fertilizer properties are presented in Tables and , respectively. Tobacco plant and soil sampling In July 2022, tobacco morbidity was recorded in each plot by counting the number of tobacco plants with typical disease symptoms, such as black shank, root black rot, potato virus Y, tomato spotted wilt virus, and weather fleck, among the total number of tobacco plants chosen for evaluation . To reduce personal error, two individuals with disease investigation experience independently performed the disease observation in each plot, and the average disease rate was calculated. In July 2021 and 2022, five representative tobacco plants at the topping stage were selected in each plot for agronomic trait analysis with a tape measure, including stem girth, plant height, maximum leaf length and width, and the number of productive leaves . The selected plants were dug up with their roots for fresh biomass determination and dried at 65 ℃ for dry biomass analysis. Dried tobacco roots, stems, and leaves were digested with HNO 3 -HClO 4 -H 2 O 2 solutions before element (Ca, P, S, Cr, V, Cu, Zn) concentration measurement by an inductively coupled plasma optical emission spectrometer (ICP-OES, iCAP 6000 series, Thermo Scientific, United States) . Correspondingly, the rhizosphere soil of the five chosen plants was collected by the root shaking method to form a composite sample for each replicate . In total, we obtained 64 rhizosphere soil samples (4 fertilization treatments × 4 replicates × 2 rotation regimes × 2 cropping years).
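The plot-level morbidity described above reduces to an incidence percentage averaged over the two observers. A minimal sketch (the counts are hypothetical, not study data):

```python
def average_disease_rate(diseased_counts, plants_scored):
    """Disease incidence (%) per observer, averaged across observers."""
    rates = [100.0 * d / n for d, n in zip(diseased_counts, plants_scored)]
    return sum(rates) / len(rates)

# Observer A scored 3 of 40 plants as symptomatic, observer B scored 5 of 40.
rate = average_disease_rate([3, 5], [40, 40])
```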
A subset of the fresh soil samples was stored at 4 ℃ for measurement of soil NH 4 + -N and microbial biomass C and N (MBC and MBN) contents; a part was stored at − 80 ℃ until soil DNA extraction followed by microbial community analysis; and the remainder was air-dried, ground, and sieved for chemical property determination. Meanwhile, the tobacco leaves of another ten plants in each plot were labeled and collected at four separate times at 10-day intervals during the harvest stage from July to August, based on the maturity of the leaves. All mature leaves were flue-cured in sequential batches in three steps, the yellowing, color-fixing, and stem-drying periods, according to the “Rules for curing technique of flue-cured tobacco” . The curing conditions were temperature 32–45 ℃, humidity 70–98%, and duration 24–72 h in the yellowing period; 45–55 ℃, 30–70%, and 20–40 h in the color-fixing period; and 55–75 ℃, 30%, and 16–36 h in the stem-drying period. According to the national standard “Flue-cured tobacco” , the flue-cured tobacco was classified into 10 grades based on leaf maturity, structure, body, oil, color intensity, length, waste, and injury with the help of professional graders. We separately recorded the dry weight of all the flue-cured tobacco of each grade for the calculation of economic parameters as follows.

1 $$\text{Yield (kg ha}^{-1}\text{)} = \frac{\text{total dry weight of flue-cured leaves per plot}}{\text{plot area}}$$

2 $$\text{Output value (yuan ha}^{-1}\text{)} = \frac{\sum \text{(dry weight of each grade} \times \text{purchase price of that grade)}}{\text{plot area}}$$

3 $$\text{Average price (yuan kg}^{-1}\text{)} = \frac{\text{output value}}{\text{yield}}$$

4 $$\text{Proportion of superior tobacco (\%)} = \frac{\text{weight of superior-grade leaves}}{\text{total leaf weight}} \times 100$$

Six C3F-grade flue-cured tobacco leaves were randomly selected in each plot, smashed into small pieces, and filtered through a 0.25-mm mesh screen. The routine chemical components (total sugar, reducing sugar, total N, nicotine, K 2 O, and chlorine) were measured using the continuous flow method (SEAL AA3, Germany) . Analysis of soil chemical properties Soil pH (soil: water = 1: 2.5, w/v) was measured with the glass electrode method .
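A minimal sketch of the economic parameters, assuming their conventional definitions (yield per unit area, output value summed over grades, average price as value over yield, and superior proportion by weight); the grade names and purchase prices below are illustrative assumptions, not the actual grading table:

```python
def economic_parameters(grade_weight_kg, price_yuan_per_kg,
                        plot_area_ha, superior_grades):
    """Yield, output value, average price, and superior-tobacco proportion.

    grade_weight_kg: dry weight of flue-cured leaves per grade for one plot.
    superior_grades: the grades counted as 'superior' for the proportion metric.
    """
    total_kg = sum(grade_weight_kg.values())
    yield_kg_ha = total_kg / plot_area_ha
    value_yuan_ha = sum(w * price_yuan_per_kg[g]
                        for g, w in grade_weight_kg.items()) / plot_area_ha
    avg_price = value_yuan_ha / yield_kg_ha                      # yuan per kg
    superior_pct = 100.0 * sum(grade_weight_kg[g]
                               for g in superior_grades) / total_kg
    return yield_kg_ha, value_yuan_ha, avg_price, superior_pct

# A 66 m^2 plot is 0.0066 ha; grades and prices here are made up for illustration.
y, v, p, s = economic_parameters({"C3F": 8.0, "B2F": 4.0},
                                 {"C3F": 30.0, "B2F": 20.0},
                                 0.0066, {"C3F"})
```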
Soil ammonium (NH 4 + ) and nitrate (NO 3 ‒ ) concentrations in KCl extracts were assessed colorimetrically with an ultraviolet spectrophotometer (UV-1890, Daojin Instrument Co., Ltd., Jiangsu, China) . After extraction with NaHCO 3 solution, soil available P (AP) was detected with a UV spectrophotometer by the molybdenum blue method (UV-1890, Daojin Instrument Co., Ltd., China) . Soil available K (AK) was evaluated after neutral NH 4 OAC extraction using an atomic absorption spectrometer (AAS, Analytik Jena novAA 300, Germany) . Soil OM and total organic carbon (TOC) were determined by the K 2 Cr 2 O 7 –H 2 SO 4 colorimetric method, and total carbon and nitrogen (TC and TN) were assayed by the combustion method using an elemental analyzer (Elementar Vario EL Cube, Germany) . The chloroform fumigation extraction procedure with 0.5 M K 2 SO 4 solution was used to measure soil MBC and MBN contents . Quantitative polymerase chain reaction (qPCR) assay for tobacco pathogens The absolute abundances of two pathogens causing tobacco black shank and root rot diseases, Phytophthora nicotianae and Fusarium oxysporum , were determined in rhizosphere soil using a SYBR Green assay with the primers Pn3 (forward, 5’-GACAAACCAGTCGCCAATTT-3’; reverse, 5’-TGAACGCATATTGCACTTCC-3’) and JBR (forward, 5’-CATACCACTTGTTGTCTCGGC-3’; reverse, 5’-GAACGCGAATTAACGCGAGTC-3’), respectively . Standard curves were constructed from 10-fold serial dilutions of a plasmid containing a fragment copy of the target gene from each pathogen. A serial dilution from 10 8 to 10 5 gene copies µL ‒1 was used as the standard for Phytophthora nicotianae , with an amplification efficiency of 90.9%, while it was from 10 7 to 10 2 gene copies µL ‒1 for Fusarium oxysporum , with an amplification efficiency of 90.4%.
Briefly, the qPCR assays were performed with a T100 Thermal Cycler (Bio-Rad, United States) in 20 µL reaction mixtures containing 1 µL of template, 10 µL of Taq Plus Master Mix (2×), 0.8 µL of each primer, and 7.4 µL of distilled water. The thermal cycling procedure consisted of an initial denaturation at 95 °C for 5 min, followed by 35 cycles of denaturation (95 °C for 30 s), annealing (58 °C for 30 s), and extension (72 °C for 60 s). Each sample was replicated three times. A melt curve was run at the end of the PCR process to verify the specificity of the amplified fragments. Finally, the absolute abundances of the two pathogens were calculated from their Ct values and the standard curves and expressed as copies g ‒1 soil. Soil DNA extraction, PCR amplification, and high-throughput sequencing Based on the tobacco growth and yield results described above, rhizosphere bacterial and fungal community diversity in the CBF and CF treatments was analyzed to elucidate the potential microbial mechanisms. Soil genomic DNA was extracted using the Fast DNA ® Spin Kit for Soil (MP Biomedicals, Solon, OH, USA), and the concentration and purity of the DNA were evaluated using a micro-spectrophotometer (Nano-300, Allsheng, Hangzhou, China) and 1.0% agarose gel electrophoresis. The V3-V4 region of the bacterial 16S rRNA gene was amplified with the primers 338F (5’-ACTCCTACGGGAGGCAGCAG-3’) and 806R (5’-GGACTACHVGGGTWTCTAAT-3’), and the ITS1 region of fungi was amplified using ITS1F (5’-CTTGGTCATTTAGAGGAAGTAA-3’) and ITS2R (5’-GCTGCGTTCTTCATCGATGC-3’) . The purified PCR products were pooled (equimolar) and sequenced on the Illumina MiSeq PE300 platform (Illumina, San Diego, USA) by Majorbio Bio-Pharm Technology Co. Ltd. (Shanghai, China). We obtained a total of 733,616 and 738,898 high-quality bacterial and fungal sequences, respectively, with average counts per sample of 45,851 (range 37,145–53,237) and 46,181 (range 36,245–64,209).
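The back-calculation of pathogen abundance from Ct values and the plasmid standard curve can be sketched in plain Python. This is a minimal least-squares fit; the factor converting copies per reaction into copies per gram of soil (which depends on DNA elution volume, template dilution, and soil mass extracted) is left as a user-supplied parameter:

```python
import math

def fit_standard_curve(copies, ct):
    """Least-squares fit of Ct versus log10(copy number).

    Returns the slope, intercept, and amplification efficiency
    (a slope near -3.32 corresponds to 100% efficiency).
    """
    x = [math.log10(c) for c in copies]
    n = len(x)
    mx, my = sum(x) / n, sum(ct) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, ct))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    efficiency = 10 ** (-1 / slope) - 1  # as a fraction, e.g. 0.909 = 90.9%
    return slope, intercept, efficiency

def copies_per_g_soil(ct, slope, intercept, factor=1.0):
    """Back-calculate absolute abundance from a sample Ct value.

    `factor` scales copies per reaction to copies per gram of soil.
    """
    return factor * 10 ** ((ct - intercept) / slope)

# Synthetic four-point dilution series lying exactly on Ct = -3.3*log10(N) + 38.
slope, intercept, eff = fit_standard_curve(
    [1e8, 1e7, 1e6, 1e5], [11.6, 14.9, 18.2, 21.5])
```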
We used FLASH v1.2.11 to merge paired-end sequences . Using UPARSE v11, operational taxonomic units (OTUs) were constructed at a 97% similarity cutoff , and chimeras, erroneous sequences, and those matching chloroplasts or mitochondria were identified and discarded. To facilitate downstream analyses, we retained only the OTUs with sequence counts greater than 5 in at least three samples and total counts greater than 20 across all samples. With a confidence threshold of 0.7, RDP Classifier v2.13 was used to determine the taxonomic affiliation of each 16S rRNA gene and ITS sequence against the 16S rRNA Silva 138 ( https://www.arb-silva.de/ ) and the fungal ITS UNITE 8.0 reference databases ( https://unite.ut.ee/ ) . Statistical analysis Significance tests were conducted in SPSS 26.0 at the level of P < 0.05. Prior to the bioinformatic analyses, we rarefied the sequencing depth to the smallest sample size across all samples and then processed the data in QIIME v1.9.1 . ACE and Chao indicators of bacterial and fungal alpha-diversity (α-diversity) were obtained using Mothur v1.30.2 . Meanwhile, we used the Kruskal-Wallis H test for significance analysis of differences in microbial abundances between two groups. Shared and unique OTUs among samples were displayed using a Venn diagram. Microbial co-occurrence networks under the CBF and CF treatments were investigated and visualized using Gephi v0.9.2 . According to , “keystone species” refer to network members with high degree and betweenness centrality values. The correlations among tobacco leaf yield, economic value, soil chemical properties, and microbial communities were assessed by Spearman correlations and Mantel tests in R v4.3.0 . Meanwhile, we analyzed the relative importance of soil chemical properties, bacterial and fungal community structure, and diversity for tobacco leaf yield and economic value based on a linear regression model with the relaimpo package.
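As one self-contained piece of the α-diversity workflow, a Chao richness estimate can be computed directly from an OTU count vector. A commonly used bias-corrected form of Chao1 is shown; Mothur's implementation may differ in its handling of edge cases:

```python
def chao1(counts):
    """Bias-corrected Chao1 richness estimate from an OTU count vector."""
    counts = [c for c in counts if c > 0]
    s_obs = len(counts)                      # observed OTU richness
    f1 = sum(1 for c in counts if c == 1)    # singletons
    f2 = sum(1 for c in counts if c == 2)    # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# Toy OTU table for one sample: rare OTUs (singletons) raise the estimate
# above the observed richness, reflecting species likely missed by sampling.
estimate = chao1([12, 7, 1, 1, 1, 2, 2, 0, 4])
```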
A two-year field trial was conducted from 2021 to 2022 in Yuxi City, Yunnan Province of Southwest China (24°38′29″N, 102°52′23″E), with an altitude of 1740 m, an annual mean temperature of 16 ℃, and a precipitation of 890 mm. The study field was red soil, which is the predominant soil type for tobacco plantations in Yunnan. Tobacco cultivar “Zhusha No.2” had been continuously monocultured for three years in the study field before starting our experiment. Tobacco rotation with broad bean and oilseed rape were separately executed in the field. Each rotation regime had four fertilizer treatments with a split plot design, wood, rice straw, compound biochar-based organic fertilizers (WBF, RBF, CBF) and chemical fertilizer (CF). There were four repetitions for each treatment, and each plot had a size of 66 m 2 . BFs and CF were applied into plantation holes as base manure at rates of 1650 and 225 kg ha –1 , respectively, before tobacco seedling transplantation at a distance of 1.20 × 0.55 m in April 2021 and 2022. Tobacco seeds of cultivar “Zhusha No.2” were provided by Yuxi Zhongyan Seed Co., Ltd, and then sowed and cultivated in tobacco floating-seedling system in Yunnan Yuxi Tobacco Science Institute. About four times of topdressing were employed during the entire tobacco growth duration based on the local optimal tobacco production standard. In October, we planted broad bean and oilseed rape in the corresponding rotation field after the tobacco harvest. Broad bean and oilseed rape seeds were provided from Yuxi Sannong Plateau-Characteristic Modern Agriculture Co., Ltd. The field soil and fertilizer properties are presented in Tables and , respectively. In July 2022, tobacco morbidity was recorded in each plot by counting the number of tobacco plants with typical disease symptoms among the total numbers of tobacco plants chosen for calculation, such as black shank, root black rot, potato Y virus, tobacco tomato spotted wilt virus, and weather fleck . 
In order to reduce personal error, two individuals with disease investigation experience independently executed the disease observation in each plot, and the average disease rate was calculated. In July 2021 and 2022, five representative tobacco plants under topping stage were selected in each plot for agricultural traits analysis with a tape, including stem girth, plant height, maximum leaf length and width, and the number of productive leaves . The selected plants were dug up with roots for wet biomass determination and dried at 65 ℃ for dry biomass analysis. Dried tobacco roots, stems, and leaves were digested with HNO 3 -HClO 4 -H 2 O 2 solutions before element (Ca, P, S, Cr, V, Cu, Zn) concentration measurement by an inductively coupled plasma optical emission spectrometer (ICP-OES, iCAP 6000 series, Thermo Scientific, United States) . Correspondingly, the rhizosphere soil of the five chosen plants was collected to form a composite sample in each replicate by the root shaking method . In total, we obtained 64 rhizosphere soil samples (4 fertilization treatments × 4 replicates × 2 rotation regimes × 2 cropping years). A subset of the fresh soil samples was stored at 4 ℃ for soil NH 4 + -N, microbial biomass C and N (MBC and MBN) content measurement; a part was stored at − 80 ℃ until soil DNA extraction followed by microbial community analysis; and the remaining was air-dried, ground, and sieved for chemical property determination. Meanwhile, the tobacco leaves of another ten plants in each plot were labeled and collected at four separate times at 10-day intervals under the harvest stage from July to August based on the maturity of the leaves. All manure leaves were flue-cured in sequential batches with three steps of yellowing, fixed color, and dry gluten period according to “Rules for curing technique of flue-cured tobacco” . 
The curing conditions were 32–45 ℃, 70–98% humidity, and 24–72 h for the yellowing period; 45–55 ℃, 30–70%, and 20–40 h for the fixed-color period; and 55–75 ℃, 30%, and 16–36 h for the dry-gluten period. According to the national standard “Flue-cured tobacco” , the flue-cured tobacco was classified into 10 grades by professional graders based on leaf maturity, structure, body, oil, color intensity, length, and waste and injury. We recorded the dry weight of the flue-cured tobacco of each grade separately for the calculation of the economic parameters as follows.

1 [12pt]{minimal} $$\text{Trade yield (kg ha}^{-1}\text{)} = \frac{\text{total dry weight of flue-cured tobacco per plot}}{\text{plot area}}$$

2 [12pt]{minimal} $$\text{Crop value (CNY ha}^{-1}\text{)} = \frac{\sum \left(\text{dry weight of each grade} \times \text{purchase price of that grade}\right)}{\text{plot area}}$$

3 [12pt]{minimal} $$\text{Average price (CNY kg}^{-1}\text{)} = \frac{\text{crop value}}{\text{trade yield}}$$

4 [12pt]{minimal} $$\text{Proportion of superior tobacco (\%)} = \frac{\text{weight of superior-grade tobacco}}{\text{total weight of flue-cured tobacco}} \times 100\%$$

Six C3F-grade flue-cured tobacco leaves were randomly selected from each plot, smashed into small pieces, and passed through a 0.25-mm mesh screen. The routine chemical components (total sugar, reducing sugar, total N, nicotine, K 2 O, and chlorine) were measured by the continuous-flow method (SEAL AA3, Germany) . Soil pH (soil:water = 1:2.5, w/v) was measured with the glass-electrode method . Soil ammonium (NH 4 + ) and nitrate (NO 3 ‒ ) concentrations in KCl extracts were assessed colorimetrically with an ultraviolet spectrophotometer (UV-1890, Daojin Instrument Co., Ltd., Jiangsu, China) . After extraction with NaHCO 3 solution, soil available P (AP) was determined by the molybdenum-blue method with a UV spectrophotometer (UV-1890, Daojin Instrument Co., Ltd., China) . Soil available K (AK) was extracted with neutral NH 4 OAc and measured with an atomic absorption spectrometer (AAS, Analytik Jena novAA 300, Germany) .
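The four economic parameters defined above can be computed per plot as in this hypothetical sketch; the grade names, weights, prices, and the superior-grade set are illustrative values, not data from the study.

```python
# Illustrative calculation of the economic parameters described above.
# All grade weights/prices are made-up example numbers.

def economic_parameters(grade_weights_kg, grade_prices_cny_per_kg,
                        plot_area_ha, superior_grades):
    """Compute trade yield, crop value, average price, and the
    proportion of superior tobacco for a single plot."""
    total_weight = sum(grade_weights_kg.values())
    trade_yield = total_weight / plot_area_ha                      # kg ha^-1
    crop_value = sum(w * grade_prices_cny_per_kg[g]
                     for g, w in grade_weights_kg.items()) / plot_area_ha  # CNY ha^-1
    avg_price = crop_value / trade_yield                           # CNY kg^-1
    superior_weight = sum(grade_weights_kg[g] for g in superior_grades)
    superior_pct = 100.0 * superior_weight / total_weight          # %
    return trade_yield, crop_value, avg_price, superior_pct

# Example for one 66 m^2 (0.0066 ha) plot with three hypothetical grades:
weights = {"C3F": 12.0, "B2F": 6.0, "X2F": 4.0}   # kg of cured leaf per plot
prices = {"C3F": 30.0, "B2F": 25.0, "X2F": 18.0}  # CNY kg^-1
y, v, p, s = economic_parameters(weights, prices, 0.0066, {"C3F", "B2F"})
```

With these toy numbers, the 22 kg of cured leaf per 66 m² plot scales to roughly the per-hectare yields reported in the results.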
Soil OM and total organic carbon (TOC) were determined by the K 2 Cr 2 O 7 –H 2 SO 4 colorimetric method, and total carbon and nitrogen (TC and TN) were assayed by the combustion method using an elemental analyzer (Elementar Vario EL Cube, Germany) . Soil MBC and MBN contents were measured by the chloroform fumigation–extraction procedure with 0.5 M K 2 SO 4 solution . The absolute abundances of two pathogens causing tobacco black shank and root rot, Phytophthora nicotianae and Fusarium oxysporum , were determined in rhizosphere soil using a SYBR Green qPCR assay with the primers Pn3 (forward, 5’-GACAAACCAGTCGCCAATTT-3’; reverse, 5’-TGAACGCATATTGCACTTCC-3’) and JBR (forward, 5’-CATACCACTTGTTGTCTCGGC-3’; reverse, 5’-GAACGCGAATTAACGCGAGTC-3’), respectively . Standard curves were constructed from 10-fold serial dilutions of a plasmid carrying a fragment of the target gene from each pathogen. Dilutions from 10 8 to 10 5 gene copies µL ‒1 served as the standard for Phytophthora nicotianae (amplification efficiency 90.9%), and dilutions from 10 7 to 10 2 gene copies µL ‒1 for Fusarium oxysporum (amplification efficiency 90.4%). Briefly, the qPCR assay was performed on a T100 Thermal Cycler (Bio-Rad, United States) in 20-µL reaction mixtures containing 1 µL of template, 10 µL of Taq Plus Master Mix (2×), 0.8 µL of each primer, and 7.4 µL of distilled water. The thermal cycling program consisted of an initial denaturation at 95 °C for 5 min, followed by 35 cycles of denaturation at 95 °C for 30 s, annealing at 58 °C for 30 s, and extension at 72 °C for 60 s. Each sample was run in triplicate. A melting curve was generated at the end of the run to verify the specificity of the amplified fragments. Finally, the absolute abundances of the two pathogens were calculated from their Ct values and the standard curves and expressed as copies g ‒1 soil.
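The conversion from Ct values to copy numbers via a standard curve can be sketched as below. The slope, intercept, elution volume, and soil mass used here are assumed example values, not the study's actual calibration; the efficiency formula E = 10^(−1/slope) − 1 is the standard relation for a log-linear qPCR standard curve.

```python
# Sketch of absolute qPCR quantification from a standard curve
# Ct = slope * log10(copies) + intercept. All numbers below are
# illustrative assumptions, not the study's calibration data.

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve to get copies per reaction."""
    return 10 ** ((ct - intercept) / slope)

def amplification_efficiency(slope):
    """E = 10^(-1/slope) - 1; an ideal assay gives E close to 1 (100%)."""
    return 10 ** (-1.0 / slope) - 1.0

def copies_per_g_soil(ct, slope, intercept, template_ul, elution_ul, soil_g):
    """Scale copies per reaction up to copies per gram of extracted soil."""
    copies_per_ul = copies_from_ct(ct, slope, intercept) / template_ul
    return copies_per_ul * elution_ul / soil_g
```

A slope of about −3.32 corresponds to 100% efficiency; the reported efficiencies of ~90% would correspond to a slightly steeper slope.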
Based on the tobacco growth and yield results above, rhizosphere bacterial and fungal community diversity was analyzed in the CBF and CF treatments to elucidate the potential microbial mechanisms. Soil genomic DNA was extracted using the FastDNA ® Spin Kit for Soil (MP Biomedicals, Solon, OH, USA), and DNA concentration and purity were evaluated with a micro-spectrophotometer (Nano-300, Allsheng, Hangzhou, China) and 1.0% agarose gel electrophoresis. The V3–V4 region of the bacterial 16S rRNA gene was amplified with the primers 338F (5’-ACTCCTACGGGAGGCAGCAG-3’) and 806R (5’-GGACTACHVGGGTWTCTAAT-3’), and the fungal ITS1 region was amplified with ITS1F (5’-CTTGGTCATTTAGAGGAAGTAA-3’) and ITS2R (5’-GCTGCGTTCTTCATCGATGC-3’) . The purified PCR products were pooled in equimolar amounts and sequenced on the Illumina MiSeq PE300 platform (Illumina, San Diego, USA) by Majorbio Bio-Pharm Technology Co. Ltd. (Shanghai, China). Paired-end reads were merged with FLASH v1.2.11 . In total, we obtained 733,616 high-quality bacterial and 738,898 high-quality fungal sequences, averaging 45,851 (range 37,145–53,237) and 46,181 (range 36,245–64,209) per sample, respectively. Operational taxonomic units (OTUs) were clustered at a 97% similarity cutoff with UPARSE v11 , and chimeras, erroneous sequences, and sequences matching chloroplasts or mitochondria were identified and discarded. To facilitate downstream analyses, we retained only OTUs with counts greater than 5 in at least three samples and total counts greater than 20 across all samples. The taxonomy of each 16S rRNA gene and ITS sequence was assigned with RDP Classifier v2.13 at a confidence threshold of 0.7 against the Silva 138 16S rRNA database ( https://www.arb-silva.de/ ) and the UNITE 8.0 fungal ITS database ( https://unite.ut.ee/ ), respectively . Significance tests were conducted in SPSS 26.0 at P < 0.05.
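The OTU prevalence/abundance filter described above (counts > 5 in at least three samples and total counts > 20) can be sketched with pandas; the table layout (OTUs as rows, samples as columns) is an assumption for illustration.

```python
import pandas as pd

# Sketch of the OTU filter described above: keep OTUs whose counts
# exceed 5 in at least three samples AND whose total count exceeds 20.

def filter_otus(otu_table: pd.DataFrame,
                min_count=5, min_samples=3, min_total=20) -> pd.DataFrame:
    prevalent = (otu_table > min_count).sum(axis=1) >= min_samples
    abundant = otu_table.sum(axis=1) > min_total
    return otu_table.loc[prevalent & abundant]

# Toy example: OTU_1 passes both criteria, OTU_2 fails prevalence
# (only one sample > 5), OTU_3 fails the total-count threshold (19).
table = pd.DataFrame(
    {"S1": [10, 30, 6], "S2": [8, 0, 6], "S3": [7, 0, 6], "S4": [0, 0, 1]},
    index=["OTU_1", "OTU_2", "OTU_3"],
)
kept = filter_otus(table)
```

Requiring both prevalence and total abundance removes sporadic low-count OTUs without discarding taxa that are consistently present at modest counts.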
Prior to bioinformatic analyses, we rarefied all samples to the smallest library size and processed the data in QIIME v1.9.1 . ACE and Chao indices were calculated in Mothur v1.30.2 to characterize bacterial and fungal alpha-diversity (α-diversity) . The Kruskal–Wallis H test was used to assess significant differences in microbial abundances between groups. Shared and unique OTUs among samples were displayed with a Venn diagram. Microbial co-occurrence networks under the CBF and CF treatments were constructed and visualized in Gephi v0.9.2 . Following , “keystone species” were defined as network members with high degree and betweenness centrality values. Correlations among tobacco leaf yield, economic value, soil chemical properties, and microbial communities were assessed by Spearman correlations and Mantel tests in R v4.3.0 . In addition, the relative importance of soil chemical properties, bacterial and fungal community structure, and diversity for tobacco leaf yield and economic value was analyzed with a linear regression model using the relaimpo package.

Tobacco plant responses to BFs and CF application under field condition

Figure a and b depict the tobacco disease rate and the abundances of the two typical pathogens, Phytophthora nicotianae and Fusarium oxysporum , in 2022. The incidence of tobacco diseases decreased markedly from 16.7–17.8% under the CF treatment to 2.7–15.0% under the BFs treatments, with the lowest value (2.7%) under CBF combined with tobacco–broad bean rotation. However, disease incidence did not differ significantly ( P ≥ 0.05) between the WBF and CF treatments. Meanwhile, pathogen abundances were lower in CBF-treated rhizosphere soil than in CF-treated soil under both rotation regimes (Fig. b).
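The keystone-species criterion from the network analysis above (high degree combined with high betweenness centrality) can be sketched with networkx; the top-10% threshold used here is an illustrative choice, not the study's cutoff.

```python
import networkx as nx

# Sketch of keystone screening: rank network members by degree and
# betweenness centrality and flag taxa that score highly on both.
# The top-fraction threshold is an assumed illustration.

def keystone_taxa(graph: nx.Graph, top_fraction=0.1):
    degree = dict(graph.degree())
    betweenness = nx.betweenness_centrality(graph)
    n_top = max(1, int(len(graph) * top_fraction))
    top_deg = set(sorted(degree, key=degree.get, reverse=True)[:n_top])
    top_btw = set(sorted(betweenness, key=betweenness.get, reverse=True)[:n_top])
    return top_deg & top_btw  # taxa high in BOTH rankings

# Toy co-occurrence graph: one hub taxon connected to every other node.
g = nx.Graph()
g.add_edges_from([("hub", t) for t in ["t1", "t2", "t3", "t4"]])
g.add_edge("t1", "t2")
```

Intersecting the two rankings, rather than using either alone, favors taxa that are both highly connected and structurally central, matching the definition quoted in the methods.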
Compared with CF, BFs addition improved tobacco agronomic traits and wet and dry biomass under both rotation regimes in both study years, especially CBF (Fig. ). Moreover, all tobacco economic parameters were higher under the BFs treatments than under CF (Fig. c). Tobacco grown in CBF-amended soil had the greatest trade yield (average 3105 kg ha –1 in 2021 and 2042 kg ha –1 in 2022) and crop value (average 120,000 CNY ha –1 in 2021 and 83,000 CNY ha –1 in 2022) of all treatments, representing increases of 20.5% and 34.4% in average trade yield and crop value, respectively, over CF. Meanwhile, tobacco trade yield and value declined significantly in 2022 compared with 2021, indicating that the tobacco CCO phenomenon intensified as the CC years lengthened. The chemical components of the flue-cured tobacco leaves are listed in Table . Overall, CBF application increased the total sugar, reducing sugar, and K 2 O contents and the total sugar/nicotine, K 2 O/Cl, and reducing sugar/total sugar ratios in flue-cured tobacco. Compared with CF, all three BFs reduced the Cl content of flue-cured leaves, confirming that BFs enhance chemical quality, with CBF having the most favorable effect of all treatments. A three-way ANOVA was used to analyze the effects of fertilization, rotation regime, CC year, and their interactions on tobacco leaf yield and chemical quality. Tobacco leaf yield and chemical quality were strongly affected by fertilizer type and CC year ( F = 3.3–149.4, P < 0.05), except for the total sugar/nicotine ratio (Table ).

Variation of soil chemical properties and element contents in tobacco

BFs addition increased rhizosphere soil pH, with WBF showing the most pronounced effect among the BFs types (Fig. ).
Soil OM, AP, AK, MBC, MBN, TC, and TN contents improved under the BFs treatments, particularly with CBF addition (Fig. ); however, this was not the case for soil NH 4 + -N content (Fig. ). With respect to element contents in tobacco organs, BFs amendment resulted in higher Ca and lower Cd, V, Cu, and Zn contents in all tobacco organs, as well as higher S and P contents in stems and roots and lower S and P contents in leaves, under both rotation regimes (Fig. ).

Soil microbial community shift under different fertilizer treatments

Soil microbial community composition

The Venn diagram showed at least 3800 bacterial and 1140 fungal OTUs in each sample (Fig. a). Specifically, 38.4% of soil bacterial OTUs (2265) and 16.2% of fungal OTUs (420) were shared across all treatments, whereas only 5.8–7.6% (344–447) of bacterial and 9.0–17.9% (233–463) of fungal OTUs were unique to each treatment. CBF application tended to increase soil bacterial OTU numbers in both CC years, and the different fertilization patterns shaped a more distinct distribution of fungal OTUs than of bacterial OTUs. The five most abundant rhizobacterial phyla were Actinobacteriota (39.1% of total reads), Proteobacteria (27.9%), Chloroflexi (9.8%), Acidobacteriota (7.9%), and Bacteroidota (3.6%), and the five most abundant genera were Sphingomonas (5.8%), Intrasporangium (5.7%), Arthrobacter (5.7%), Nocardioides (3.3%), and norank_f__norank_o__Vicinamibacterales (2.4%), which collectively accounted for 88.4% and 22.9% of the bacterial communities, respectively (Fig. b, a).
Moreover, the rhizosphere fungal communities were dominated by the phyla Ascomycota (80.9%), Mortierellomycota (9.6%), Basidiomycota (3.7%), unclassified_k__Fungi (3.0%), and Chytridiomycota (1.8%), and at the genus level by Fusarium (14.5%), Mortierella (10.6%), Plectosphaerella (9.3%), Gibberella (8.4%), and Talaromyces (4.4%), together accounting for 99.0% and 47.2% of the fungal communities, respectively (Fig. b, b). Notably, Ascomycota made up the overwhelming majority of soil fungi (> 80%).

Soil microbial diversity and difference analysis

Higher bacterial and fungal richness and diversity were observed in the rhizosphere soil under the CBF treatment than under CF (Fig. c). Bacterial ACE and Chao indices differed significantly between the CBF and CF groups (Wilcoxon test, P FDR < 0.01), whereas none of the fungal α-diversity indices differed significantly (Fig. c). Interestingly, rhizosphere microbial community diversity decreased as the tobacco CC years lengthened, as shown by the reduced OTU numbers and ACE and Chao indices from 2021 to 2022. Rhizobacterial and fungal community structure differed significantly between the CBF and CF treatments in both CC years (PERMANOVA and ANOSIM tests, P < 0.01), and the separation was more pronounced between CC years than between fertilizer treatments, mainly along the first axis (Fig. d). As with α-diversity, differences in β-diversity were more evident for bacteria than for fungi. To further explore the taxa that differed between the CBF and CF treatments, we examined heatmaps and fold changes of the relative abundances of the top 30 bacterial and fungal genera with significance testing. CBF-amended soil was enriched in the bacteria Arthrobacter and Pseudomonas in 2021 and Gemmatimonas in 2022, and depleted in Mycobacterium in both CC years (Fig. a).
Among soil fungi, CBF addition increased Trichoderma in 2021, increased Mortierella , Penicillium , Chaetomium , and Gibellulopsis , and decreased Rhizophlyctis , Debaryomyces , Talaromyces , and Alternaria in both CC years (Fig. b).

Soil microbial co-occurrence networks

As expected, the soil bacterial and fungal co-occurrence networks under the CBF treatment had more nodes, a higher percentage of positive links, a higher average clustering coefficient, and a smaller network diameter and average path length than those under CF (Fig. ). More edges in the bacterial network and higher modularity in the fungal network were also observed in CBF-amended soil (Fig. ). These results suggest that CBF application produced a more stable, complex, and closely linked microbial co-occurrence system in the tobacco rhizosphere. Furthermore, distinct keystone taxa emerged in the networks under the CBF and CF treatments. The most important keystone species under CBF were the bacteria s_Stenotrophomonas_sp._MYb57 , Bradyrhizobium_elkanii_g__Bradyrhizobium , Rhodococcus_erythropolis , Novosphingobium_resinovorum , and Bacillus_megaterium_NBRC_15308__ATCC_14581 and the fungi s_Penicillium_alogum , Microdochium_colombiense , Chaetomium_grande , Sampaiozyma_sp , and Oidiodendron_truncatum ; those under CF were the bacteria s_Lentzea_aerocolonigenes , Sphingobacterium_multivorum , Rhodococcus_wratislaviensis , uncultured_bacterium_5G12 , and Flexivirga_sp._g__Flexivirga and the fungi s_Olpidium_brassicae , Staphylotrichum_sp , Papulaspora_sepedonioides , and Mortierellomycotina (Fig. ).

Correlations among tobacco leaf yield, value, soil chemical properties and microbial community

Lastly, correlation and explanatory-power analyses were applied to disentangle the relationships among tobacco leaf yield, value, and soil chemical and microbial properties.
Soil pH, OM, and TN contents were identified as the most important chemical factors shaping soil bacterial and fungal community composition (Mantel test, P < 0.05, Fig. b). Furthermore, tobacco leaf yield and value were positively associated with soil TC and OM contents, bacterial and fungal ACE and Chao indices, and the relative abundances of Arthrobacter , Streptomyces , and Fusarium ( P < 0.05), and negatively associated with the relative abundances of Sphingomonas , Lysobacter , norank_f__Gemmatimonadaceae , and Cladosporium ( P < 0.05, Fig. a). Soil chemical properties, bacterial community diversity, and fungal community diversity explained 25.6%, 21.4%, and 32.5% of the total variance in tobacco yield, and 24.4%, 29.1%, and 23.0% of that in tobacco value, respectively. Soil TC, bacterial PCoA1, and the bacterial and fungal Chao indices were the most important explanatory factors for all analyzed indicators (Fig. c).
The primary objectives of this study were to determine how BFs application alleviates tobacco CCO and to disentangle the key factors potentially associated with this effect. Our results revealed a positive effect of BFs addition on the alleviation of tobacco CCO under both rotation regimes in both study years, reflected in reduced tobacco morbidity and improved agronomic traits, wet and dry biomass, economic parameters, and intrinsic chemical quality under CC conditions. CBF was the most effective of the three BFs, which might be ascribed to synergistic cooperation between the wood and rice straw biochars in enhancing tobacco growth. Nonetheless, the tobacco CCO phenomenon gradually worsened as the CC years lengthened. We therefore suggest combining BFs amendment with other practical agricultural measures for sustainable tobacco production. A considerable number of previous studies have demonstrated the ability of biochar and vermicompost, as soil conditioners, to regulate and restore soil properties and to improve crop yield and quality . Co-application of biochar and vermicompost significantly alleviated cucumber ( Cucumis sativus L.) CCO , which is consistent with the results of our study. reported that the growth of Radix pseudostellariae was improved by biochar addition, which stimulated changes in the abundance and metabolic processes of rhizosphere bacteria and fungi and inhibited the activities of pathogenic fungi. As recently shown by , biochar addition to soil (15 t ha –1 ) significantly increased the survival rate of Panax notoginseng under ten-year CC conditions by changing soil physicochemical properties and microbial diversity.
Vermicompost's function in soil-borne disease control and in enhancing crop yield and quality is largely linked to its porous structure, rich nutrients, good water-holding capacity, and abundant beneficial antagonistic microbes . Based on these previous reports, we hypothesized that the mechanisms of BFs-associated alleviation of tobacco CCO were related to one or more of the following factors: inhibition of pathogen loads, amelioration of soil chemical characteristics, or shifts in rhizosphere microbial communities. To address this hypothesis, we focused on analyzing the changes in these factors under different fertilizer treatments and identifying close correlations between the factors and tobacco indicators. Soil acidification has been identified as an important contributor to tobacco CCO, as it can result in phenolic acid accumulation and alteration of bacterial community structures and diversity in soil . Specifically, acidic soil conditions (pH 4.5–5.5) foster the growth of the tobacco pathogen Ralstonia solanacearum and suppress the activities of antagonistic bacteria, e.g., Pseudomonas fluorescens and Bacillus cereus . Biochar addition can increase soil pH owing to its alkaline nature, thus helping to ameliorate the soil acidification caused by long-term CF application and the CC pattern , in accordance with our present study. Simultaneously, soil OM, AP, AK, MBC, MBN, TC, and TN contents increased under the BFs treatments in our study. Biochar and vermicompost additions directly supplement soil with organic C and OM, which are essential for nutrient retention, the formation of soil macroaggregates, and the improvement of soil fertility, quality, and ecosystem balance . Notably, soil C pool loss has raised considerable concern in the tobacco industry because it coincides with reduced tobacco chemical quality and desirable smoking characteristics, such as a strong caramel-sweet aroma .
Biochar application promotes C restoration and hormonal actions in soil, thereby changing secondary metabolism in tobacco plants and ultimately stimulating the production of aroma and taste substances in tobacco leaves . Biochar and vermicompost can improve soil nutrient levels through the direct release of the rich nutrients in these two materials and through indirect changes in soil enzymatic activities relating to soil C, N, P, and K cycling , which provide tobacco plants with more available nutrients. These results suggested that amelioration of soil chemical properties plays an important role in alleviating tobacco CCO under BFs addition, particularly for soil TC and OM contents, as supported by the Spearman correlation and explanatory power analyses. The availability of heavy metals in soil and their accumulation in tobacco can hinder the normal growth of tobacco and pose a severe underlying risk to the health of humans and animals . In addition to soil basic chemical properties, BFs application may reduce the accumulation of heavy metals in tobacco, which is a typical Cd accumulator . In this study, a decreasing trend was observed in Cd, V, Cu, and Zn contents in tobacco plants with BFs application, consistent with previous studies that reported lower Cr, Cu, Cd, Ni, Pb, and Zn contents in soil and lower accumulation in tobacco under biochar application alone or combined with vermicompost . This phenomenon may be attributed to the immobilization processes of ion exchange, surface precipitation, and electrostatic attraction in biochar, and to the adsorption processes of OM and humus in vermicompost, which synergistically decreased heavy metal availability . In addition, biochar can increase soil pH, thereby indirectly reducing the availability of heavy metals in soil .
We speculate that BFs addition to soil weakens the negative influences of heavy metals on tobacco plant growth, and this can at least partially explain the positive effects of BFs application on tobacco CCO alleviation. Although the heavy metal contents in tobacco plants decreased with BFs application over 2 years, this phenomenon needs to be studied further over a longer experimental period, as the heavy metals in biochar may be released during aging processes in soil. The soil microbiome plays crucial roles in soil multifunctionality and is linked to the stimulation of plant defense mechanisms . A majority of studies have broadly reported the excellent abilities of organic amendments to alleviate environmental stresses and shift soil microbial communities . In our study, rhizobacterial OTU numbers and α-diversity increased with CBF amendment under both tillage regimes, demonstrating that CBF can improve soil bacterial community diversity and richness, similar to earlier observations in organic manure-treated or biochar-treated soil . Microbial diversity is considered an important indicator for evaluating soil health, as it can control the invasion of crop pathogens by stimulating the functionality of terrestrial ecosystems . Thus, we hypothesize that a rise in rhizobacterial α-diversity facilitates the alleviation of tobacco CCO with CBF addition. However, rhizosphere fungal α-diversity did not show any apparent difference between the CBF and CF treatments, indicating more pronounced effects of CBF addition on bacterial α-diversity than on fungal α-diversity. Furthermore, CBF amendment of soil also transformed bacterial and fungal community structures compared with CF, and this was associated with enhancements in tobacco yield and economic value according to the Spearman correlation and explanatory power analyses.
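The Chao index referenced above is commonly the Chao1 richness estimator. A minimal sketch of its bias-corrected form (a standard formula, applied here to a toy OTU count vector, not the study's data):

```python
def chao1(counts):
    """Bias-corrected Chao1 richness estimate from a vector of OTU
    counts: S_obs + F1*(F1 - 1) / (2 * (F2 + 1)), where F1 and F2
    are the numbers of singleton and doubleton OTUs."""
    observed = [c for c in counts if c > 0]
    s_obs = len(observed)
    f1 = sum(1 for c in observed if c == 1)  # singletons
    f2 = sum(1 for c in observed if c == 2)  # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))
```

The estimator extrapolates beyond the observed richness only when singletons are present, since rare OTUs are the evidence for unseen ones.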
Agricultural practices and environmental variability strongly influence the complexity and stability of the soil microbial community, which has been extensively studied and verified using network analysis . One study demonstrated that a higher abundance of Ralstonia solanacearum in the tomato rhizosphere resulted in decreased microbial connections in co-occurrence networks. Here, we observed more complex and stable rhizosphere microbial co-occurrence networks with CBF amendment, revealing that CBF addition to soil improved positive collaborations and interactions among rhizosphere microbes, consistent with previous results . In addition, the keystone roles of some beneficial taxa were stimulated in the microbial networks, such as the bacteria Stenotrophomonas_sp._MYb57 , Bradyrhizobium_elkanii_g__Bradyrhizobium , Novosphingobium_resinovorum , Bacillus_megaterium_NBRC_15308__ATCC_14581 , and the fungi Penicillium_alogum and Chaetomium_grande . It is likely that the positive roles of these beneficial microorganisms are strengthened in managing and adjusting soil microbial networks. Our results showed that CBF amendment of soil enriched some potentially pathogen-suppressive and plant growth-promoting microbes (PGPM), such as the bacterial genera Arthrobacter , Pseudomonas , Gemmatimonas , and the fungal genera Trichoderma , Mortierella , Penicillium , Chaetomium and Gibellulopsis . Stimulation of indigenous soil Pseudomonas populations can enhance the suppression of Fusarium wilt disease in banana . One study reported that Pseudomonadaceae members played a predominant role in plant disease-suppressive microbial consortia by producing a putative chlorinated lipopeptide encoded by NRPS genes. Simultaneously, some other taxa, Arthrobacter , Trichoderma , Mortierella , Penicillium , Chaetomium and Gibellulopsis , have been recognized as disease suppression-associated microbes and developed into biocontrol agents with broad field applications .
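Co-occurrence networks like those discussed above are typically built by thresholding pairwise correlations between taxa, with keystone candidates read off node connectivity. A minimal numpy sketch on toy abundance data (the correlation measure and threshold here are illustrative assumptions, not the study's exact pipeline):

```python
import numpy as np

def cooccurrence_degrees(abundance, r_min=0.8):
    """Build a toy co-occurrence network by thresholding pairwise
    correlations between taxa (rows = taxa, columns = samples) and
    return each taxon's degree; high-degree nodes are keystone
    candidates."""
    r = np.corrcoef(abundance)
    # Edge where |r| passes the threshold; drop self-loops.
    adjacency = (np.abs(r) >= r_min) & ~np.eye(r.shape[0], dtype=bool)
    return adjacency.sum(axis=1)
```

Real pipelines usually add significance filtering and use rank-based (Spearman) correlations, but the thresholded-adjacency idea is the same.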
Conversely, there was a decline in the relative abundances of some potential plant pathogens and their synergistic microorganisms under the CBF treatment, such as Mycobacterium , Rhizophlyctis , Debaryomyces and Alternaria . Alternaria solani , belonging to the Alternaria genus, is a notorious fungal pathogen causing pear black spot disease and blight disease in tomato ( Solanum lycopersicum ). In addition, based on the correlation analysis, tobacco leaf yield and economic value were negatively correlated with the relative abundances of Sphingomonas , Lysobacter , norank_f__Gemmatimonadaceae and Cladosporium . One study reported that the relative abundance of Sphingomonas increased with the number of years of continuous cotton cultivation, in accordance with our results. Cladosporium has also been identified as a type of plant pathogen that can induce leaf lesions on Vicia faba seedlings . Therefore, we consider that a healthier soil micro-ecological environment was formed in response to CBF application, which contributed to the control of tobacco diseases and the alleviation of tobacco CCO. Moreover, the shifts in soil microbial community composition and structure were primarily steered by soil pH, OM, and TN contents. Lastly, our results indicated that the action of BFs in alleviating tobacco CCO can largely be ascribed to improvements in soil chemical properties and in microbial community structures and diversity, with soil TC, bacterial PCoA1, and the bacterial and fungal Chao indexes showing the greatest explanatory power. Here we provide an effective and environmentally friendly scheme for the alleviation of tobacco CCO, and the results of the current study also suggest directions for future work. For instance, screening and isolating key beneficial microbes is suggested to further elucidate and validate their positive roles in tobacco growth in future studies.
Producing bio-organic fertilizers with isolated microbes is a promising strategy for the targeted manipulation of rhizosphere microbial assemblages. Additionally, the optimal application rate and frequency of BFs, as well as combinations with other useful agricultural measures, need to be explored under long-term field conditions. The present study confirmed that BFs addition to soil mitigated tobacco CCO and contributed to sustainable tobacco farming, and this positive function was triggered by changes in soil chemical properties, heavy metal (Cd, V, Cu, Zn) contents in tobacco plants, and rhizosphere microbial community composition and structure. Specifically, BFs amendment of tobacco CC soil improved microbial diversity, structures, and co-occurrence networks, accompanied by an increase in some beneficial microbes and a decrease in tobacco pathogens ( Phytophthora nicotianae and Fusarium oxysporum ) in the rhizosphere. This study highlights a joint action mechanism of BFs application in the alleviation of tobacco CCO, and provides a useful guide for the practical application of BFs in fields and an important theoretical basis for the future elucidation of further mechanisms of crop CCO mitigation. BFs addition may be a promising approach for sustainable agricultural development.
PLANT-Dx: A Molecular Diagnostic for Point-of-Use Detection of Plant Pathogens
To develop PLANT-Dx, we first sought to create pathogen-detecting molecular sensors based upon the Small Transcription Activating RNA (STAR) regulatory system. This transcription activation system is based upon conditional formation of a terminator hairpin located within a target RNA upstream of a gene to be regulated: alone, the terminator hairpin forms and interrupts transcription of the downstream gene, while in the presence of a specific trans -acting STAR the hairpin cannot form and transcription proceeds ( Figure S2 ). Previous work showed that the STAR linear binding region can be changed to produce highly functional and orthogonal variants. Here we sought to exploit this by replacing the linear binding region with sequences derived from genomic pathogen RNA to create new viral sensors. To do this, we utilized the secondary structure prediction algorithm NUPACK to identify regions within the genomes of CMV and PVY that are computationally predicted to be unstructured, for use in target RNA design ( Note S1 ). Once viral STARs were designed, reporter DNA constructs were created in which these target sequences were placed downstream of a constitutive E. coli promoter and upstream of the CDO reporter gene coding sequence. We next designed RPA primer sets to amplify and transform a pathogen's genomic material into a DNA construct capable of synthesizing a functional STAR. Specifically, a T7 promoter and antiterminator STAR sequence were added to the 5′ end of a reverse RPA primer, which, when combined with a forward primer, amplified an approximately 80 nucleotide (nt) viral sequence to produce a double-stranded DNA encoding the designed STAR containing the target viral sequence. In this way, we anticipated that combining the CDO-encoding reporter construct and RPA-amplified DNA in a cell-free gene expression reaction would lead to the production of a detectable colorimetric output signal.
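The tailed reverse-primer design described above can be sketched as simple sequence assembly. The T7 promoter below is the standard 18-nt sequence; the STAR and viral sequences are placeholders, not the study's actual oligos:

```python
# Sketch of assembling the tailed reverse RPA primer described in
# the text: 5'-[T7 promoter][STAR antiterminator][priming seq]-3',
# where the priming sequence is the reverse complement of the 3'
# end of the ~80 nt viral target region.
T7_PROMOTER = "TAATACGACTCACTATAG"  # standard T7 promoter

def revcomp(seq):
    """Reverse complement of a DNA sequence (A/C/G/T only)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq))

def tailed_reverse_primer(star_seq, viral_target, prime_len=30):
    """Reverse primer = T7 promoter + STAR sequence + reverse
    complement of the last `prime_len` nt of the viral target."""
    return T7_PROMOTER + star_seq + revcomp(viral_target[-prime_len:])
```

After RPA, the amplicon carries the T7 promoter and STAR-encoding tail upstream of the viral sequence, so in vitro transcription produces the designed STAR.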
We began by investigating the ability of PLANT-Dx to detect the presence of in vitro transcribed (IVT) RNA designed to mimic specific target regions of CMV. We observed rapid color accumulation in samples containing 1 nM of purified transcription product versus the no-RNA negative control ( b). To test for modularity, we further developed sensors and primer sets for the detection of PVY, and confirmed function with the same assay ( c). The specificity of our system was also tested by interrogating the crosstalk between the product of various RPA reactions and noncognate molecular sensors. Specifically, we tested color production from cell-free reactions containing the reporter DNA construct for CMV with the PVY IVT-derived RPA product, as well as the converse, and found color production only between cognate pairs of input RPA and reporter constructs ( d). We next interrogated the inherent limit of detection of our system through titration of input IVT products ( e) and found it to be between 44 pM and 4.4 pM of input IVT RNA material. This demonstrated our ability to detect the presence of target nucleic acid sequences down to the picomolar range. Surprisingly, this sensitivity is lower than that previously reported for RPA and is most likely due to loss in amplification efficiency from the addition of the long overhangs present within our primer sets. We next set out to determine whether this methodology was able to differentiate between plant lysate obtained from healthy plants versus lysate from plants infected with CMV virus. To test this, we added 1 μL of CMV-infected plant lysate, or an equivalent volume of a noninfected plant lysate control, to the PLANT-Dx reaction system. Here, we observed rapid color change only from reactions with infected lysate when compared to healthy lysate ( a). Interestingly, the leak in the system was reduced when challenged with plant extract in comparison to previous results using IVT product.
This is most likely due to a slightly inhibitory effect plant lysate may have on the efficiency of the RPA reaction and presents a positive benefit of reducing off-target signal production. Despite the great benefits derived from colorimetric enzymes, their usage dictates that any leak in the system will eventually result in the complete conversion of substrate into a visible signal. Therefore, it is important to determine a time point cutoff at which to analyze data for the presence or absence of the color signal, to minimize false positives associated with expression leak. In this work using PLANT-Dx for detection of CMV, we suggest utilizing 150 min ( a). To demonstrate that this assay can be monitored by eye, reactions were carried out and filmed within a 31 °C incubator ( b). With the naked eye, we detected accumulation of a yellow color only within reactions that were incubated with infected lysate, while no such production was observed in reactions with uninfected lysate. A notable drawback of current gold standard diagnostics is the need for peripheral equipment for either amplification or visualization of outputs. Even simple heating elements for controlled incubations are a major hindrance during deployment within the field and can be cost-prohibitive. We sought to exploit the flexible temperature requirements of both RPA and cell-free gene expression reactions by attempting to run our diagnostic reactions for CMV-infected lysate using only body heat. This resulted in clear yellow color only in the presence of infected lysate, with no major difference observed between these reactions and those previously incubated within a thermocycler and observed with a plate reader ( c). Here we have demonstrated a novel scheme for combining isothermal amplification and custom synthetic biology viral sensors for the detection of the important plant pathogen CMV from infected plant lysate.
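To put the picomolar detection limit reported above in absolute terms, molar concentration converts to copy number via Avogadro's number. A quick sketch (the 10 µL reaction volume is an illustrative assumption, not stated in the text):

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules_per_reaction(conc_molar, volume_liters):
    """Template copies present at a given molar concentration in a
    given reaction volume."""
    return conc_molar * volume_liters * AVOGADRO

# 4.4 pM of target RNA in a hypothetical 10 uL reaction volume:
copies = molecules_per_reaction(4.4e-12, 10e-6)
```

At 4.4 pM in 10 µL this works out to roughly tens of millions of template molecules per reaction, which is why the authors note the sensitivity falls short of what RPA alone can achieve.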
Building off of previously elucidated STAR design rules, we have shown that our molecular sensors can be efficiently designed, built, and implemented for use in this important plant diagnostic context. The use of STARs in PLANT-Dx complements previous uses of toehold translational switches for similar purposes in human viral diagnostics, and could lead to more powerful combinations of the two technologies in the future. In addition, we have shown that these reactions can be readily run without the need for extraneous heating or visualization equipment. In particular, the rapid mechanical disruption of infected leaf tissue into a reaction-ready lysate buffer eliminates the need for any nucleic acid isolation. While in these experiments the lysate was snap-frozen, it could equally be used for immediate analysis in the field. The ability of our methodology to selectively detect genomic sequences from CMV and PVY highlights the capacity of the growing methodologies and design principles within RNA synthetic biology to contribute to real-world applications. Further modifications to sample preparation will undoubtedly be needed to simplify the user interface still further while improving the sensitivity of detection of lower-replicating and genetically more diverse virus species. We hope that these developments can be incorporated within other synthetic biology-based diagnostic platforms to enable PoUDs to be developed and delivered to regions of the world that need them most.
Burden and Temporal Trends of Valvular Heart Disease‐Related Heart Failure From 1990 to 2019 and Projection Up to 2030 in
This study, for the first time, unveils the significant burden of valvular heart disease (VHD)-related heart failure (HF) in Group of 20 countries, demonstrating heterogeneity across VHD subtypes as well as by age, sex, and sociodemographic index. This heterogeneity reflects differences in lifestyle, economic development, population aging, and health care resources among Group of 20 nations. The burden of VHD-related HF is a growing public health concern within the Group of 20 nations and globally. In developing countries, the longstanding burden of rheumatic VHD-related HF persists, alongside a noticeable increase in the burden of nonrheumatic VHD-related HF. In developed countries, there is not only a rising burden of nonrheumatic VHD-related HF but also a notable resurgence of the rheumatic VHD-related HF burden. With the escalating burden of VHD-related HF in Group of 20 countries, it is crucial for these nations to promptly align their public health policies with the current status of the VHD-related HF burden and available resources and to implement timely interventions to tackle this challenge. Comprehensive health policies prioritizing early screening, early prevention, and early treatment may be an effective strategy to address the significant burden of VHD-related HF. The data that support the findings of this study are available from the corresponding author upon reasonable request.

Data Sources and Case Definition

The disease causes were coded according to the International Classification of Diseases, Tenth Revision ( ICD-10 ) system, and mapped to the VHD. Specifically, ICD-10 codes corresponding to RVHD were I01 to I01.9, I02.0, and I05 to I09.9, whereas NRVHD codes were I34 to I37.9. VHD-related HF refers to HF that arises as a direct consequence of valvular abnormalities, including RVHD or NRVHD.
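The ICD-10 mapping above can be sketched as a small classifier. This is a simplified illustration that assumes well-formed codes such as 'I05.1'; it is not the GBD mapping tool:

```python
def classify_vhd(icd10):
    """Map an ICD-10 code to the VHD subtype used in the text:
    RVHD for I01-I01.9, I02.0, and I05-I09.9; NRVHD for I34-I37.9.
    Returns None for codes outside those ranges."""
    if not icd10.startswith("I"):
        return None
    num = float(icd10[1:])  # e.g. 'I05.1' -> 5.1
    if 1.0 <= num <= 1.9 or num == 2.0 or 5.0 <= num <= 9.9:
        return "RVHD"
    if 34.0 <= num <= 37.9:
        return "NRVHD"
    return None
```

A production mapping would work on the code strings themselves rather than float conversion, but the range logic mirrors the case definition stated above.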
According to HF severity, VHD-related HF was categorized into 4 groups (treated, mild, moderate, and severe HF) based on the American College of Cardiology/American Heart Association guidelines. The VHD-related HF burden was defined based on the number of cases, prevalence rates or age-standardized prevalence rates (ASPRs), and years lived with disabilities numbers or rates. These indicators of VHD-related HF burden for the years 1990, 2019, and the intervening period in G20 countries were extracted from the 2019 GBD study. , The 4 indicators are presented as counts and age-standardized rates per 100 000 individuals, calculated using the GBD world population standard. According to previously published methods, years lived with disabilities were computed as the product of prevalence and disability weight, preserving the severity distributions for each prevalence estimate and applying the corresponding disability weight or combined disability weight using DisMod-MR 2.1. To address uncertainty in the burden estimates, the 95% uncertainty intervals (UIs) for each estimate were derived from 1000 draws of the posterior distribution, calculated as the 2.5 and 97.5 percentiles of this distribution. Moreover, subgroup analyses were further applied based on region, sex (both sexes, men, and women), and age group (<5, 5–14, 15–49, 50–74, 75–84, and ≥85 years of age), as well as by the SDI. The SDI values of G20 countries from 1990 to 2020 are available on the Institute for Health Metrics and Evaluation (IHME)'s data platform. Based on SDI values, the G20 countries were categorized into 5 groups: low SDI (<0.45), low-middle SDI (≥0.45 and <0.61), middle SDI (≥0.61 and <0.69), high-middle SDI (≥0.69 and <0.80), and high SDI (≥0.80).

Statistical Analysis

All data analyses and data visualization in this study were performed using the statistical software R (version 4.2.1) and Origin Pro 2024.
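The SDI grouping defined above reduces to simple threshold checks; a minimal sketch:

```python
def sdi_group(sdi):
    """Assign an SDI value to the quintile bands used in the text:
    low (<0.45), low-middle (<0.61), middle (<0.69),
    high-middle (<0.80), high (>=0.80)."""
    if sdi < 0.45:
        return "low"
    if sdi < 0.61:
        return "low-middle"
    if sdi < 0.69:
        return "middle"
    if sdi < 0.80:
        return "high-middle"
    return "high"
```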
The count, age-standardized rates of prevalence and years lived with disabilities, and their 95% UIs extracted from the 2019 GBD were used to assess the burden of VHD-related HF in G20 countries. Additionally, we used the Bayesian age-period-cohort model to project the trends of VHD-related HF in G20 countries from 2020 to 2030 by using the R package Bayesian age-period-cohort (version 0.0.36). Temporal trends of age-standardized rates from 2020 to 2030 were assessed through the estimated annual percentage change using least squares linear regression, referring to previously published methods. Estimated annual percentage change served as a comprehensive and widely used measure for rate trends over specified intervals. It was calculated by fitting a regression model to the natural logarithm of the rate, namely ln(rate) = α + β × (calendar year) + ε, and estimated annual percentage change is defined as 100 × (exp[β] − 1). Furthermore, the male-to-female ratio (MtoF) was defined as the male ASPRs divided by the female ASPRs; a value >1 indicates male dominance, whereas a value <1 indicates female dominance.

Ethical Approval and Consent to Participate

Ethical approval and consent are not required because this study used publicly available secondary data that were aggregated and nonidentified.
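The estimated annual percentage change defined above is an ordinary least-squares fit on log-transformed rates; a pure-Python sketch of that formula:

```python
import math

def eapc(years, rates):
    """Estimated annual percentage change: fit
    ln(rate) = alpha + beta * year by ordinary least squares and
    return 100 * (exp(beta) - 1)."""
    logs = [math.log(r) for r in rates]
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(logs) / n
    # OLS slope of ln(rate) on calendar year.
    beta = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs))
            / sum((x - mean_x) ** 2 for x in years))
    return 100.0 * (math.exp(beta) - 1.0)
```

A series declining by exactly 2% per year yields an EAPC of −2, since the log-linear slope is ln(0.98).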
Overview of G20 Countries

From 1990 to 2019, the prevalence of NRVHD-related HF in G20 countries increased by 69.59% (Table ), rising from 1.29 million (95% UI, 0.84–1.87) to 2.18 million (95% UI, 1.42–3.24) across all age groups and sexes combined. However, the ASPRs decreased by 26.37%, declining from 48.96 (95% UI, 32.40–70.68) per 100 000 person-years to 36.05 (95% UI, 23.64–53.52) over the 30-year period (Table ). In regard to RVHD-related HF, the prevalence increased by 117.34%, from 0.86 million (95% UI, 0.66–1.10) to 1.86 million (95% UI, 1.44–2.37) (Table ), with the ASPRs increasing by 10.11% from 27.67 (95% UI, 21.50–35.09) to 30.47 (95% UI, 23.70–38.59) (Table ).
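The percentage changes quoted above are simple relative changes. A sketch that reproduces the −26.37% ASPR decline from the rounded figures in the text (the 69.59% case-count rise is only approximated by the rounded millions, since the published value was computed from unrounded estimates):

```python
def pct_change(old, new):
    """Relative change, in percent, between two burden estimates."""
    return 100.0 * (new - old) / old

# ASPR decline: 48.96 -> 36.05 per 100 000 person-years.
aspr_change = pct_change(48.96, 36.05)
# Case counts, rounded to millions: 1.29M -> 2.18M.
count_change = pct_change(1.29, 2.18)
```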
Both NRVHD-related HF and RVHD-related HF showed years lived with disabilities numbers (Tables and ) and rates (Tables and ) accounting for approximately 9.0% of their cases (Figures and ), observed across the overall population as well as the age (Tables and ) and sex (Tables and ) subgroups.

Burden of VHD-Related HF in G20 Countries

NRVHD-Related HF Burden in G20 Countries

In 1990, among the G20 countries, Italy had the highest ASPRs of NRVHD-related HF, reaching 182.43 (95% UI, 118.15–268.30) per 100 000 person-years, followed by the United States at 130.85 (95% UI, 85.68–190.14), Japan at 123.20 (95% UI, 80.02–180.30), the Republic of Korea at 59.72 (95% UI, 36.70–90.77), and the Russian Federation at 52.10 (95% UI, 33.15–76.51). Conversely, India exhibited the lowest ASPR at 1.42 (95% UI, 0.93–2.07), differing from Italy by 128-fold (Figure and Table ). From 1990 to 2019, among G20 countries, 11 nations showed an increasing trend in ASPRs of NRVHD-related HF. The most notable increase was observed in South Africa, showing a rise of 354.78% (95% UI, 266.84–431.09), followed by Germany at 259.47% (95% UI, 113.63–389.90), Australia at 170.62% (95% UI, 93.80–269.09), and Mexico at 108.60% (95% UI, 86.05–126.28). Among the 8 G20 nations exhibiting a decreasing trend, Japan experienced the most prominent decline at −42.18% (95% UI, −55.07 to −26.33), followed by the Republic of Korea at −39.86% (95% UI, −53.40 to −22.33) and Italy at −39.21% (95% UI, −54.00 to −20.95) (Figure and Table ). Despite significant decreases over the past 30 years, the ASPRs of Italy, the United States, Japan, Canada, and the Republic of Korea were still high, with Italy remaining >100 per 100 000 person-years. In contrast, Mexico, South Africa, and Germany showed notable increases but still maintained lower ASPRs, especially Mexico, which remained <10 per 100 000 person-years (Figure and Table ).
RVHD-Related HF Burden in G20 Countries

In 1990, among the G20 countries, China had the highest ASPRs of RVHD-related HF, standing at 43.37 (95% UI, 33.97–55.17) per 100 000 person-years, followed by Canada at 36.72 (95% UI, 26.96–48.20), India at 33.16 (95% UI, 25.04–42.79), and France at 32.56 (95% UI, 26.60–39.33). Conversely, Brazil exhibited the lowest ASPR at 6.09 (95% UI, 4.25–8.31), differing from China by a factor of 7 (Figure and Table ). From 1990 to 2019, among G20 countries, 8 nations showed an increasing trend in ASPRs of RVHD-related HF. The most notable increase was observed in Italy, with a rise of 20.53% (95% UI, 4.29–43.70), followed by China at 12.03% (95% UI, 9.04–15.04), and Saudi Arabia at 10.67% (95% UI, 1.87–19.83). Among the 11 nations of the G20 showing a decreasing trend, Canada experienced the most prominent decline at −39.61% (95% UI, −48.86 to −28.80), followed by France at −36.93% (95% UI, −48.70 to −22.99), and the United Kingdom at −32.06% (95% UI, −38.88 to −23.23), along with Australia at −31.34% (95% UI, −42.76 to −17.93) (Figure and Table ). Despite significant decreases over the past 30 years, the ASPRs of Canada, France, and Australia were still high. Conversely, Saudi Arabia showed notable increases but still maintained lower ASPRs. Notably, in both 1990 and 2019, China, Italy, and India had high and increasing ASPRs of RVHD-related HF over the 30-year period, with China remaining at >40 per 100 000 person-years. In contrast, the United Kingdom, Germany, Turkey, Japan, and the Republic of Korea showed low and significantly decreasing rates over the 30 years, with the Republic of Korea remaining <10 per 100 000 person-years (Figure and Table ).

Burden of VHD-Related HF by Age Groups

In 2019, the prevalence rates of NRVHD-related HF in G20 countries increased with age and peaked at ≥85 years of age (Figure and Table ).
The country with the highest prevalence rate was the United States, with 2880.30 (95% UI, 1924.61–4162.05) per 100 000 person-years, followed by Italy at 2614.85 (95% UI, 1702.52–3955.89) and Japan at 1899.37 (95% UI, 1233.20–2804.60). On the other hand, the highest prevalence rates of RVHD-related HF varied among the G20 countries. Except for China, India, the Russian Federation, Mexico, Argentina, and Turkey, which peaked in the age group of 75 to 84 years of age, all reached their highest levels among individuals ≥85 years of age (Figure and Table ).

Burden of VHD-Related HF by Sex Groups

In addition to age heterogeneity, there was sex heterogeneity in NRVHD-related HF among G20 countries. Some countries exhibited higher ASPRs in men, whereas others showed higher rates in women. In 2019, G20 countries with higher ASPRs in men included South Africa (MtoF=17.42), the Russian Federation (MtoF=2.23), and Mexico (MtoF=1.59) (Figure and Table ). Countries such as Argentina and Brazil experienced a transition from a higher proportion of women in 1990 to a higher proportion of men in 2019, with the MtoF changing from 0.93 to 1.35 for Argentina and from 0.94 to 1.45 for Brazil. Remarkably, over the 30-year period, South Africa witnessed the most pronounced increase in ASPRs among men, with a surge >6 times and a shift in the MtoF from 2:1 to 17:1 (Table ). In the remaining G20 countries, women consistently exhibited higher ASPRs for NRVHD-related HF over the 30-year period, with Japan (MtoF=0.50), Turkey (MtoF=0.57), and Italy (MtoF=0.69) being particularly notable (Figure and Table ). From 1990 to 2019, Germany experienced a >3-fold increase in ASPRs among women. Women in Japan underwent a larger decrease in ASPRs compared with men, but in 2019 female rates remained notably higher (Table ). With the exception of Indonesia (MtoF=1.35), RVHD-related HF exhibited higher ASPRs in women across nearly all G20 countries.
Among these, South Africa (MtoF=0.36), Turkey (MtoF=0.37), the Russian Federation (MtoF=0.39), and Australia (MtoF=0.49) were particularly notable (Figure and Table ). Association of VHD ‐Related HF With Age and SDI The prevalence (eg, ASPRs) of NRVHD‐related HF escalated with age and SDI quintiles, peaking among individuals ≥85 years of age in high‐SDI countries both in 1990 (Figure ) and 2019 (Figure ). Moreover, the prevalence rates in each age group for both high‐middle‐SDI and high‐SDI countries in 2019 were lower than those in 1990. On the other hand, the prevalence rates of RVHD‐related HF followed a similar trend, increasing with age and SDI quintiles both in 1990 (Figure ) and 2019 (Figure ), peaking among individuals ≥85 years of age in high‐SDI countries. However, the trend of prevalence rates differed in populations <74 years of age, where the highest rates were observed in middle‐SDI countries, with the lowest prevalence rates in low‐ and high‐SDI countries. Among the same SDI countries, prevalence rates of RVHD‐related HF, except for high‐middle‐SDI and high‐SDI countries, where rates peaked among those ≥85 years of age, were highest among individuals 75 to 84 years of age. Additionally, prevalence rates in each age group of high‐SDI countries in 2019 were lower than in 1990. Similar to ASPRs, comparable trends were observed in years lived with disabilities (Figure ). Projections in G20 Countries Over the Next 10 Years Based on the 2019 GBD study, we projected future trends in ASPRs of VHD‐related HF in G20 countries for the next 10 years (from 2020 to 2030). Based on the predicted change trends of NRVHD‐ and RVHD‐related HF burden, we divide G20 countries into 6 types (Data , Figure , and Figure ), and the 10 G20 countries with significant changes are as shown in Figure . 
G20 Countries

From 1990 to 2019, the prevalence of NRVHD‐related HF in G20 countries increased by 69.59% (Table ), rising from 1.29 million (95% UI, 0.84–1.87) to 2.18 million (95% UI, 1.42–3.24) across all age groups and sexes combined. However, the ASPRs decreased by 26.37%, declining from 48.96 (95% UI, 32.40–70.68) per 100 000 person‐years to 36.05 (95% UI, 23.64–53.52) over the 30‐year period (Table ). For RVHD‐related HF, the prevalence increased by 117.34%, from 0.86 million (95% UI, 0.66–1.10) to 1.86 million (95% UI, 1.44–2.37) (Table ), with the ASPRs increasing by 10.11%, from 27.67 (95% UI, 21.50–35.09) to 30.47 (95% UI, 23.70–38.59) (Table ). For both NRVHD‐related and RVHD‐related HF, years lived with disability numbers (Tables and ) and rates (Tables and ) accounted for approximately 9.0% of cases (Figures and ), a pattern observed in the overall population as well as in the age (Tables and ) and sex (Tables and ) subgroups.

VHD‐Related HF in G20 Countries

NRVHD‐Related HF Burden in G20 Countries

In 1990, among the G20 countries, Italy had the highest ASPRs of NRVHD‐related HF, reaching 182.43 (95% UI, 118.15–268.30) per 100 000 person‐years, followed by the United States at 130.85 (95% UI, 85.68–190.14), Japan at 123.20 (95% UI, 80.02–180.30), the Republic of Korea at 59.72 (95% UI, 36.70–90.77), and the Russian Federation at 52.10 (95% UI, 33.15–76.51). Conversely, India exhibited the lowest ASPR at 1.42 (95% UI, 0.93–2.07), differing from Italy by 128‐fold (Figure and Table ). From 1990 to 2019, 11 G20 nations showed an increasing trend in ASPRs of NRVHD‐related HF. The most notable increase was observed in South Africa, with a rise of 354.78% (95% UI, 266.84–431.09), followed by Germany at 259.47% (95% UI, 113.63–389.90), Australia at 170.62% (95% UI, 93.80–269.09), and Mexico at 108.60% (95% UI, 86.05–126.28).
Among the 8 G20 nations exhibiting a decreasing trend, Japan experienced the most prominent decline at −42.18% (95% UI, −55.07 to −26.33), followed by the Republic of Korea at −39.86% (95% UI, −53.40 to −22.33) and Italy at −39.21% (95% UI, −54.00 to −20.95) (Figure and Table ). Despite significant decreases over the past 30 years, the ASPRs of Italy, the United States, Japan, Canada, and the Republic of Korea were still high, with Italy remaining >100 per 100 000 person‐years. In contrast, Mexico, South Africa, and Germany showed notable increases but still maintained lower ASPRs, especially Mexico, which remained <10 per 100 000 person‐years (Figure and Table ).

RVHD‐Related HF Burden in G20 Countries

In 1990, among the G20 countries, China had the highest ASPRs of RVHD‐related HF, standing at 43.37 (95% UI, 33.97–55.17) per 100 000 person‐years, followed by Canada at 36.72 (95% UI, 26.96–48.20), India at 33.16 (95% UI, 25.04–42.79), and France at 32.56 (95% UI, 26.60–39.33). Conversely, Brazil exhibited the lowest ASPR at 6.09 (95% UI, 4.25–8.31), differing from China by a factor of 7 (Figure and Table ). From 1990 to 2019, 8 G20 nations showed an increasing trend in ASPRs of RVHD‐related HF. The most notable increase was observed in Italy, with a rise of 20.53% (95% UI, 4.29–43.70), followed by China at 12.03% (95% UI, 9.04–15.04) and Saudi Arabia at 10.67% (95% UI, 1.87–19.83). Among the 11 G20 nations showing a decreasing trend, Canada experienced the most prominent decline at −39.61% (95% UI, −48.86 to −28.80), followed by France at −36.93% (95% UI, −48.70 to −22.99), the United Kingdom at −32.06% (95% UI, −38.88 to −23.23), and Australia at −31.34% (95% UI, −42.76 to −17.93) (Figure and Table ). Despite significant decreases over the past 30 years, the ASPRs of Canada, France, and Australia were still high. Conversely, Saudi Arabia showed notable increases but still maintained lower ASPRs.
Notably, in both 1990 and 2019, China, Italy, and India had high and increasing ASPRs of RVHD‐related HF over the 30‐year period, with China remaining at >40 per 100 000 person‐years. In contrast, the United Kingdom, Germany, Turkey, Japan, and the Republic of Korea showed low and significantly decreasing rates over the 30 years, with the Republic of Korea remaining <10 per 100 000 person‐years (Figure and Table ).

Burden of VHD‐Related HF by Age Groups

In 2019, the prevalence rates of NRVHD‐related HF in G20 countries increased with age and peaked at ≥85 years of age (Figure and Table ). The country with the highest prevalence rate was the United States, with 2880.30 (95% UI, 1924.61–4162.05) per 100 000 person‐years, followed by Italy at 2614.85 (95% UI, 1702.52–3955.89) and Japan at 1899.37 (95% UI, 1233.20–2804.60). In contrast, the age at which prevalence rates of RVHD‐related HF peaked varied among the G20 countries: except for China, India, the Russian Federation, Mexico, Argentina, and Turkey, which peaked at 75 to 84 years of age, all reached their highest levels among individuals ≥85 years of age (Figure and Table ).

Burden of VHD‐Related HF by Sex Groups

In addition to age heterogeneity, there was sex heterogeneity in NRVHD‐related HF among G20 countries: some countries exhibited higher ASPRs in men, whereas others showed higher rates in women. In 2019, G20 countries with higher ASPRs in men included South Africa (MtoF=17.42), the Russian Federation (MtoF=2.23), and Mexico (MtoF=1.59) (Figure and Table ). Countries such as Argentina and Brazil transitioned from higher rates in women in 1990 to higher rates in men in 2019, with MtoF ratios changing from 0.93 to 1.35 for Argentina and from 0.94 to 1.45 for Brazil. Remarkably, over the 30‐year period, South Africa witnessed the most pronounced increase in ASPRs among men, with a surge of >6‐fold and a shift in the MtoF ratio from 2:1 to 17:1 (Table ).
In the remaining G20 countries, women consistently exhibited higher ASPRs of NRVHD‐related HF over the 30‐year period, with Japan (MtoF=0.50), Turkey (MtoF=0.57), and Italy (MtoF=0.69) being particularly notable (Figure and Table ). From 1990 to 2019, Germany experienced a >3‐fold increase in ASPRs among women. Women in Japan underwent a larger decrease in ASPRs than men, but in 2019 female rates remained notably higher (Table ). With the exception of Indonesia (MtoF=1.35), RVHD‐related HF exhibited higher ASPRs in women across nearly all G20 countries; South Africa (MtoF=0.36), Turkey (MtoF=0.37), the Russian Federation (MtoF=0.39), and Australia (MtoF=0.49) were particularly notable (Figure and Table ).

Association of VHD‐Related HF With Age and SDI

The prevalence (eg, ASPRs) of NRVHD‐related HF escalated with age and SDI quintile, peaking among individuals ≥85 years of age in high‐SDI countries in both 1990 (Figure ) and 2019 (Figure ). Moreover, the prevalence rates in each age group for both high‐middle‐SDI and high‐SDI countries were lower in 2019 than in 1990. The prevalence rates of RVHD‐related HF followed a similar trend, increasing with age and SDI quintile in both 1990 (Figure ) and 2019 (Figure ) and peaking among individuals ≥85 years of age in high‐SDI countries. However, the pattern differed in populations <74 years of age, where the highest rates were observed in middle‐SDI countries and the lowest rates in low‐ and high‐SDI countries. Within SDI quintiles, prevalence rates of RVHD‐related HF were highest among individuals 75 to 84 years of age, except in high‐middle‐SDI and high‐SDI countries, where rates peaked among those ≥85 years of age. Additionally, prevalence rates in each age group of high‐SDI countries were lower in 2019 than in 1990. Comparable trends were observed for years lived with disability (Figure ).
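The fold differences, percent changes, and male‐to‐female (MtoF) ratios quoted in these results are simple arithmetic on the reported point estimates. A minimal sketch reproducing a few of them (point estimates only, ignoring the 95% uncertainty intervals):

```python
# Illustrative arithmetic on the point estimates quoted in the text.
# These helpers operate on point estimates only and ignore the 95% UIs.

def fold_difference(high: float, low: float) -> float:
    """Ratio of the higher ASPR to the lower one."""
    return high / low

def percent_change(start: float, end: float) -> float:
    """Relative change from start to end, in percent."""
    return (end - start) / start * 100.0

# NRVHD-related HF, 1990: Italy 182.43 vs India 1.42 per 100 000 person-years.
print(round(fold_difference(182.43, 1.42)))    # -> 128 ("128-fold")

# RVHD-related HF, 1990: China 43.37 vs Brazil 6.09.
print(round(fold_difference(43.37, 6.09)))     # -> 7 ("a factor of 7")

# G20 NRVHD-related HF ASPRs, 1990 -> 2019: 48.96 -> 36.05 per 100 000.
print(round(percent_change(48.96, 36.05), 2))  # -> -26.37 ("decreased by 26.37%")

# An MtoF ratio > 1 means the male ASPR exceeds the female ASPR;
# Argentina's shift from 0.93 (1990) to 1.35 (2019) is such a transition.
```

The same two helpers cover every "n‐fold" and "rise/decline of x%" statement in this section.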
Projections in G20 Countries Over the Next 10 Years

Based on the 2019 GBD study, we projected future trends in ASPRs of VHD‐related HF in G20 countries for the next 10 years (2020–2030). Based on the predicted trends in NRVHD‐ and RVHD‐related HF burden, we divided the G20 countries into 6 types (Data , Figure , and Figure ); the 10 G20 countries with significant changes are shown in Figure .

The prevention and treatment of VHD‐related HF continue to be pressing and unresolved global public health challenges. This study, for the first time, revealed the substantial burden of VHD‐related HF in G20 countries and showed heterogeneity by age, sex, and SDI, reflecting differences among G20 countries in lifestyle, economic development, population aging, and health care resources.
Deteriorating Burden of NRVHD‐Related HF

High‐Burden Countries of NRVHD‐Related HF and Its Possible Causes

This study revealed a rising trend of NRVHD‐related HF burden across G20 countries, escalating with age and SDI and displaying sex heterogeneity. The ASPRs of NRVHD‐related HF exhibited a 26.4% decrease over the past 30 years, but the all‐age prevalence increased by 29.1% (Table ). A possible explanation is that NRVHD and HF are strongly influenced by age. , Using age‐standardized rates offsets the impact of higher rates in older populations and of population aging, resulting in the declining trend. In 2019, the G20 countries with the highest burden of NRVHD‐related HF were, in order, Italy, the United States, the Russian Federation, Japan, and Australia (Table ). Italy, the G20 country with the most substantial burden, is primarily affected by aging; its ASPRs of NRVHD were about twice the global average, and its mortality rate was 4 times higher. Notably, Italy, Japan, and the United States continued to bear a high burden of NRVHD‐related HF in 2019, but this burden showed a significant decreasing trend over the past 30 years, possibly linked to the successful implementation of primary prevention measures for NRVHD, including addressing population aging, promoting smoking cessation, encouraging regular exercise, and increasing investment in health care. More importantly, the Russian Federation, another G20 country facing a substantial burden of NRVHD‐related HF, demonstrated a persistently increasing trend over the past 30 years. Possible explanations include excessive alcohol consumption and a high prevalence of hypertension among Russians. In addition, inadequate prevention and control of NRVHD, characterized by insufficient health financing and limited expertise in cardiac surgery, contributed to an elevated HF risk.
Despite a marginal change (<1%) over the same period, the burden of NRVHD‐related HF in the United Kingdom remained substantial in 2019, attributed to underdiagnosis and the absence of timely and effective interventions among patients with VHD.

Low‐Burden Countries of NRVHD‐Related HF and Its Possible Causes

Conversely, the G20 countries with the least burden of NRVHD‐related HF in 2019 were, in order, India, Brazil, Mexico, France, and Indonesia (Table ). With the exception of France, these countries were distinguished by lower levels of population aging and economic development, resulting in comparatively lower exposure to the cardiovascular disease epidemic. India, the country with the lowest burden of NRVHD‐related HF, exhibited distinct characteristics such as a lower prevalence of NRVHD, a higher mortality rate (indicative of a shorter average life expectancy), and a slower aging process. A comparable phenomenon was noted in Brazil: the most recent epidemiological data revealed an upward trajectory in the prevalence and crude mortality rates of NRVHD in Brazil during 1990 to 2019, but HF is not currently the leading cause of death in that country. In Mexico, the relatively low burden of NRVHD‐related HF can be attributed primarily to the widespread adoption of valve surgery, which effectively slows the progression of NRVHD. Indonesia, in turn, benefited from effective primary prevention strategies, including improved access to cardiovascular health care and the promotion of healthy lifestyles such as smoking cessation and regular exercise; these measures effectively controlled cardiovascular diseases such as NRVHD. On the other hand, as the primary complication of NRVHD, HF takes a relatively long time to develop, which suggests that HF will become a significant challenge in India, Brazil, Mexico, and Indonesia.
Interestingly, France, characterized by a substantial aging population and multiple NRVHD risk factors, surprisingly exhibited a relatively low burden of NRVHD‐related HF, similar to the French paradox seen in coronary heart disease. The cultural background and lifestyle in France, such as moderate wine consumption, a low‐fat, high‐fiber diet, and regular exercise, may contribute to this. However, even in low‐burden G20 countries there has been a rising trend in NRVHD‐related HF over the past 30 years, linked to long‐term exposure to cardiovascular risk factors and aging, although this impact was less severe than in countries with a higher burden of NRVHD‐related HF.

Characteristics of Sex and SDI on NRVHD‐Related HF Burden

The burden of NRVHD‐related HF in G20 countries exhibited a positive correlation with age, peaking in individuals ≥85 years of age (Table ), as well as sex heterogeneity (Table ). Women generally experienced a more severe burden than men in most G20 countries, most pronounced in Japan (MtoF=0.50) and Italy (MtoF=0.69), possibly because of women's higher susceptibility to conditions such as mitral valve prolapse and HF. Conversely, men had a higher burden in South Africa (MtoF=17.42) and the Russian Federation (MtoF=2.23), linked to less timely interventions and greater exposure to cardiovascular risk factors. The NRVHD‐related HF burden also correlated positively with SDI, even among regions of the same country. Beydoun et al revealed that NRVHD prevalence in the most developed regions of the United States was 1.16 times higher than in less‐developed areas, likely due to advanced detection and screening and a heavier load of risk factors for NRVHD (eg, aging, hypertension, dyslipidemia, and obesity).
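The age‐standardized rates underpinning these comparisons weight age‐specific rates by a fixed standard population, which is what offsets differences in age structure and population aging. A minimal sketch of direct standardization, with hypothetical age bands, rates, and weights (not the actual GBD standard population):

```python
# Direct age standardization: weight age-specific rates by a FIXED standard
# population so comparisons are not driven by differing age structures.
# All numbers below are hypothetical illustrations, not GBD data.

def weighted_rate(rates, weights):
    """Sum of age-specific rates weighted by population shares (weights sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(rates, weights))

# Hypothetical prevalence per 100 000 in four age bands: <65, 65-74, 75-84, >=85.
rates = [5.0, 120.0, 600.0, 2500.0]       # same age-specific rates in both countries

weights_young = [0.90, 0.06, 0.03, 0.01]  # younger population structure
weights_old   = [0.70, 0.15, 0.10, 0.05]  # older population structure
weights_std   = [0.80, 0.10, 0.07, 0.03]  # fixed standard population

crude_young = weighted_rate(rates, weights_young)  # lower crude rate
crude_old   = weighted_rate(rates, weights_old)    # much higher crude rate
asr         = weighted_rate(rates, weights_std)    # identical ASR for both countries

print(crude_young, crude_old, asr)  # crude rates differ; the ASR does not
```

With identical age‐specific rates, the two countries' crude rates diverge sharply (the older population looks far sicker), while the age‐standardized rate is the same for both, which is why the ASPR trends above can decline even as all‐age prevalence rises.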
Underestimated Burden of RVHD‐Related HF

High‐Burden Countries of RVHD‐Related HF and Its Possible Causes

Contrary to the prevailing notion that RVHD is gradually declining, RVHD remains the predominant form of VHD globally, ranking as the third leading cause of HF. This study found that the ASPRs of RVHD‐related HF in G20 countries showed a concerning 10% increase over the past 30 years, and that the G20 countries with the highest burden of RVHD‐related HF in 2019 were, in order, China, India, Italy, South Africa, and Canada (Table ). In developing countries (eg, China, India, and South Africa) and underdeveloped regions of developed countries (eg, Canada), the primary factors contributing to this burden were a substantial historical load of RVHD and inadequately distributed health financing. Canada, a developed country grappling with a substantial burden of RVHD‐related HF, was significantly influenced by historical factors contributing to a notable prevalence of RVHD among its residents. In Italy, another developed country, RVHD‐related HF was associated with inadequate monitoring of RVHD among immigrants; the presence of a considerable number of borderline RVHD cases in Italy was attributed to the influx of refugees and migrants. Contrary to previous research describing a declining trend of RVHD in Western countries, Italy faced a tremendous hidden threat beneath its seemingly calm surface. In China, approximately 55.1% of patients with VHD had RVHD and faced a 5.17‐fold higher risk of HF compared with the general population. , Similarly, in India, not only was there a high prevalence of RVHD, but there was also a significant regional disparity in RVHD‐related HF between the northern and southern parts of the country. , Despite comparable RVHD burdens in China and India, China exhibited a higher burden of RVHD‐related HF, possibly related to variations in RVHD mortality.
From 1990 to 2015, RVHD‐related deaths decreased by 71% in China, whereas the decline was only 18% in India, resulting in a relatively heightened risk of HF. It is noteworthy that South Africa, the sole representative of the African region among the G20 countries, faced a significant burden of RVHD‐related HF. This issue was not unique to South Africa but was prevalent across other African countries and the entire continent, albeit with regional variations due to differences in health policies, medical resources, and population structure. , To address this challenge, African Union countries implemented significant initiatives against RVHD in 2015, including establishing disease registers, penicillin treatment, early echocardiographic screening, launching national multisectoral RVHD programs, and fostering international partnerships with multinational organizations. Despite these efforts, South Africa continued to bear a significant burden of RVHD‐related HF in 2019, and the overall prevalence of RVHD remained at 14.67% in East Africa. The efficacy of these measures therefore requires validation over time.

Low‐Burden Countries of RVHD‐Related HF and Its Possible Causes

Conversely, the 5 G20 countries with the lowest RVHD‐related HF burden in 2019 were, in order, the Republic of Korea, Brazil, Turkey, Indonesia, and the Russian Federation (Table ). Consistent with previous studies indicating lower RVHD prevalence in Southeast Asia and high‐income Asia–Pacific countries, there was a decreasing trend over the past 30 years. This decline may be attributed to the active implementation of preventive and therapeutic measures against RVHD and HF, such as echocardiographic screening of schoolchildren and proactive prevention and treatment of HF.
Characteristics of Age and Sex on RVHD‐Related HF Burden

Distinct peak ages were observed for NRVHD‐ and RVHD‐related HF, with NRVHD peaking at ≥85 years of age and RVHD a decade earlier in countries such as China, India, the Russian Federation, Mexico, Argentina, and Turkey (Table ), consistent with the findings of Adhikari et al. Regions with high RVHD burdens, such as those in Asia and South America, experienced an average age of HF onset 7 years earlier than high‐income regions. RVHD contributes to the early onset of HF not only through damage to the valves and myocardium but also through complications such as coronary heart disease and atrial fibrillation. Additionally, our findings revealed a higher burden of RVHD‐related HF in women than in men (Table ). This increased burden may be due to greater autoimmune susceptibility, higher vulnerability to streptococcal infection, and less equitable health care access, as well as lower RVHD mortality in women.

Future Prospects and Potential Intervention Strategies for VHD‐Related HF Burden

Comparing the predicted trends in future VHD‐related HF ASPRs with the trend changes from 1990 to 2019, our findings show that the burden of VHD‐related HF may reverse in some G20 countries. Despite decreases over the past 30 years, countries such as Italy, Japan, China, the Republic of Korea, and Canada may witness an exacerbation of NRVHD‐related HF in the next 10 years owing to the prevalence of factors such as coronary heart disease, hypertension, diabetes, dyslipidemia, and obesity. As for the RVHD‐related HF burden, Japan, Australia, France, Germany, and the United Kingdom experienced decreases over the past 30 years; however, the predictions indicate an upward trend, potentially linked to inadequate intervention in remote or economically disadvantaged regions of these developed countries, especially those with large immigrant populations.
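This excerpt does not describe the projection model behind the 10‐year forecasts. Purely as a generic illustration of trend extrapolation, and explicitly not the study's method, a log‐linear fit to a hypothetical ASPR series could be sketched as:

```python
# Generic trend extrapolation sketch: least-squares fit of log(rate) on year,
# then exponentiate at the target year. The series below is invented and is
# NOT a GBD estimate; the study's actual projection method is not shown here.
import math

def project_log_linear(years, rates, target_year):
    """Least-squares fit of log(rate) on year, evaluated at target_year."""
    logs = [math.log(r) for r in rates]
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(logs) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs))
             / sum((x - mean_x) ** 2 for x in years))
    intercept = mean_y - slope * mean_x
    return math.exp(intercept + slope * target_year)

# Hypothetical ASPR series (per 100 000 person-years) for 2015-2019,
# a gently declining trend, extrapolated to 2030.
years = [2015, 2016, 2017, 2018, 2019]
rates = [31.0, 30.5, 30.1, 29.6, 29.2]
print(round(project_log_linear(years, rates, 2030), 1))  # continues the decline
```

Working on the log scale keeps the projected rate positive and models a constant relative (rather than absolute) annual change, which is the usual convention for rate trends.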
Considering current and future trends in the burden of NRVHD‐ and RVHD‐related HF, a health care strategy focusing on primary prevention, supplemented by secondary prevention, may be effective. (1) Primary prevention: it is crucial to establish a primary health care–centered model. For NRVHD, promote a healthy lifestyle to address aging and improve health care access; for RVHD, conduct rheumatic fever and echocardiographic screening in schoolchildren, particularly in economically disadvantaged areas or areas with large immigrant populations. (2) Secondary prevention: early treatment of patients with VHD is key to reducing HF risk. Most patients with VHD are in the high‐risk phase, in which HF risk rises sharply. Guidelines suggest neurohormonal blockade therapy (eg, renin‐angiotensin‐aldosterone system inhibitors, sodium‐glucose cotransporter 2 inhibitors) in patients with VHD only after the onset of HF or when symptoms persist after surgery; however, extending this treatment to the high‐risk period of VHD before HF could be considered. In addition, sex disparities in HF treatment should be addressed to ensure that women have equitable access to care.

Advantages and Limitations

To the best of our knowledge, this was the first study characterizing the burden of VHD‐related HF in G20 countries and predicting the trends for the next 10 years. Limitations included data accuracy issues due to health care disparities, time lags arising from the GBD study's extensive scope, and incomplete subnational‐level data. Thus, the results should be interpreted with caution. Furthermore, the burden of RVHD‐related HF in African countries beyond South Africa requires special attention.
The ASPRs of NRVHD‐related HF exhibited a 26.4% decrease over the past 30 years, but the all‐age prevalence increased by 29.1% (Table ). A possible explanation was that NRVHD and HF were significantly influenced by age. , Using age‐standardized rates helped offset the impact of higher rates in older populations and population aging, resulting in a declining trend. In 2019, G20 countries with the highest burden of NRVHD‐related HF were Italy, the United States, the Russian Federation, Japan, and Australia, sequentially (Table ). Italy, as the G20 country with the most substantial burden, is primarily influenced by aging. Under the circumstances, the ASPRs of NRVHD in Italy were about twice as high, and the mortality rate was 4 times higher than the global average. Notably, Italy, Japan, and the United States continued to bear a high burden of NRVHD‐related HF in 2019, but there was a significant decreasing trend in this burden over the past 30 years. It was possibly linked to the successful implementation of primary measures for NRVHD. These measures included addressing population aging, promoting smoking cessation, encouraging regular exercise, and increasing investment in health care. More importantly, the Russian Federation, another G20 country facing a substantial burden of NRVHD‐related HF, demonstrated a persistent increasing trend over the past 30 years. Possible explanations included the detrimental effects among Russians: excessive alcohol consumption and a high prevalence of hypertension. On the other hand, inadequate prevention and control of NRVHD, characterized by insufficient health financing and limited expertise in cardiac surgery, contributed to an elevated HF risk. Despite a marginal change (<1%) over the same period, the burden of NRVHD‐related HF in the United Kingdom remained substantial in 2019, attributed to underdiagnosis and the absence of timely and effective interventions among patients with VHD. 
Low‐Burden Countries of NRVHD‐Related HF and Its Possible Causes
Conversely, the G20 countries experiencing the least burden of NRVHD‐related HF in 2019 were, in order, India, Brazil, Mexico, France, and Indonesia (Table ). With the exception of France, these countries were characterized by lower levels of population aging and economic development, resulting in comparatively lower exposure to the cardiovascular disease epidemic. India, the country with the lowest burden of NRVHD‐related HF, exhibited distinct characteristics such as a lower prevalence of NRVHD, a higher mortality rate (indicative of a shorter average life expectancy), and a slower aging process. A comparable phenomenon was noted in Brazil: the most recent epidemiological data revealed an upward trajectory in the prevalence and crude mortality rates of NRVHD in Brazil during 1990 to 2019, but HF is not currently the leading cause of death in that country. In Mexico, the relatively low burden of NRVHD‐related HF can be primarily attributed to the widespread adoption of valve surgery, which effectively slows the progression of NRVHD. Indonesia, in turn, benefited from effective primary prevention strategies, including improved access to cardiovascular health care and the promotion of healthy lifestyles such as smoking cessation and regular exercise; these measures have effectively controlled cardiovascular diseases such as NRVHD. On the other hand, because HF, the primary complication of NRVHD, takes a relatively long time to develop, HF is likely to become a significant challenge in India, Brazil, Mexico, and Indonesia. Interestingly, France, despite a substantially aged population and multiple NRVHD risks, exhibited a relatively low burden of NRVHD‐related HF, a paradox similar to that seen in France for coronary heart disease.
The cultural background and lifestyle in France, such as moderate wine consumption, a low‐fat, high‐fiber diet, and regular exercise, may contribute to this. However, even in low‐burden G20 countries, there has been a rising trend in NRVHD‐related HF over the past 30 years, linked to long‐term exposure to cardiovascular risk factors and aging. This impact was less severe compared with countries with a higher burden of NRVHD‐related HF.
Characteristics of Sex and SDI on NRVHD‐Related HF Burden
The burden of NRVHD‐related HF in G20 countries exhibited a positive correlation with age, peaking in individuals ≥85 years of age (Table ), and a sex heterogeneity (Table ). Women generally experienced a more severe burden than men in most G20 countries, especially pronounced in Japan (MtoF=0.50) and Italy (MtoF=0.69), possibly due to higher susceptibility of women to conditions like mitral valve prolapse and HF. Conversely, men had a higher burden in South Africa (MtoF=17.42) and the Russian Federation (MtoF=2.23), linked to less timely interventions and greater exposure to cardiovascular risk factors. On the other hand, NRVHD‐related HF burden also positively correlated with SDI, even within regions of the same country. Beydoun et al revealed that NRVHD prevalence in the most developed regions of the United States was 1.16 times higher than in the less‐developed areas, likely due to advanced detection, screening, and more severe risk factors for NRVHD (eg, aging, hypertension, dyslipidemia, and obesity).
RVHD‐Related HF
High‐Burden Countries of RVHD‐Related HF and Its Possible Causes
Contrary to the prevailing notion that RVHD was gradually declining, RVHD remained the predominant form of VHD globally, ranking as the third leading cause of HF. This study found that the ASPRs of RVHD‐related HF in G20 countries showed a concerning 10% increase over the past 30 years, and also found that the G20 countries with the highest burden of RVHD‐related HF in 2019 were China, India, Italy, South Africa, and Canada, sequentially (Table ).
In developing countries (eg, China, India, and South Africa) and underdeveloped regions of developed countries (eg, Canada), the primary factors contributing to this burden were a substantial historical load of RVHD as well as inadequately distributed health financing. Canada, as one of the developed countries grappling with a substantial burden of RVHD‐related HF, was significantly influenced by historical factors, contributing to a notable prevalence of RVHD among its residents. In the case of Italy, another developed country, RVHD‐related HF was associated with inadequate monitoring of RVHD among immigrants. The presence of a considerable number of borderline RVHD cases in Italy was attributed to the influx of refugees and migrants. Contrary to previous research describing a declining trend of RVHD in Western countries, Italy faced a tremendous hidden threat beneath its seemingly calm surface. In China, approximately 55.1% of VHD cases were attributable to RVHD, and these patients faced a 5.17‐fold higher risk of HF compared with the general population. Similarly, in India, not only was there a high prevalence of RVHD, but there was also a significant regional disparity in RVHD‐related HF between the northern and southern parts of the country. Despite comparable RVHD burdens in China and India, China exhibited a higher burden of RVHD‐related HF, possibly related to variations in RVHD mortality: from 1990 to 2015, RVHD‐related death decreased by 71% in China, whereas the decline was only 18% in India, resulting in a relatively heightened risk of HF. It was noteworthy that South Africa, as the sole representative of the African region in the G20 countries, faced a significant burden of RVHD‐related HF. However, this issue was not unique to South Africa but was prevalent across other African countries and even the entire continent, albeit with regional variations due to differences in health policies, medical resources, and population structure.
To address this challenge, African Union countries implemented significant initiatives against RVHD in 2015, including establishing disease registers, penicillin treatment, early echocardiographic screening, launching national multisectoral RVHD programs, and fostering international partnerships with multinational organizations. Despite these efforts, South Africa continued to bear a significant burden of RVHD‐related HF in 2019, and the overall prevalence of RVHD remained at 14.67% in East Africa. Therefore, the efficacy of these measures requires validation over time.
Low‐Burden Countries of RVHD‐Related HF and Its Possible Causes
Conversely, the 5 G20 countries with the lowest RVHD‐related HF burden in 2019 were the Republic of Korea, Brazil, Turkey, Indonesia, and the Russian Federation, respectively (Table ). Consistent with previous studies indicating lower RVHD prevalence in Southeast Asia and high‐income Asia–Pacific countries, there was a decreasing trend over the past 30 years. This decline may be attributed to the active implementation of preventive and therapeutic measures against RVHD and HF, such as echocardiographic screening of schoolchildren and proactive prevention and treatment of HF.
Characteristics of Age and Sex on RVHD‐Related HF Burden
Distinct peak ages were observed for NRVHD‐ and RVHD‐related HF, with NRVHD peaking at ≥85 years of age and RVHD a decade earlier in countries such as China, India, the Russian Federation, Mexico, Argentina, and Turkey (Table ), which was consistent with the findings of Adhikari et al. Regions with high RVHD burdens, such as those in Asia and South America, experienced an average age of HF onset that was 7 years earlier compared with high‐income regions. RVHD contributed to the early onset of HF, not only because of damage to the valves or myocardium but also because of complications such as coronary heart disease and atrial fibrillation.
Additionally, our findings revealed a higher burden of RVHD‐related HF in women compared with men (Table ). This increased burden may be due to greater autoimmune susceptibility, greater vulnerability to streptococcal infection, and less equitable access to health care, as well as lower RVHD mortality in women.
Future Prospects and Potential Intervention Strategies for VHD‐Related HF Burden
Based on the comparison of the predicted trends in future VHD‐related HF ASPRs with the trend changes from 1990 to 2019, our findings show that the burden of VHD‐related HF may reverse direction in some G20 countries. Despite a decrease over the past 30 years, countries such as Italy, Japan, China, the Republic of Korea, and Canada may witness an exacerbation of NRVHD‐related HF in the next 10 years owing to the prevalence of factors such as coronary heart disease, hypertension, diabetes, dyslipidemia, and obesity. With regard to the RVHD‐related HF burden, Japan, Australia, France, Germany, and the United Kingdom have experienced a decrease over the past 30 years; however, the predictions indicated an upward trend, potentially linked to inadequate intervention in remote or economically disadvantaged regions of these developed countries, especially those with large immigrant populations. Considering current and future trends in the burden of NRVHD‐ and RVHD‐related HF, a health care strategy focusing on primary prevention, supplemented by secondary prevention, may be effective. (1) Primary prevention: It is crucial to establish a primary health care‐centered model. For NRVHD, promote a healthy lifestyle to address aging and improve access to health care. For RVHD, conduct rheumatic fever and echocardiographic screening in schoolchildren, particularly in economically disadvantaged areas or areas with large immigrant populations. (2) Secondary prevention: Early treatment of patients with VHD is key to reducing HF risk. Most patients with VHD are in the high‐risk phase, during which HF risk rises sharply.
Guidelines suggested neurohormonal blockade therapy (eg, renin‐angiotensin‐aldosterone system inhibitors, sodium‐glucose cotransporter 2 inhibitors) in patients with VHD only after the onset of HF or when symptoms persist after surgery. However, extending this treatment to the high‐risk period of VHD before HF could be considered. In addition, addressing sex disparities in HF treatment would help ensure that women have equitable access to care.
Advantages and Limitations
To the best of our knowledge, this was the first study characterizing the burden of VHD‐related HF in G20 countries and predicting the trends for the next 10 years. Limitations included data accuracy issues due to health care disparities, time lags from the GBD study's extensive scope, and incomplete subnational‐level data. Thus, results should be interpreted with caution. Furthermore, the burden of RVHD‐related HF in African countries, beyond just South Africa, requires special attention.
VHD‐related HF burden has been a progressively serious public health issue in G20 countries globally.
In developing countries, the preexisting burden of RVHD‐related HF has not been fully alleviated, and there is a discernible upward trajectory in the burden of NRVHD‐related HF. In developed countries, not only is the burden of NRVHD‐related HF increasing, but there is also a noticeable resurgence in the burden of RVHD‐related HF. Consequently, it is imperative for G20 countries to develop comprehensive health policies that prioritize early screening, early prevention, and early treatment to effectively address the substantial burden of VHD‐related HF. This study was funded by the Bill and Melinda Gates Foundation. Moreover, this work was partially supported by grants from the Natural Science Foundation of Guangdong Province (2022A1515220021), the Guangzhou Science and Technology Project of China (2023A03J0476), and the Chinese Cardiovascular Association‐Atherosclerotic Cardiovascular Disease (ASCVD) fund (2023‐CCA‐ASCVD‐141). None. Data S1, Tables S1–S8, Figures S1–S4
Clinical Circulating Tumor DNA Testing for Precision Oncology
0748d718-d0eb-49c7-a668-8db7d780bb14
10101787
Internal Medicine[mh]
As we live in an era of information and advanced genomics, we must remain focused on how we respond to and process the overwhelming amount of information we encounter. For example, Mandel and Metais first described the presence of nucleic acids in human blood in 1948, but several decades passed before attention was paid to the vast amount of information supplied by nucleic acids in the blood. Since the discovery of mutant RAS gene fragments in the blood of cancer patients in 1994 and the detection of microsatellite DNA changes in the serum of cancer patients in 1996, however, the information contained within the nucleic acids in the blood has gradually gained attention. Blood contains cellular components and numerous biological substances, such as extracellular vesicles, proteins, and nucleic acids, including mRNAs, miRNAs, and cell-free DNA (cfDNA). cfDNA refers to any non-encapsulated DNA within the bloodstream originating from various cell types. The portion of the cfDNA in the blood of cancer patients released from tumor cells via apoptosis, necrosis, or active release is commonly referred to as circulating tumor DNA (ctDNA). ctDNA has gained increasing attention since 2010 because of its potential to reveal, through novel and sensitive laboratory methods, early cancer metastases that cannot be detected by high-resolution imaging techniques. BRACAnalysis (Myriad Genetic Laboratories, Salt Lake City, UT) was the first Food and Drug Administration (FDA)-approved companion diagnostic test to use blood specimens from ovarian cancer patients, developed alongside a treatment targeting a specific gene mutation. This raised expectations that ctDNA could lead to drug treatments for cancer patients, and the number of tests and studies related to ctDNA has since grown exponentially. Nucleic acids in the blood are heterogeneous depending on their origin.
ctDNA analysis can provide more comprehensive information than a conventional tissue biopsy, which has an inherent spatial sampling limitation due to tumor tissue heterogeneity. It is estimated that up to 3.3% of tumor DNA enters the blood daily from 100 g of tumor tissue, equivalent to 3×10¹⁰ tumor cells. On average, the size of ctDNA varies from small fragments of 70–200 base pairs to large fragments of up to 21 kb. It is important to note the relatively short half-life of ctDNA in the blood circulation, ranging from 16 minutes to 2.5 hours. Although many tumor-specific abnormalities (eg, mutations in oncogenes or tumor suppressor genes, changes in DNA integrity, aberrant gene methylation, changes in microsatellites, mitochondrial DNA load, and changes in chromosomal genomes) can be detected using ctDNA, a number of obstacles exist in the implementation of ctDNA for screening and diagnosis. First, DNA from normal hematopoietic cells and other nucleic acids of non-tumor origin also contribute to the cfDNA in the blood and can cause false positives in ctDNA assays in cancer patients. Not all somatic mutations detected in ctDNA analyses are of cancer origin; clonal expansion of somatic variants can be observed in healthy individuals and may represent clonal hematopoiesis of indeterminate potential (CHIP). CHIP frequency increases with age, with only 1% of people under the age of 50 but >10% of those over the age of 65 exhibiting CHIP. These abnormalities commonly occur in the DNMT3A, TET2, and ASXL1 genes but have also been reported in other genes such as TP53, JAK2, SF3B1, GNB1, PPM1D, GNAS, and BCORL1. The simultaneous occurrence of CHIP and tumor-derived mutations in these genes may complicate the interpretation of ctDNA assays. The second issue is the low concentration of ctDNA (1–10 ng/mL in asymptomatic individuals).
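The short circulating half-life quoted above (16 minutes to 2.5 hours) is part of what makes ctDNA a near-real-time marker of tumor burden. Assuming simple first-order clearance, a quick calculation (illustrative, not from the source) shows how fast ctDNA disappears once shedding stops:

```python
def fraction_remaining(t_hours, half_life_hours):
    """First-order clearance: fraction of ctDNA remaining after t_hours."""
    return 0.5 ** (t_hours / half_life_hours)

# One half-life always leaves 50% (sanity check).
assert fraction_remaining(2.5, 2.5) == 0.5

# Using the upper bound of the reported half-life range (2.5 h): one day
# after shedding stops, essentially no ctDNA remains in circulation.
print(f"{fraction_remaining(24, 2.5):.6f}")  # 0.001289 -> about 0.13% left
```

Under this assumption, ctDNA levels measured in plasma reflect tumor activity over roughly the preceding hours rather than weeks, which is what motivates its use for treatment monitoring.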
Depending on the concentration of ctDNA, a false-negative result is possible; therefore, the sample volume is an important factor affecting the results. Third, the variant allele frequency (VAF) of ctDNA is usually very low, often below 1%, and can be affected by factors such as cancer type, stage, and clearance rate. Interpretation of the results therefore requires careful decisions regarding the allele-frequency threshold for the detected variants, as these are critical aspects. Fourth, there is a lack of consensus on how ctDNA detection should be performed, from the extraction stage to the final in silico variant analysis stage. Even the nomenclature related to ctDNA lacks a proper consensus. Because the clinical use of ctDNA testing is increasing rapidly, the demand for a proper consensus on ctDNA-related issues remains unmet. This review describes the currently available ctDNA assays based on their different methodologies, ranging from traditional methods to more recent advanced molecular technologies. We focus on the unmet need for clinical validation of ctDNA testing by reviewing the validation and approval processes of the FDA and the European Commission in vitro Diagnostic Medical Device (CE-IVD), among others. This review addresses frequently raised questions regarding the clinical application of ctDNA assays and summarizes the current status of approved and validated ctDNA assays and the future direction of ctDNA testing. Before introducing the ctDNA test, the terminology and definitions of ctDNA must be clarified. Bronkhorst et al. proposed a nomenclature system for three highly investigated diagnostic areas based on the biological compartment in which the cfDNA is distributed (depending on its presence in circulation) and its origin. cfDNA is highly heterogeneous, and a broader concept is needed that covers both nuclear and microbial DNA. Nuclear DNA includes mitochondrial DNA, and microbial DNA encompasses both microbial and viral DNA not of human origin.
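The VAF threshold decision discussed above comes down to a simple calculation repeated at every position. The read counts, the 0.1% limit of detection, and the minimum-supporting-read rule in this sketch are hypothetical, not taken from any specific assay:

```python
def vaf(alt_reads, total_reads):
    """Variant allele frequency: mutant reads over total reads at a locus."""
    if total_reads == 0:
        raise ValueError("no coverage at this position")
    return alt_reads / total_reads

def call_variant(alt_reads, total_reads, lod=0.001, min_alt=3):
    """Report a variant only if its VAF clears the assay's limit of
    detection (LoD) and enough mutant reads support it."""
    return alt_reads >= min_alt and vaf(alt_reads, total_reads) >= lod

# Hypothetical deep-sequencing counts at a single hotspot position:
print(vaf(25, 10_000))           # 0.0025 -> a VAF of 0.25%
print(call_variant(25, 10_000))  # True: above a 0.1% LoD with >= 3 mutant reads
print(call_variant(2, 10_000))   # False: too few supporting reads to trust
```

Raising the LoD reduces false positives from sequencing error and CHIP at the cost of missing true low-fraction variants, which is why the threshold choice is described above as a critical aspect of interpretation.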
Part of the nuclear DNA in the plasma of cancer patients is ctDNA. ctDNA usually refers to all types of tumor-derived DNA in the circulating blood, as discussed in this review. Tumor-derived DNA abnormalities take different forms, each with distinct features, and these features of the ctDNA have different potential clinical implications. Genomic aberrations of somatic origin detectable in the ctDNA include mutations, chromosomal rearrangements, and copy number changes. Additional characteristic features of ctDNA are specific epigenetic aberrations, such as methylation patterns, and differences in DNA fragment length. Although ctDNA tests vary in their genomic features and coverage of the genes of interest, the basic principles of the tests remain the same. Two categories exist: targeted approaches that test for a small number of known mutations, and untargeted approaches that broadly test for unknown targets. A targeted approach includes real-time polymerase chain reaction (RT-PCR), digital PCR (dPCR), and beads, emulsion, amplification, and magnetics (BEAMing) technology, whereas the broader approach includes high-throughput sequencing methods based on next-generation sequencing (NGS), whole exome sequencing (WES), whole genome sequencing (WGS), and mass-spectrometry-based detection of PCR amplicons, among others.
1. ctDNA detection methods
1) RT-PCR
RT-PCR is widely used for variant screening because it is relatively inexpensive and fast. Variants are detected via the binding of complementary sequences using fluorescently labeled sequence-specific probes, and the fluorescence intensity is related to the amount of amplified product. The sensitivity of RT-PCR is approximately 10%, which is lower than that of other test methods. Cold amplification at a lower denaturation temperature PCR (COLD-PCR) is a variant assay that improves RT-PCR sensitivity.
COLD-PCR concentrates mutated DNA sequences in preference to the wild type using a lower-temperature denaturation step during the cycling protocol. The denaturation temperature for a given sequence is adjusted within ±0.3°C to allow selective denaturation and amplification of mutated sequences, while double-stranded wild-type sequences are amplified less. This assay can enrich mutant sequences, improving the sensitivity for detecting the mutant allele frequency (MAF) to approximately 0.1%. The PCR-based method has the advantages of high sensitivity and cost-effectiveness but is limited in that only known variants can be selected, with limits on input and speed.
2) Digital PCR
dPCR shares the same reaction principle as RT-PCR, except that the samples are dispersed into arrays or droplets, resulting in thousands of parallel PCR reactions. dPCR can quantify a low fraction of variants against a high background of wild-type cfDNA using a single or a few DNA templates per array/droplet and has 0.1% sensitivity. The dPCR method can be applied to cancer personalized profiling by deep sequencing (CAPP-Seq) in combination with molecular barcoding technologies that improve sensitivity by reducing the background sequencing error. These two methods improve the sensitivity of CAPP-Seq up to three-fold and, when combined with molecular barcoding, yield approximately 15-fold improvements. cfDNA enrichment is conducted by a two-step PCR procedure during sample preparation. The first PCR amplifies the mutational hotspot regions of several genes in a single tube. The second PCR is a nested PCR with unique barcoded primers for sample labeling. The final PCR products are pooled and partitioned for sequencing. The advanced dPCR assay (BEAMing) is a highly sensitive approach with a detection rate of 0.02%. This approach consists of four principal components: beads, emulsion, amplification, and magnetics. BEAMing combines dPCR with magnetic beads and flow cytometry.
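Quantification in partition-based assays such as dPCR is usually done with a Poisson correction (standard practice for droplet dPCR, though not detailed in the text above): because a droplet can receive more than one template, the fraction of positive droplets is first converted into a mean number of copies per droplet. The droplet counts below are hypothetical:

```python
import math

def copies_per_partition(positive, total):
    """Poisson correction: mean template copies per droplet, derived from
    the fraction of droplets that fluoresce (lambda = -ln(1 - p))."""
    return -math.log(1.0 - positive / total)

def mutant_fraction(mut_positive, wt_positive, total):
    """Mutant allele fraction from mutant- and wild-type-positive droplets."""
    lam_mut = copies_per_partition(mut_positive, total)
    lam_wt = copies_per_partition(wt_positive, total)
    return lam_mut / (lam_mut + lam_wt)

# Hypothetical counts from a 20,000-droplet run: a rare mutant template
# against a heavy wild-type background.
print(f"{mutant_fraction(30, 15_000, 20_000):.4f}")  # 0.0011 -> ~0.11% mutant
```

Without the correction, the 75% wild-type-positive droplets would understate the wild-type load (many droplets carry multiple copies), inflating the apparent mutant fraction.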
In BEAMing, the primer binds to the magnetic beads via a biotin-streptavidin complex. On average, each microemulsion droplet contains at most one template molecule and one bead, and PCR is performed within each droplet. At the end of the PCR process, the beads are magnetically purified. After denaturation, the beads are incubated with oligonucleotides to distinguish between different templates. The bound hybridization probe is then labeled with a fluorescently labeled antibody. Finally, the amplified products are counted as fluorescent beads by flow cytometry. However, the BEAMing method is impractical for routine clinical use due to its workflow complexity and high cost. 3) Mass spectrometry The mass spectrometry-based method combines matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry with a conventional multiplex PCR. An example of this method is UltraSEEK (Agena Bioscience, San Diego, CA). UltraSEEK consists of a two-step PCR for amplification and mass spectrometry for detection. The two-step PCR consists of a multiplex PCR followed by a mutation-specific single-base extension reaction. The extension reaction uses a single mutation-specific chain terminator labeled with a moiety for solid-phase capture. Captured, washed, and eluted products are examined for mass, and mutational genotypes are identified and characterized by MALDI-TOF mass spectrometry. UltraSEEK offers multiplex detection of mutant sequences simultaneously and has a MAF detection limit of 0.1%. 4) Next-generation sequencing NGS, also known as massively parallel sequencing technology, can characterize cancer at the genomic, transcriptomic, and epigenetic levels. NGS is a highly sensitive assay that can detect mutations at a MAF of < 1% using the latest platforms.
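Why sub-percent MAF detection demands high sequencing depth follows from simple binomial sampling: at a given depth, the chance of drawing enough mutant reads to call a variant falls off sharply as the MAF drops. A rough sketch (the minimum-read threshold is an illustrative assumption, and sequencing error is ignored):

```python
from math import comb

def p_detect(maf: float, depth: int, min_reads: int = 5) -> float:
    """Probability of sampling at least `min_reads` mutant reads at the
    given depth when the true MAF is `maf` (binomial model, ignoring
    sequencing error and molecular barcoding)."""
    p_below = sum(comb(depth, k) * maf**k * (1 - maf) ** (depth - k)
                  for k in range(min_reads))
    return 1.0 - p_below

# A 0.5% variant is almost always sampled at 5,000x coverage
# (expected 25 mutant reads) but frequently missed at 500x
# (expected 2.5 mutant reads):
high_depth = p_detect(0.005, 5000)  # close to 1
low_depth = p_detect(0.005, 500)    # well below 0.5
```

This is the same reasoning behind the SEQC2 observation, discussed later, that higher DNA input is needed for reproducible calls below a 0.5% VAF.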
NGS can analyze several million short DNA sequences in parallel and conduct sequence alignment to a reference genome or de novo sequence assembly. Depending on the panel configuration, NGS panels can be targeted to analyze known variants or untargeted to screen for unknown variants. Targeted panels are preferred for their high sensitivity and low cost but are limited to point mutation and indel analysis. Several NGS methods can be applied to targeted panels with adjustable sensitivity, including tagged-amplicon deep sequencing, the safe sequencing system, and CAPP-Seq. On the other hand, WGS or WES using untargeted panels allows detection of unknown DNA variants throughout the entire genome (or exome). Different genome-wide sequencing methods have been proposed for different variation types, such as personalized analysis of rearranged ends, digital karyotyping, and the Fast Aneuploidy Screening Test-Sequencing System. However, genome-wide sequencing requires a large amount of input DNA, making its application to ctDNA difficult given the low concentrations of ctDNA in samples. There have been attempts to analyze DNA fragmentation differences. There is a marked difference in fragment length between ctDNA and normal cfDNA: the fragment length of ctDNA is consistently shorter than that of normal cfDNA. In addition, ctDNA with a low MAF (< 0.6%) is associated with a longer ctDNA fragment length when compared with normal cfDNA. Moreover, most cancers of different origins show fragmentation profiles of varying lengths. The characteristic DNA fragmentation provides a proof-of-principle approach applicable to screening, early detection, and monitoring of various cancer types. NGS application has been extended to microsatellite instability (MSI) detection. Loss of DNA mismatch repair (MMR) activity leads to an accumulation of mutations that could otherwise be corrected by MMR genes. A deficiency in MMR activity is often caused by germline mutations or aberrant methylation.
The MSI phenotype of MMR deficiency refers to the shortening or lengthening of tandem DNA repeats in coding and noncoding regions throughout the genome. Tumors with at least 30% to 40% unstable microsatellite loci, termed microsatellite instability-high (MSI-H), reportedly have a better prognosis than microsatellite-stable (MSS) tumors and tumors with low MSI. MSI has been documented in various cancer types, including colon, endometrium, and stomach cancers. The FDA has approved pembrolizumab to treat MSI-H cancer regardless of the tumor type or site. NGS-based methods utilize various MSI detection algorithms such as MSIsensor, mSINGS, MANTIS, and bMSISEA, which have demonstrated concordance rates ranging from 92.3% to 100% with the PCR-based method. NGS can reliably detect MSI status with a ctDNA fraction as low as 0.4%. 5) Methylation analysis Epigenetic information such as methylation is more specific to the tissue of origin than genetic mutations. Changes in DNA methylation patterns occur early in tumor development and have been reported to aid early screening for cancers of unknown origin. Methylation analysis is not routinely or commonly used to detect ctDNA, but it can be applied in selected cancer patients. Methods can be broadly divided according to whether they take a candidate-gene or genome-wide approach. GRAIL's technology applies DNA methylation patterns to differentiate between cancer cell types or tissue origins. Most cfDNA methylation analysis methods apply a candidate-gene approach because of the low analytical cost and the efficiency of using pre-established epigenetic biomarkers. Bisulfite treatment-based assays distinguish cytosine methylation and are generally the preferred ctDNA methylation detection method. The analytical principle is based on treating the DNA with bisulfite to convert unmethylated cytosine residues to uracil.
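The bisulfite principle can be sketched as a toy simulation: unmethylated cytosines deaminate to uracil and read out as thymine after PCR, while methylated cytosines remain cytosine, so methylation state becomes a plain sequence difference that downstream primers or melting analysis can interrogate. The sequence and methylated positions below are illustrative:

```python
def bisulfite_convert(seq: str, methylated_positions: set) -> str:
    """Toy model of bisulfite conversion: unmethylated cytosines are
    converted (read as T after PCR), while methylated cytosines
    (e.g., 5mC at CpG sites) remain C."""
    out = []
    for i, base in enumerate(seq.upper()):
        if base == "C" and i not in methylated_positions:
            out.append("T")
        else:
            out.append(base)
    return "".join(out)

# The same genomic sequence yields different read-outs depending on
# methylation state:
seq = "ACGTCGAC"
unmethylated = bisulfite_convert(seq, set())   # -> "ATGTTGAT"
methylated = bisulfite_convert(seq, {1, 4})    # -> "ACGTCGAT"
```

Methylation-specific primers (MSP) or melting-profile comparison (MS-HRM) then distinguish the converted products, as described below.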
Two types of ctDNA methylation analysis exist: PCR-based methods that apply specific primers or melting temperatures, and sequence-based methods such as direct sequencing or pyrosequencing. However, the accuracy of bisulfite pyrosequencing is maintained only down to a methylation level of approximately 5%. Methylation-specific PCR (MSP) can distinguish DNA sequences using sequence-specific PCR primers after bisulfite conversion. The methylation-sensitive high-resolution melting (MS-HRM) protocol is based on comparing the melting profiles of PCR products from unknown samples with profiles for specific PCR products derived from methylated and unmethylated control DNAs. The protocol consists of PCR amplification of bisulfite-modified DNA with primers and subsequent high-resolution melting analysis of the PCR product. MSP or MS-HRM can accurately detect about 0.1% of methylated DNA. 6) Hybrid sequencing (NanoString) The nCounter Technology (NanoString Technologies, Seattle, WA) is a novel technology developed to screen clinically relevant ALK , ROS1 , and RET fusion genes in lung cancer tissue samples. NanoString is applicable to RNA, miRNA, or protein and, more recently, to ctDNA. Target ctDNA is directly tagged with capture and reporter probes specific to the target variant of interest, creating a unique target-probe complex. The probes include a fluorescent reporter and a secondary biotinylated capture probe that allows immobilization onto the cartridge surface. The target-probe complex is immobilized and aligned on the imaging surface. The labeled barcode of the complex is then directly counted by an automated fluorescence microscope. 2. International efforts for advanced precision medicine in ctDNA analysis The availability of new ctDNA testing methods and continuous scientific advances have resulted in several new problems. The factors affecting ctDNA testing outcomes span from the sample collection phase to the final reporting phase.
The American Society of Clinical Oncology (ASCO) and the College of American Pathologists (CAP) reviewed the framework for future research into clinical ctDNA tests in 2018. That review categorizes the key factors that affect ctDNA testing for oncology patients into preanalytical variables for ctDNA specimens, analytical validity, interpretation and reporting, and clinical validity and utility of each test. Plasma is the most suitable sample recommended for ctDNA testing, as is the use of specific types of sample collection tubes, such as cell-stabilizing tubes (Cell-Free DNA BCT [Streck] and PAXgene Blood DNA tubes [Qiagen]) or conventional EDTA anticoagulant tubes. Leukocyte-stabilizing tubes can extend the preprocessing window to 48 hours after collection, but EDTA anticoagulant tubes require processing within 6 hours. However, few studies have examined the preanalytical variables affecting ctDNA testing, and guidelines are needed to validate their clinical utility. Given the many variables involved and the different methods on which ctDNA assays are based, the validity of individual analyses must be comparable. Current clinical ctDNA analyses require a clear assessment of the validity of the individual analyses. To increase the precision of ctDNA assays, best practices, protocols, and quality metrics for NGS-based ctDNA analyses must be developed. The Sequencing Quality Control Phase 2 (SEQC2) consortium organized by the FDA is an international group of members from academia, government, and industry ( https://www.fda.gov/science-research/bioinformatics-tools/microarraysequencing-quality-control-maqcseqc#MAQC_IV ). The SEQC2 Oncopanel Sequencing Working Group developed a translational scientific infrastructure to be applied to practices in precision oncology. The Oncopanel Sequencing Working Group evaluated panels/assays, genomic regions, coverage, VAF ranges, and bioinformatics pipelines using self-constructed reference samples.
This study on the analytical performance evaluation of oncopanels/assays for small variant detection includes: (1) comprehensive solid tumor oncopanel examination, (2) liquid biopsy testing, (3) testing involving formalin-fixed paraffin-embedded material, and (4) testing involving spike-in materials. A major finding from the SEQC2 liquid biopsy proficiency testing study is that all assays could detect mutations above the 0.5% VAF threshold with high sensitivity, precision, and reproducibility. The amount of DNA input material impacted test sensitivity; higher input was required for improved sensitivity and reproducibility for variants with a VAF below 0.5%. Advanced NGS-based assays for precision oncology are in high demand, and newly approved ctDNA assays continue to emerge. Establishing a proper validation scheme would support the FDA's regulatory and scientific endeavors. 3. FDA-approved ctDNA assay We searched for FDA-approved assays in the FDA database ( https://www.accessdata.fda.gov/scripts/cdrh/devicesatfda/index.cfm ) using the following keywords: circulating tumor DNA, ctDNA, cell-free DNA, circulating cell-free DNA, cfDNA, liquid biopsy, and plasma and DNA. The search results were compared with the annual reports of medical devices cleared or approved by the FDA published between 2013 and 2022 for confirmation, and assays related to ctDNA were selected. We identified three in vitro diagnostic devices (Epi proColon, cobas EGFR Mutation Test, and therascreen PIK3CA RGQ PCR Kit) and two specialized laboratory services (Guardant360 CDx and FoundationOne Liquid CDx). FoundationOne Liquid CDx was approved as a companion diagnostic on October 26 and November 6, 2020.
The approved companion diagnostic indications are (1) to identify mutations in the BRCA1 and BRCA2 genes in patients with ovarian cancer eligible for treatment with rucaparib (RUBRACA, Clovis Oncology, Inc.), (2) to identify ALK rearrangements in patients with non–small cell lung cancer eligible for treatment with alectinib (ALECENSA, Genentech USA Inc.), (3) to identify mutations in the PIK3CA gene in patients with breast cancer eligible for treatment with alpelisib (PIQRAY, Novartis Pharmaceutical Corporation), and (4) to identify mutations in the BRCA1 , BRCA2 , and ATM genes in patients with metastatic castration-resistant prostate cancer eligible for treatment with olaparib (LYNPARZA, AstraZeneca Pharmaceuticals LP). The NGS-based ctDNA tests related to companion diagnostics, such as FoundationOne Liquid CDx and Guardant360 CDx, are transitioning to specialized laboratory services. FDA-approved tests require evaluations of their analytical performance. Recent laboratory-based tests have undergone extensive evaluation using large sample numbers for advanced assay interpretation and reporting, clinical validation, and utility. Considering the cost of the tests, the number of tests required for evaluation is prohibitive for small laboratories. Specialized laboratories use their own processes to provide users with reports. Therefore, testing is changing from a complex assay performed at individual laboratories to a more specialized service in which each laboratory devises its own analysis processes and delivers testing as a laboratory service. 4. CE-marked ctDNA assay In May 2017, the European Union adopted the strengthened In Vitro Diagnostic Regulation and Medical Device Regulation, which govern Conformité Européenne (CE) marking. The transition was completed in May 2022, following a 5-year transition period.
A new database search service, the European Database on Medical Devices (EUDAMED), was announced, which will consist of six modules: actor registration; unique device identification (UDI) and device registration; notified bodies and certificates; clinical investigations and performance studies; vigilance; and market surveillance. The EUDAMED database was scheduled for release in July 2022 but has been postponed to the third quarter of 2024 owing to a delay in the module development process. We had difficulty performing a systematic search for medical devices or in vitro diagnostics with the CE mark; therefore, we searched recently published papers that mentioned CE-marked products. 5. Marketplace of ctDNA test in Republic of Korea In Korea, the marketplace for ctDNA testing has recently expanded since the In Vitro Diagnostic Medical Devices Act was promulgated in April 2019. When "ctDNA test" was searched in the medical device database of the Korea Ministry of Food and Drug Safety ( https://udiportal.mfds.go.kr/search/data/P02_01#list ), a total of seven domestic tests were identified. The Smart Biopsy EML4-ALK Detection Kit (CytoGen, Seoul, Korea) was first nationally accredited on April 20, 2016, and the following tests have since been nationally accredited in sequence: ADPS EGFR Mutation Test Kit V1 (GENECAST, Seoul, Korea), Droplex KRAS Mutation Test v2 (Gencurix, Seoul, Korea), Droplex PIK3CA Mutation Test (Gencurix), PANAMutyper R EGFR V2 (PANAGENE, Daejeon, Korea), AlphaLiquid 100 (IMBDX, Seoul, Korea), and LiquidSCAN (GENINUS, Seoul, Korea). Most are PCR-based assays, such as RT-PCR and dPCR, whereas AlphaLiquid 100 (IMBDX) and LiquidSCAN (GENINUS) are NGS-based assays.
The sensitivity of RT-PCR is approximately 10%, which is lower than that of other test methods . Cold amplification at a lower denaturation temperature PCR (COLD-PCR) is a variant assay that improves the RT-PCR sensitivity. COLD-PCR concentrates mutated DNA sequences in preference to the wild type using a lower-temperature denaturation step during the cycling protocol. The denaturation temperature for a given sequence is adjusted within ±0.3°C to allow for selective denaturation and amplification of mutated sequences, while double-stranded wild-type sequences are amplified less. This assay can enrich the mutant sequences, improving the sensitivity to detect the mutant allele frequency (MAF) to approximately 0.1% . The PCR-based method has the advantage of high sensitivity and cost-effectiveness but is limited as only known variants can be selected with limiting input and speed. 2) Digital PCR dPCR shares the same reaction principle as RT-PCR, except that the samples are dispersed into arrays or droplets, resulting in thousands of parallel PCR reactions. The dPCR can quantify a low fraction of variants against a high background wild-type cfDNA using a single or few DNA templates in an array/droplet and has 0.1% sensitivity . The dPCR method can be applied to cancer personalized profiling by deep sequencing (CAPP-Seq) in combination with molecular barcoding technologies that improve sensitivity by reducing the background sequencing error . These two methods improve the sensitivity of CAPP-Seq up to three-fold and, when combined with molecular barcoding, yield approximately 15-fold improvements . cfDNA enrichment is conducted by a two-step PCR procedure during the sample preparation process. The first PCR amplifies the mutational hotspot regions of several genes in a single tube. The second PCR is a nested PCR with unique barcoded primers for sample labeling. The final PCR products are pooled and partitioned for sequencing. 
The advanced dPCR assay (BEAMing) is a highly sensitive approach with a detection rate of 0.02% . This approach consists of four principal components: beads, emulsion, amplification, and magnetics. BEAMing combines dPCR with magnetic bead and flow cytometry . In BEAMing, the primer binds to the magnetic beads using a reaction that forms a biotin-streptavidin complex. Less than one template molecule and less than one bead are contained within the microemulsion, and the PCR is performed within each droplet. At the end of the PCR process, the beads are magnetically purified. After denaturation, the beads are incubated with oligonucleotides to distinguish between different templates. The bound hybridization probe is then labeled with a fluorescently labeled antibody. Finally, the amplified products are counted as fluorescent beads by flow cytometry. However, the BEAMing method is impractical for routine clinical use due to its workflow complexity and high cost . 3) Mass spectrometry The mass spectrometry-based method combines the matrix-assisted laser desorption/ionization time-of-flight mass spectrometry with a conventional multiplex PCR. An example of this method is UltraSEEK (Agena Bioscience, San Diego, CA). UltraSEEK consists of two-step PCR for amplification and mass spectrometry for detection. The two-step PCR step consists of a multiplex PCR followed by a mutation-specific single-base extension reaction. The extension reaction uses a single mutation-specific chain terminator labeled with a moiety for solid phase capture. Captured, washed, and eluted products are examined for mass, and mutational genotypes are identified and characterized using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry . UltraSEEK has the advantage of multiplex detection of mutant sequences simultaneously, and has a MAF of 0.1% . 
4) Next-generation sequencing NGS, also known as massively parallel sequencing technology, can characterize cancer at the genomic, transcriptomic, and epigenetic levels. NGS is a highly sensitive assay that can detect mutations in MAF of < 1% using the latest platforms . NGS can analyze several million short DNA sequences in parallel and conduct sequence alignment or de novo sequence assembly to the reference genome . Depending on the panel configuration, NGS panels can be targeted to analyze known variants or untargeted to screen unknown variants. Target panels were preferred due to their high sensitivity and low cost but are limited to point mutations and indel analysis. Several NGS methods can be applied to target panels with adjustable sensitivity, including tagged amplicon deep sequencing, the safe sequencing system, and CAPP-Seq. On the other hand, WGS or WES using untargeted panels allow detection of unknown DNA variants throughout the entire genome (or exome). Different genome-wide sequencing methods have been proposed for different variation types, such as personalized analysis of rearranged ends, digital karyotyping, and Fast Aneuploidy Screening Test-Sequencing System . However, genome-wide sequencing requires a large sample, making its application for ctDNA difficult due to the low concentrations of ctDNA in samples. There have been attempts to analyze DNA fragmentation differences. There is a marked difference in fragment length size between ctDNA and normal cf DNA. The fragment length of ctDNA is consistently shorter than that of normal cfDNA . Besides, ctDNA with a low MAF (< 0.6%) is associated with a longer ctDNA fragment length when compared to normal cfDNAs . Moreover, most cancers of different origins showed fragmentation profiles of varying lengths . The characteristic DNA fragmentation provides a proof-of-principle approach applicable to screening, early detection, and monitoring of various cancer types. 
NGS application has been extended to microsatellite inst-ability (MSI) detection . Loss of DNA mismatch repair (MMR) activity leads to an accumulation of mutations that could otherwise be corrected by MMR genes. A deficiency in MMR activity is often caused by germline mutations or aberrant methylation. The MSI phenotype of a deficiency in MMR activity refers to the shortening or lengthening of tandem DNA repeats in coding and noncoding regions throughout the genome. Tumors with at least 30% to 40% of unstable microsatellite loci, termed microsatellite instability high (MSI-H) , reportedly have a better prognosis than MSS tumors and tumors with low MSI. MSI has been documented in various cancer types, including colon, endometrium, and stomach cancers . The FDA has approved Pembrolizumab to treat MSI-H cancer regardless of the tumor type or site . NGS-based methods utilize various MSI detection algorithms such as MSIsensor , mSINGS , MANTIS , and bMSISEA , which have demonstrated concordance rates ranging from 92.3% to 100% with the PCR-based method. The application of NGS can reliably detect the MSI status with a ctDNA fraction up to 0.4% . 5) Methylation analysis Epigenetic information such as methylation is more specific to the tissue of origin than genetic mutations . Changes in DNA methylation patterns occur early in tumor development and have been reported to help early screening for cancers of unknown origin . Methylation analysis is not routinely or commonly used to detect ctDNA, but it can be partially applied to cancer patients. The method can be broadly divided depending on its application to the candidate gene. The Grail’s technology applied DNA methylation patterns to differentiate between cancer cell types or tissue origins . Most cfDNA methylation analysis methods applied a candidate gene approach due to the low analytical cost and the efficiency of using pre-established epigenetic biomarkers . 
Bisulfite treatment-based assays distinguish cytosine methylation and are generally the preferred ctDNA methylation detection method . The analytical principle is based on treating the DNA with bisulfite to convert unmethylated cytosine residues to uracil. Two types of ctDNA methylation analysis exist; PCR-based methods that apply specific primers or melting temperatures and sequence-based methods such as direct sequencing or pyrosequencing. However, the accuracy of bisulfite pyrosequencing is only maintained up to 5% . Methylation-specific PCR (MSP) can distinguish DNA sequences by sequence-specific PCR primers after bisulfite conversion . The methylation-sensitive high-resolution melting (MS-HRM) protocol is based on comparing the melting profiles of the PCR products from unknown samples with profiles for specific PCR products derived from methylated and unmethylated control DNAs . The protocol consists of PCR amplification of bisulfite-modified DNA with primers and subsequent high-resolution melting analysis of the PCR product. MSP or MS-HRM can accurately detect about 0.1% of methylated DNA . 6) Hybrid sequencing (NanoString) The nCounter Technology (NanoString Technologies, Seattle, WA) is a novel technology developed to screen clinically-relevant ALK , ROS1 , and RET fusion genes in lung cancer tissue samples. NanoString is applicable to RNA, miRNA, or protein and, more recently, to ctDNA . Target ctDNA is directly tagged with capture and reporter probes that are specific to the target variant of interest, creating a unique target-probe complex. The probes include a fluorescent reporter and a secondary biotinylated capture probe that allows immobilization onto the cartridge surface. The target-probe complex is immobilized and aligned on the imaging surface. The labeled barcode of the complex is then directly counted by an automated fluorescence microscope . RT-PCR is widely used for variant screening because it is relatively inexpensive and fast . 
The variants are detected via the binding of complementary sequences using fluorescent-labeled sequence-specific probes, and the fluorescence intensity is related to the amount of amplified product. The sensitivity of RT-PCR is approximately 10%, which is lower than that of other test methods . Cold amplification at a lower denaturation temperature PCR (COLD-PCR) is a variant assay that improves the RT-PCR sensitivity. COLD-PCR concentrates mutated DNA sequences in preference to the wild type using a lower-temperature denaturation step during the cycling protocol. The denaturation temperature for a given sequence is adjusted within ±0.3°C to allow for selective denaturation and amplification of mutated sequences, while double-stranded wild-type sequences are amplified less. This assay can enrich the mutant sequences, improving the sensitivity to detect the mutant allele frequency (MAF) to approximately 0.1% . The PCR-based method has the advantage of high sensitivity and cost-effectiveness but is limited as only known variants can be selected with limiting input and speed. dPCR shares the same reaction principle as RT-PCR, except that the samples are dispersed into arrays or droplets, resulting in thousands of parallel PCR reactions. The dPCR can quantify a low fraction of variants against a high background wild-type cfDNA using a single or few DNA templates in an array/droplet and has 0.1% sensitivity . The dPCR method can be applied to cancer personalized profiling by deep sequencing (CAPP-Seq) in combination with molecular barcoding technologies that improve sensitivity by reducing the background sequencing error . These two methods improve the sensitivity of CAPP-Seq up to three-fold and, when combined with molecular barcoding, yield approximately 15-fold improvements . cfDNA enrichment is conducted by a two-step PCR procedure during the sample preparation process. The first PCR amplifies the mutational hotspot regions of several genes in a single tube. 
The second PCR is a nested PCR with unique barcoded primers for sample labeling. The final PCR products are pooled and partitioned for sequencing. The advanced dPCR assay (BEAMing) is a highly sensitive approach with a detection rate of 0.02% . This approach consists of four principal components: beads, emulsion, amplification, and magnetics. BEAMing combines dPCR with magnetic bead and flow cytometry . In BEAMing, the primer binds to the magnetic beads using a reaction that forms a biotin-streptavidin complex. Less than one template molecule and less than one bead are contained within the microemulsion, and the PCR is performed within each droplet. At the end of the PCR process, the beads are magnetically purified. After denaturation, the beads are incubated with oligonucleotides to distinguish between different templates. The bound hybridization probe is then labeled with a fluorescently labeled antibody. Finally, the amplified products are counted as fluorescent beads by flow cytometry. However, the BEAMing method is impractical for routine clinical use due to its workflow complexity and high cost . The mass spectrometry-based method combines the matrix-assisted laser desorption/ionization time-of-flight mass spectrometry with a conventional multiplex PCR. An example of this method is UltraSEEK (Agena Bioscience, San Diego, CA). UltraSEEK consists of two-step PCR for amplification and mass spectrometry for detection. The two-step PCR step consists of a multiplex PCR followed by a mutation-specific single-base extension reaction. The extension reaction uses a single mutation-specific chain terminator labeled with a moiety for solid phase capture. Captured, washed, and eluted products are examined for mass, and mutational genotypes are identified and characterized using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry . UltraSEEK has the advantage of multiplex detection of mutant sequences simultaneously, and has a MAF of 0.1% . 
NGS, also known as massively parallel sequencing technology, can characterize cancer at the genomic, transcriptomic, and epigenetic levels. NGS is a highly sensitive assay that can detect mutations in MAF of < 1% using the latest platforms . NGS can analyze several million short DNA sequences in parallel and conduct sequence alignment or de novo sequence assembly to the reference genome . Depending on the panel configuration, NGS panels can be targeted to analyze known variants or untargeted to screen unknown variants. Target panels were preferred due to their high sensitivity and low cost but are limited to point mutations and indel analysis. Several NGS methods can be applied to target panels with adjustable sensitivity, including tagged amplicon deep sequencing, the safe sequencing system, and CAPP-Seq. On the other hand, WGS or WES using untargeted panels allow detection of unknown DNA variants throughout the entire genome (or exome). Different genome-wide sequencing methods have been proposed for different variation types, such as personalized analysis of rearranged ends, digital karyotyping, and Fast Aneuploidy Screening Test-Sequencing System . However, genome-wide sequencing requires a large sample, making its application for ctDNA difficult due to the low concentrations of ctDNA in samples. There have been attempts to analyze DNA fragmentation differences. There is a marked difference in fragment length size between ctDNA and normal cf DNA. The fragment length of ctDNA is consistently shorter than that of normal cfDNA . Besides, ctDNA with a low MAF (< 0.6%) is associated with a longer ctDNA fragment length when compared to normal cfDNAs . Moreover, most cancers of different origins showed fragmentation profiles of varying lengths . The characteristic DNA fragmentation provides a proof-of-principle approach applicable to screening, early detection, and monitoring of various cancer types. 
NGS application has been extended to microsatellite inst-ability (MSI) detection . Loss of DNA mismatch repair (MMR) activity leads to an accumulation of mutations that could otherwise be corrected by MMR genes. A deficiency in MMR activity is often caused by germline mutations or aberrant methylation. The MSI phenotype of a deficiency in MMR activity refers to the shortening or lengthening of tandem DNA repeats in coding and noncoding regions throughout the genome. Tumors with at least 30% to 40% of unstable microsatellite loci, termed microsatellite instability high (MSI-H) , reportedly have a better prognosis than MSS tumors and tumors with low MSI. MSI has been documented in various cancer types, including colon, endometrium, and stomach cancers . The FDA has approved Pembrolizumab to treat MSI-H cancer regardless of the tumor type or site . NGS-based methods utilize various MSI detection algorithms such as MSIsensor , mSINGS , MANTIS , and bMSISEA , which have demonstrated concordance rates ranging from 92.3% to 100% with the PCR-based method. The application of NGS can reliably detect the MSI status with a ctDNA fraction up to 0.4% . Epigenetic information such as methylation is more specific to the tissue of origin than genetic mutations . Changes in DNA methylation patterns occur early in tumor development and have been reported to help early screening for cancers of unknown origin . Methylation analysis is not routinely or commonly used to detect ctDNA, but it can be partially applied to cancer patients. The method can be broadly divided depending on its application to the candidate gene. The Grail’s technology applied DNA methylation patterns to differentiate between cancer cell types or tissue origins . Most cfDNA methylation analysis methods applied a candidate gene approach due to the low analytical cost and the efficiency of using pre-established epigenetic biomarkers . 
Bisulfite treatment-based assays distinguish cytosine methylation and are generally the preferred ctDNA methylation detection method . The analytical principle is based on treating DNA with bisulfite to convert unmethylated cytosine residues to uracil. Two types of ctDNA methylation analysis exist: PCR-based methods that apply specific primers or melting temperatures, and sequence-based methods such as direct sequencing or pyrosequencing. However, the accuracy of bisulfite pyrosequencing is only maintained down to methylation levels of about 5% . Methylation-specific PCR (MSP) can distinguish DNA sequences by sequence-specific PCR primers after bisulfite conversion . The methylation-sensitive high-resolution melting (MS-HRM) protocol is based on comparing the melting profiles of PCR products from unknown samples with profiles of specific PCR products derived from methylated and unmethylated control DNAs . The protocol consists of PCR amplification of bisulfite-modified DNA with primers and subsequent high-resolution melting analysis of the PCR product. MSP and MS-HRM can accurately detect about 0.1% of methylated DNA . The nCounter technology (NanoString Technologies, Seattle, WA) is a novel technology developed to screen clinically relevant ALK , ROS1 , and RET fusion genes in lung cancer tissue samples. NanoString is applicable to RNA, miRNA, or protein and, more recently, to ctDNA . Target ctDNA is directly tagged with capture and reporter probes specific to the target variant of interest, creating a unique target-probe complex. The probes include a fluorescent reporter and a secondary biotinylated capture probe that allows immobilization onto the cartridge surface. The target-probe complex is immobilized and aligned on the imaging surface. The labeled barcode of the complex is then counted directly by an automated fluorescence microscope . The availability of new ctDNA testing methods and continuous scientific advances have created several new challenges.
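The bisulfite-conversion principle underlying MSP and MS-HRM can be illustrated with a short simulation: unmethylated cytosines deaminate to uracil (read as thymine after PCR), while methylated cytosines are protected and remain cytosine. This asymmetry is what sequence-specific primers and melting profiles exploit. A minimal sketch, treating a single strand and ignoring CpG context:

```python
def bisulfite_convert(seq, methylated_positions):
    """Simulate bisulfite conversion of one DNA strand.

    Unmethylated cytosines are converted (read as T after PCR);
    cytosines at indices in `methylated_positions` are protected
    and remain C.
    """
    out = []
    for i, base in enumerate(seq.upper()):
        if base == "C" and i not in methylated_positions:
            out.append("T")  # unmethylated C -> U, read as T
        else:
            out.append(base)  # methylated C (or any other base) unchanged
    return "".join(out)

# Same input sequence, different methylation states, different products:
unmethylated_read = bisulfite_convert("ACGTCG", set())   # all C -> T
methylated_read = bisulfite_convert("ACGTCG", {1})       # C at index 1 kept
```

After conversion, methylated and unmethylated templates differ in sequence, so they can be discriminated by MSP primers or by the melting temperature of the PCR product in MS-HRM.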
The factors affecting ctDNA testing outcomes span the entire process, from sample collection to final reporting. The American Society of Clinical Oncology (ASCO) and the College of American Pathologists (CAP) reviewed the framework for future research into clinical ctDNA tests in 2018 . This article categorizes the key findings that affect ctDNA testing for oncology patients into preanalytical variables for ctDNA specimens, analytical validity, interpretation and reporting, and clinical validity and utility of each test. Plasma is the most suitable sample recommended for ctDNA testing , as is the use of specific types of sample collection tubes, such as cell-stabilizing tubes (Cell-Free DNA BCT [Streck] and PAXgene Blood DNA tubes [Qiagen]) or conventional EDTA anticoagulant tubes . Leukocyte-stabilizing tubes can extend the preprocessing window to 48 hours after collection, but EDTA anticoagulant tubes require processing within 6 hours. However, few studies have examined the preanalytical variables affecting ctDNA testing, and guidelines are needed to validate their clinical utility. Considering the many variable factors and the different types of ctDNA assays based on different methods, the validity of each analysis must be comparable. Current clinical ctDNA analyses require a clear assessment of the validity of each individual analysis. To increase the precision of ctDNA assays, best practices, protocols, and quality metrics for NGS-based ctDNA analyses must be developed. The Sequencing Quality Control Phase 2 (SEQC2) consortium organized by the FDA is an international group with members from academia, government, and industry ( https://www.fda.gov/science-research/bioinformatics-tools/microarraysequencing-quality-control-maqcseqc#MAQC_IV ). The SEQC2 Oncopanel Sequencing Working Group developed a translational scientific infrastructure to be applied to practices in precision oncology .
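The preprocessing windows noted above (48 hours for cell-stabilizing tubes, 6 hours for EDTA) can be captured in a simple sample-acceptance check at accessioning. The limits below merely restate the text and are illustrative; a real laboratory should follow the tube manufacturer's instructions for use and its own validated procedures.

```python
# Assumed preprocessing windows (hours from collection to plasma
# separation), restating the preanalytical discussion above.
MAX_HOURS_TO_PROCESSING = {
    "streck_cfdna_bct": 48,
    "paxgene_blood_dna": 48,
    "edta": 6,
}

def sample_acceptable(tube_type, hours_since_collection):
    """Flag whether a blood sample is still within its processing window."""
    try:
        limit = MAX_HOURS_TO_PROCESSING[tube_type]
    except KeyError:
        raise ValueError(f"unknown tube type: {tube_type!r}")
    return hours_since_collection <= limit

# An EDTA sample drawn 8 hours ago would be rejected; the same delay
# is acceptable for a cell-stabilizing tube.
```

Encoding such rules at sample intake is one way to keep preanalytical variables, which the text notes are often unreported, from silently degrading ctDNA results.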
The Oncopanel Sequencing Working Group evaluated panels/assays, genomic regions, coverage, VAF ranges, and bioinformatics pipelines using self-constructed reference samples. This work on the analytical performance evaluation of oncopanels/assays for small variant detection includes: (1) comprehensive solid tumor oncopanel examination , (2) liquid biopsy testing , (3) testing involving formalin-fixed paraffin-embedded material , and (4) testing involving spike-in materials . A major finding from the SEQC2 liquid biopsy proficiency testing study is that all assays could detect mutations above the 0.5% VAF threshold with high sensitivity, precision, and reproducibility. The amount of DNA input affected test sensitivity: higher input was required for improved sensitivity and reproducibility for variants with a VAF below 0.5%. Advanced NGS-based assays for precision oncology are in high demand, and the recently approved ctDNA assays need to be identified. Establishing a proper validation scheme would support the FDA’s regulatory and scientific endeavors. We searched for FDA-approved assays in the FDA database ( https://www.accessdata.fda.gov/scripts/cdrh/devicesatfda/index.cfm ) using the following keywords: circulating tumor DNA, ctDNA, cell-free DNA, circulating cell-free DNA, cfDNA, liquid biopsy, and plasma and DNA. The search results were compared against the annual reports of medical devices cleared or approved by the FDA published between 2013 and 2022 for confirmation, and assays related to ctDNA were selected. We identified three in vitro diagnostic devices (Epi proColon, cobas EGFR Mutation Test, and therascreen PIK3CA RGQ PCR Kit) and two specialized laboratory services (Guardant360 CDx and FoundationOne Liquid CDx) . FoundationOne Liquid CDx was approved as a companion diagnostic on October 26 and November 6, 2020.
The approved companion diagnostic indications are (1) to identify mutations in the BRCA1 and BRCA2 genes in patients with ovarian cancer eligible for treatment with rucaparib (RUBRACA, Clovis Oncology, Inc.), (2) to identify ALK rearrangements in patients with non–small cell lung cancer eligible for treatment with alectinib (ALECENSA, Genentech USA Inc.), (3) to identify mutations in the PIK3CA gene in patients with breast cancer eligible for treatment with alpelisib (PIQRAY, Novartis Pharmaceutical Corporation), and (4) to identify mutations in the BRCA1 , BRCA2 , and ATM genes in patients with metastatic castration-resistant prostate cancer eligible for treatment with olaparib (LYNPARZA, AstraZeneca Pharmaceuticals LP) . The NGS-based ctDNA tests related to companion diagnostics, such as FoundationOne Liquid CDx and Guardant360 CDx, are transitioning to specialized laboratory services. FDA-approved tests require evaluations of their analytical performance. Recent laboratory-based tests have undergone extensive evaluation using large sample numbers for advanced assay interpretation and reporting, clinical validation, and utility . Considering the cost of the tests, the number of tests required for evaluation is prohibitive for small laboratories. Specialized laboratories use their own processes to provide users with reports. Therefore, testing is changing from a complex assay performed at individual laboratories to a more specialized service, where each specialized laboratory devises its own analysis processes and testing takes the form of a laboratory service. In May 2017, the European Union announced the strengthening of its regulations through the In Vitro Diagnostic Regulation (IVDR) and the Medical Device Regulation (MDR). The transition was completed in May 2022, following a 5-year transition period.
The European Commission announced a new database search service called the European Database on Medical Devices (EUDAMED), which will consist of six modules: actor registration; unique device identification (UDI) and device registration; notified bodies and certificates; clinical investigations and performance studies; vigilance and post-market surveillance; and market surveillance. The EUDAMED database was scheduled for release in July 2022 but has been postponed to the third quarter of 2024 due to a delay in the module development process. We had difficulty performing a systematic search for medical devices or in vitro diagnostics with the CE mark. Therefore, we searched for recently published papers that mentioned CE-marked products . In Korea, the ctDNA testing market has recently expanded since the In Vitro Diagnostic Medical Devices Act was promulgated in April 2019. A search for ctDNA tests in the medical device database of the Korea Ministry of Food and Drug Safety ( https://udiportal.mfds.go.kr/search/data/P02_01#list ) identified a total of seven domestic tests. The Smart Biopsy EML4-ALK Detection Kit (CytoGen, Seoul, Korea) was first nationally accredited on April 20, 2016, and the following tests have since been nationally accredited in sequence: ADPS EGFR Mutation Test Kit V1 (GENECAST, Seoul, Korea), Droplex KRAS Mutation Test v2 (Gencurix, Seoul, Korea), Droplex PIK3CA Mutation Test (Gencurix), PANAMutyper R EGFR V2 (PANAGENE, Daejeon, Korea), AlphaLiquid 100 (IMBDX, Seoul, Korea), and LiquidSCAN (GENINUS, Seoul, Korea). Most are PCR-based assays, such as RT-PCR and dPCR, but AlphaLiquid 100 (IMBDX) and LiquidSCAN (GENINUS) are NGS-based assays. The introduction of ctDNA testing and technical advancements in NGS have affected the diagnostic and therapeutic aspects of cancer. Many biomarkers associated with treatment options have been identified for cancer patients whose tissues were previously unavailable for biopsies.
The widespread use of NGS and its increased availability have changed the concept of companion diagnostics from ‘one gene-one drug’ to ‘multi-gene, multi-drug’ treatment . Recently, experts from the National Comprehensive Cancer Network have recommended measuring multiple predictive genes associated with companion diagnostics for certain cancers . Tissue biopsy remains the diagnostic standard because of its important pathological diagnostic value and the need to assess biomarkers without DNA alterations, such as estrogen receptor expression and other protein or RNA biomarkers . Nevertheless, ctDNA testing is undoubtedly a very promising technology, with broad clinical applications for early diagnosis, monitoring, management, and prognosis . When performed at metastatic diagnosis alongside standard tissue biopsies, ctDNA testing can provide key advantages, either as a baseline for follow-up testing after treatment or in situations in which more rapid identification of targetable alterations is needed to guide first-line therapy . In addition, ctDNA testing plays an important role in real-time monitoring of various aspects of tumors due to its simple sample preparation . We expect that the strengths of ctDNA, including the potential ability to detect latent cancers and track tumor-specific mutations, will naturally enable minimal residual disease (MRD) assessment . The ability to identify microscopic residual disease and occult metastases could revolutionize the individualization of adjuvant and consolidation therapy . Despite the potential use of ctDNA to determine MRD, its use for this purpose remains premature due to many unresolved issues . Therefore, the reliability and clinical validity of ctDNA analysis are becoming increasingly important, as they can directly impact patient care with respect to treatment options.
To assess the current status of ctDNA testing and ongoing developments, we searched the FDA clinical trial database ( ClinicalTrials.gov ). A query using ctDNA as the keyword returned 978 clinical trials as of June 2022. The results of the 109 completed trials were reviewed using articles uploaded to the database or by searching PubMed with the national clinical trial number. Twenty-two clinical trials were available, and we reviewed and compared their preanalytical and analytical variables . The preanalytical variables of the blood collection tube used, the volume of whole blood collected, time to sample processing, centrifugation protocols, and DNA extraction methods were missing or unidentifiable in over half of the reports, despite their importance. When provided, the information varied among studies; specimen processing within 24 hours using EDTA tubes was a possible confounding factor regarding the stability of the ctDNA. Some trials requiring detection of low-VAF variants, such as ‘using copy number variation of ctDNA for cancer diagnosis’ or ‘biomarker response according to treatment in metastatic cancer’, used only 25 ng of DNA, which appears insufficient and therefore cannot exclude the possibility of false negatives. The use of FDA-approved assays among trials was low, at 13.64% (3/22). Information regarding approval from other institutions or agencies was often unavailable, but most clinical trials (72.73%, 16/22) utilized non–FDA-approved testing methods. Despite the considerable therapeutic influence of ctDNA testing and companion diagnostics, the current practice of utilizing various ctDNA tests without a consensus on clinical validation is questionable, as this review of clinical trials and available information demonstrates.
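The concern that 25 ng of input DNA may be insufficient for low-VAF variants can be made concrete with a sampling calculation: the number of mutant fragments available at a locus is bounded by the genome copies present in the input. The sketch below models mutant-fragment counts as Poisson; the ~3.3 pg haploid-genome mass and the 5-copy detection floor are illustrative assumptions, and real assays' requirements differ.

```python
import math

PG_PER_HAPLOID_GENOME = 3.3  # approximate mass of one haploid human genome

def detection_probability(input_ng, vaf, min_copies=5):
    """Probability that at least `min_copies` mutant molecules are sampled.

    Models the mutant-fragment count at a locus as Poisson with mean
    (haploid genome copies in the input) * VAF. The `min_copies` floor
    stands in for an assay's error-suppression requirement.
    """
    copies = input_ng * 1000.0 / PG_PER_HAPLOID_GENOME  # ng -> pg -> copies
    lam = copies * vaf
    p_below = sum(math.exp(-lam) * lam**k / math.factorial(k)
                  for k in range(min_copies))
    return 1.0 - p_below

# 25 ng is roughly 7,600 haploid genome copies, so at 0.1% VAF only
# ~7.6 mutant copies are expected; detection is no longer guaranteed.
```

Under these assumptions, doubling the DNA input or the VAF markedly raises the chance of sampling enough mutant molecules, which mirrors the SEQC2 finding that sensitivity below 0.5% VAF depends on input amount.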
Discrepant results between tissue biopsy and ctDNA testing are common, and the underlying reasons for these discrepancies include temporal heterogeneity (an archival tumor specimen), spatial heterogeneity (a subclonal mutation), and analytical errors . In the case of analytical errors, the source of the error should be evaluated before any therapeutic action is taken. If such an investigation or validation is lacking, this should be disclosed to enable the participants or patients to give proper informed consent. The common rules followed by institutional review boards (IRBs) when reviewing research state that prospective participants (or a legally authorized representative) must be provided with sufficiently detailed information regarding the research. The consent form containing this information must be organized to facilitate an understanding of why one might or might not want to participate . The same should apply when a patient opts for a ctDNA test. Patients should be able to choose a ctDNA test based on detailed information about the accuracy of the test, the list of genes that can be analyzed, and the laboratory’s experience in analyzing mutations. Laboratories must be aware of any new developments in ctDNA testing. A changing trend in ctDNA testing is demonstrated by the recently FDA-approved ctDNA assays. Previously, FDA-approved assays were mostly in vitro diagnostic devices (IVDs) that could be conducted by small-scale clinical laboratories. However, recent FDA-approved assays require referrals to larger, specialized laboratories with institutional accreditation. Such changes are inevitable due to the testing complexity and the higher reliability required by clinical practice. Cutting-edge ctDNA testing is costly and requires first-rate laboratory infrastructure and highly specialized, multi-disciplinary professionals. The trend toward centralization and referrals is in line with these requirements.
Historically, the introduction of tumor markers resulted in the overutilization of tumor marker testing in the hope of providing definitive answers in cancer diagnostics. The need for specific guidelines on how tumor markers should be utilized reflects the concern that misuse of tumor markers could result in misdiagnosis or delayed treatment . In recognition of these issues, in 2002, the National Academy of Clinical Biochemistry produced the Laboratory Medicine Practice Guideline for tumor biomarkers . The guideline provides recommendations based on expert opinions from the field of IVD and the marketplace. It covers 16 different cancers and their established tumor markers, their qualities, and the technological requirements. A similar development to the guideline for tumor marker testing is anticipated for ctDNA testing soon; however, validation guidelines are currently lacking. ctDNA testing requires clinical validation prior to its clinical implementation. These clinical validation guidelines will inevitably require updating, refinement, and modification as knowledge and understanding of ctDNA and its biological role increase. In summary, ctDNA testing requires a minimum level of safety assurance through clinical validation to ensure its clinical utility. The testing requires cooperation among multi-disciplinary experts to provide meaningful and reliable results. Establishing proper clinical validation guidelines for ctDNA will enable access to better cancer treatment and reliable testing in the future.
Improving Family Medicine Residents’ Confidence to Assess and Manage Psychiatric Crises in an Outpatient Clinic
Primary care physicians (PCPs) are increasingly managing mental health in outpatient settings. PCPs manage most of the care for mild-to-moderate mental illnesses (eg, anxiety and depression) and care for up to one-third of patients with severe mental illness. Rising rates of depression, a nationwide shortage of psychiatrists, which is accentuated in rural areas, and insurance barriers all contribute to the increased frequency of PCPs managing mental illness. Mental health care includes assessing and treating psychiatric crises, which are defined as “any situation in which a person’s behavior puts them at risk of hurting themselves or others and/or prevents them from being able to care for themselves or function effectively in the community.” Forms of psychiatric crises include acute suicidal ideation, acute homicidal ideation, and psychosis that interferes with an individual’s decision-making capacity. As PCPs increasingly manage mental health and encounter psychiatric crises, there is an urgent need for evidence-based trainings for PCPs in practice and residency requirements (ie, from the American Board of Family Medicine or Accreditation Council for Graduate Medical Education) to address this life-and-death issue. There is limited research on PCP assessment of psychiatric crises; this is critical because nearly half of individuals who die by suicide have contact with their PCP within 1 month of their death. The problem begins with screening. PCPs have been found to assess for suicidal ideation in only 36% of patients experiencing moderate to severe depressive symptoms. Another study found that, even after implementation of outcome-improving practice guidelines for depression, PCPs assessed for suicidal ideation in only 24% of depression-focused clinic visits. At well-child visits, most PCPs (61%) do not screen for suicidal ideation.
Education may be key; one study found that PCPs with knowledge of suicide risk assessment were almost 5 times more likely to screen than those without. The prevalence of homicidal ideation and psychosis in primary care clinics in the U.S. is not known; however, since PCPs do not consistently assess for suicidality, one can assume that homicidality and psychosis are not reliably assessed for either. Research into factors that hinder PCPs from adequately assessing and managing psychiatric crises is extremely limited and focuses on suicidal ideation only. Inadequate training likely contributes to PCPs not assessing and managing psychiatric crises; Graham et al found that PCPs felt more competent to assess and treat suicidality after formal training. Since solely screening for suicidal ideation does not reduce suicide attempts, professional training on how to assess and then manage a crisis is crucial. There have been a few calls to address this training need in residency curricula, where practice patterns for PCPs are established. Some residencies have risen to this call by implementing trainings and workflow changes, such as workshops and standardized charting templates, finding significant benefit. These studies document helpful interventions to consider when creating and implementing curricula; yet, they are limited in that they focus solely on suicidality, and it cannot be concluded that these curricula are beneficial for other types of crises. This study helps fill this critically important literature gap by describing a brief training intervention and point-of-care resources that improved family medicine resident confidence in the outpatient assessment and management of all types of psychiatric crises. The current study took place at the primary care clinic of the Mayo Clinic Family Medicine Residency in Eau Claire, Wisconsin.
Although Eau Claire is considered an urban cluster, it serves a large catchment area of rural communities, and family medicine residents regularly rotate through more remote rural outpatient clinics. Additionally, access to adult psychiatry in the area is extremely limited, and often PCPs, including family medicine residents, fill the gap in caring for patients suffering from severe mental illness. The program admitted its first class of residents in 2017 and did not have a residency behavioral scientist until late 2019. Before this research project in 2019-20, faculty physicians varied widely in their confidence and experience in managing psychiatric crises in clinic. The program had no professional training or standardized process on how to assess for and then manage a psychiatric crisis in the outpatient setting. Therefore, the aim of the current study was to increase the resident physicians’ confidence in assessing and managing the various psychiatric crises that can present in a clinic visit. The behavioral scientist created practice guidelines on how to manage a range of psychiatric crises and then developed a curriculum for residents. It was hypothesized that implementation of a brief didactic series, access to supplemental material with workflow changes, and as-needed consultation with the behavioral scientist would improve residents’ confidence.

Participants

Participants were family medicine residents enrolled in the Mayo Clinic Family Medicine Residency—Eau Claire, Wisconsin, program. The family medicine residency is a 3-year program, with a class of 5 residents each year. Therefore, each time the residents were surveyed, there was a potential for a total of 15 participants.

Procedures

This study was exempted from review by the Mayo Clinic Institutional Review Board because the project was conducted in an educational setting and was part of a normal educational practice.
The curriculum was implemented via monthly 1-h didactic sessions delivered over 3 consecutive months. Attendance was 100%, because all residents were excused from their clinical duties to attend these didactics. The first didactic hour focused on training the residents on how to screen for suicidal ideation and behaviors using the Columbia-Suicide Severity Rating Scale (C-SSRS), a valid and reliable questionnaire for assessing suicidal ideation and behaviors. The residents were given a physical copy of the C-SSRS to have in hand, and they were then taught how to use the assessment through a PowerPoint presentation and videos created by the Center for Practice Innovations. Following this activity, the group reviewed cases together to differentiate whether a behavior was a suicide attempt or not. The second didactic had 4 main objectives: (1) defining the types of psychiatric crises, (2) discussing how to screen for all crises (eg, reminding to use the C-SSRS for suicidal ideation and behaviors, teaching how to screen for homicidal ideation and psychosis that impairs decision-making capacity), (3) discerning when a crisis met criteria for inpatient psychiatric hospitalization, and (4) understanding the logistics of admitting a patient to the inpatient psychiatric unit, including (a) admitting directly to the inpatient unit or (b) indirectly through the ED, followed by how to do each of those options when the patient was being voluntarily or involuntarily admitted. Material was presented via PowerPoint and included group review of individual cases to determine if a patient met criteria for inpatient hospitalization, with special focus on what further information was needed to make such a determination.
For the third and last didactic, the objectives included (1) reviewing the definition of a psychiatric crisis, (2) reviewing how to screen for all crises, (3) reviewing criteria for inpatient psychiatric hospitalization and logistics thereof, and (4) understanding appropriate options for outpatient management of psychiatric crises that did not meet admission criteria. The curriculum content is summarized in . A one-page practice guideline document was created that reviewed each crisis, what to screen for (eg, ideation, intent, plan), and step-by-step instruction for what to do depending on if the patient was remaining outpatient or being admitted voluntarily or involuntarily to an inpatient unit (see ). Specifically, this practice document was developed by the residency’s behavioral scientist and then reviewed by another primary care behavioral scientist, followed by review from several residency faculty physicians. Additionally, since information on how to admit directly to the inpatient unit is referenced in this document, the behavioral scientist obtained input from nursing leadership in the inpatient behavioral health unit on how to most effectively provide a safe admission. This practice guideline document was referenced during the second and third didactic to orient the residents to it and remind them of its availability in their workflow. Once residents started using this document in practice, they provided feedback to the behavioral scientist and the document was updated based on that feedback. The document was stored on the residency shared virtual drive, in resource binders with each clinical team, and posted prominently on a bulletin board in the clinic’s precepting space. Additionally, a reference was integrated into the electronic medical system (Epic) that reviewed the steps for each different crisis and included the appropriate verbiage to document the crisis and steps taken. 
Residents were taught how to use this reference by viewing it and watching the behavioral scientist complete it step-by-step in both the second and third didactics. They were then encouraged to practice using the reference independently and ask questions as they arose. Lastly, the behavioral scientist co-precepted with faculty physicians in the clinic 12 h a week and was available for consultation 20 h a week, allowing conversations, assistance, and review of any skills and steps related to the curriculum.

Evaluation Methods

The behavioral scientist created a brief 17-item questionnaire to measure the residents’ confidence in their ability to assess and manage psychiatric crises. Due to the complexity of Wisconsin state law, coupled with institutional policies, published generic measures of resident confidence were not appropriate. Further, a literature review shows that it is very common to use unvalidated assessments of family medicine resident confidence aimed at a particular educational objective. The items were very specific to the steps required for assessing and managing psychiatric crises in our clinic. The questionnaire (see ) was administered at the beginning of the first didactic before any material was presented (Pre) and then again 6 months later (Post). Study data were collected and managed using REDCap. Administering the survey 3 months after completion of the didactic series allowed the residents the opportunity to apply the information in their clinical practice. Prioritizing anonymity inadvertently resulted in being unable to match before and after responses to individual participants. Resident responses were numerically valued on a 5-point Likert scale (0 = not confident at all, 4 = extremely confident). The difference in residents’ confidence ratings before and after the training was assessed using the Mann-Whitney U test, because the data were not normally distributed as assessed by the Shapiro-Wilk test (all P-values < .05).
Analyses were conducted in R version 3.6.3. The Type I error rate was set at 5% without adjustment for multiple comparisons. Analyses were performed on raw scores to preserve measured variation and maximize statistical power. Results are described dichotomously as confident (fairly to extremely confident) or not (not at all to somewhat) to convey the degree of change most simply.
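The Mann-Whitney U comparison described in the analysis plan can be sketched from scratch on Likert-type ratings. The ratings below are hypothetical, not the study's data, and the normal-approximation p-value omits the tie correction, so a vetted implementation (e.g., R's wilcox.test, as used in the study's R environment, or scipy.stats.mannwhitneyu) should be used for real analyses.

```python
import math

def mann_whitney_u(x, y):
    """Mann-Whitney U via pairwise comparison, with a two-sided
    normal-approximation p-value. The tie correction is omitted for
    brevity, so p-values on heavily tied Likert data are approximate.
    """
    n1, n2 = len(x), len(y)
    # U counts, over all pairs, how often x beats y (ties count 0.5).
    u = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a in x for b in y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    # Two-sided p-value from the standard normal CDF (via erf).
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p

# Hypothetical 0-4 confidence ratings (0 = not at all, 4 = extremely);
# the study's raw responses are not reproduced here.
pre = [0, 1, 1, 0, 2, 1, 0, 1, 2, 1, 0, 1, 1]      # 13 pre-training ratings
post = [2, 3, 3, 2, 4, 3, 2, 3, 3, 4, 2, 3, 3, 2]  # 14 post-training ratings
u, p = mann_whitney_u(pre, post)
```

With pre and post samples this well separated, the U statistic is far from its null expectation and the p-value is far below .05, which is the pattern a pre/post confidence gain of this size would produce.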
Further, the literature shows that it is very common to use unvalidated assessments of family medicine resident confidence tailored to the particular educational objective. The items were very specific to the steps required for assessing and managing psychiatric crises in our clinic. The questionnaire (see ) was administered at the beginning of the first didactic before any material was presented (Pre) and then again 6 months later (Post). Study data were collected and managed using REDCap. Administering the survey 3 months after completion of the didactic series allowed the residents the opportunity to apply the information in their clinical practice. Prioritizing anonymity inadvertently resulted in being unable to match before and after responses to individual participants. Resident responses were numerically valued on a 5-point Likert scale (0 = not confident at all, 4 = extremely confident). The difference in residents' confidence ratings before and after the training was assessed using the Mann-Whitney U test, because the data were not normally distributed as assessed by the Shapiro-Wilk test (all P -values < .05). Analyses were conducted in R version 3.6.3. The type I error rate was set at 5% without adjustment for multiple comparisons. Analyses were performed on raw scores to preserve measured variation and maximize statistical power. Results are described dichotomously as confident (fairly to extremely confident) or not (not at all to somewhat) to convey the degree of change most simply. The response rate was 87% pre-intervention (n = 13) and 93% post-intervention (n = 14). The proportion of residents feeling confident handling the various components of assessing and managing psychiatric crises before and after the training is presented in . 
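The nonparametric pipeline described above (a Shapiro-Wilk normality check followed by an unpaired Mann-Whitney U test on the 0–4 Likert scores) can be sketched in Python with SciPy; the study itself used R 3.6.3, and the response vectors below are illustrative, not the study's data.

```python
from scipy.stats import shapiro, mannwhitneyu

# Hypothetical 5-point Likert responses (0 = not confident at all, 4 = extremely confident)
pre = [0, 1, 1, 2, 0, 1, 2, 3, 1, 0, 2, 1, 1]      # n = 13 (87% response rate)
post = [2, 3, 3, 4, 2, 3, 3, 4, 2, 3, 4, 3, 2, 3]  # n = 14 (93% response rate)

# Shapiro-Wilk: p < .05 -> reject normality, justifying a nonparametric test
_, p_pre = shapiro(pre)
_, p_post = shapiro(post)

# Unpaired Mann-Whitney U test: an unpaired test is required because
# anonymity prevented matching each resident's pre and post responses
stat, p_value = mannwhitneyu(pre, post, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```

With clearly separated groups like these illustrative vectors, the test rejects the null hypothesis at the 5% level.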
Before the training, no resident felt confident (1) assessing hallucinations and delusions, (2) determining whether a patient met criteria for direct admission to the inpatient psychiatric unit or needed to go to the ED first, (3) initiating safe transportation to the ED, (4) directly admitting a patient to the inpatient unit when the patient was a voluntary admission, (5) initiating Wisconsin’s Chapter 51 protocol (Wisconsin’s involuntary commitment requirements), (6) collaboratively developing a personalized safety coping plan, or (7) determining if there was enough information to break confidentiality and include a third party. Before the training, screening for suicidal and homicidal ideation was the only item that a majority of residents felt confident about (62%). Resident confidence increased for every aspect of assessing and managing psychiatric crises after the training . The largest improvements in resident confidence were observed for assessing for hallucinations and delusions (+71%) and assessing for suicide and homicide (+70%). Importantly, the proportion of residents feeling confident that they could recognize when inpatient criteria were met rose from 8% to 50% and the prevalence of confidence managing the entire inpatient admission process rose from 8% to 43%, while overall confidence managing a psychiatric crisis that does not meet inpatient criteria rose from 8% to 36%. A short didactic series coupled with a point-of-care, clinic-specific, practice guideline document improved family medicine resident confidence in all aspects of assessing and managing multiple types of psychiatric crises in a family medicine residency clinic. The increased confidence found 3 months after the trainings suggests durable retention of these processes and skills. Given the mental healthcare milieu in the U.S., this training helps fill a critical need to prepare PCPs to appropriately assess and manage psychiatric crises. 
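The dichotomized reporting above (confident = "fairly" to "extremely confident") can be reproduced with a small helper, assuming that on the 0–4 scale "confident" corresponds to a rating of 3 or 4; the ratings below are illustrative, not the study's raw data.

```python
def pct_confident(ratings, threshold=3):
    """Share (%) of respondents rating >= threshold, i.e. 'fairly' or 'extremely confident'."""
    return 100 * sum(r >= threshold for r in ratings) / len(ratings)

# Illustrative ratings for one item, e.g. "recognize when inpatient criteria are met"
pre = [0, 1, 1, 2, 0, 1, 2, 3, 1, 0, 2, 1, 1]       # 1 of 13 confident -> ~8%
post = [3, 1, 3, 4, 2, 3, 2, 4, 2, 3, 4, 1, 2, 2]   # 7 of 14 confident -> 50%

change_pp = pct_confident(post) - pct_confident(pre)
print(f"{pct_confident(pre):.0f}% -> {pct_confident(post):.0f}% ({change_pp:+.0f} percentage points)")
```

Reporting the change in percentage points, as in the text, avoids the ambiguity of relative percentage changes on small denominators.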
A simple, short intervention was chosen to increase generalizability, since PCPs and family medicine residents have innumerable competing demands on their time. This study demonstrates that 3 h of training and a one-page practice guideline, specific to clinic and state regulations, can help improve PCP confidence to help patients experiencing a psychiatric crisis. This training uniquely equips family medicine residents and PCPs to handle psychiatric crises beyond suicidal ideation, including homicidal ideation, harm-related command hallucinations, and delusions that impair a patient's ability to maintain short-term safety. All other trainings to address psychiatric crises in primary care reported in the literature focus solely on suicidality. Educating physicians on all crises is critical because these other crises will continue to occur in primary care. As Graham et al have shown, if physicians have not obtained professional training, they are less willing to assess and treat, and if training is only focused on suicidal ideation, a significant number of potential crises will be missed, with potentially lethal consequences.

Limitations and Future Research

This study has several limitations. First, there was no comparison group, so improvements may in part be due to other changes in the program, health system, or the normal development of trainees. Second, the intervention included didactics and multiple versions of quick, point-of-care references; thus, it is unclear which component(s) were most valuable. The behavioral scientist is often available for in-the-moment consultation, which is a resource not available in most primary care clinics. Third, this study took place at a single family medicine residency program, so the results may not generalize to other family medicine residency program settings, other specialties, or independently practicing PCPs. 
Other drawbacks of this study's design were (a) an inability to match each participant's data from pre-intervention to response at post-intervention and (b) the limited number of participants, both of which decrease statistical power; however, the threshold set for statistical significance was consistently surpassed. Also, the training could likely be improved because fewer than half of residents were confident on 8 of the items after completing the curriculum. Improvements could include a fourth hour of training to synthesize all 3 didactics, a 6-month booster training hour, and more opportunity for role playing or watching case examples. It would also be helpful to survey confidence beyond the 3 months to better assess how well this critical information is retained long-term. Finally, confidence in assessing and managing a psychiatric crisis does not necessarily lead to actual improved practice or patient outcomes. The survey was created specifically for this project and has not been validated as a reflection of provider behavior; therefore, future research should examine the impact of training on actual physician behaviors and patient outcomes for those presenting to primary care with severe, acute mental health needs. We hope that this work spurs future efforts to efficiently equip PCPs with the skills to improve their practice and potentially save lives.
Orthodontic treatment in periodontally compromised patients: a systematic review
a4a384e5-e4ae-478c-8795-e4317a45b651
9877066
Dental[mh]
The need for appropriate orthodontic appliances for the challenging treatment of adult patients has increased considerably, especially within the last six decades . The main reasons for the increasing demand for orthodontic treatment among adults are beauty ideals on the one hand and the development of almost invisible appliances on the other. According to a survey by the American Association of Orthodontists® (AAO), the number of adult patients in the USA and Canada increased by 16% between 2012 and 2014. Accordingly, 27% of all orthodontically treated patients in these two nations were at least 18 years old . Since a flawless appearance is today associated with better social and professional opportunities and increased self-confidence, more and more adults place high expectations on the abilities of orthodontists [ – ]. The treatment of this patient group in particular proves highly demanding, often requiring interdisciplinary cooperation with periodontists, restorative dentists, implantologists, and maxillofacial surgeons. Every orthodontic treatment is based on the interaction of teeth with their respective periodontium. In adults, however, these conditions have changed. Periodontopathies, which increase with age, lead to a destruction of the periodontal supporting tissue. This results in a smaller reaction zone between the root surface and the remodeling alveolar bone. The resistance center of the affected teeth also shifts further apically due to the resorption of the alveolar ridge. These changes result in a greater deflection of the teeth when force is applied, and the risk of unwanted tilting and root resorption increases. Careful treatment planning must ensure inflammation-free periodontal conditions and take into account the altered biomechanics. A structured interdisciplinary treatment approach that considers the individual therapeutic needs and possibilities is decisive for long-term therapeutic success. 
Search strategy

The literature search was carried out within the electronic databases "Pubmed" and "DIMDI," using previously defined search keys for the publication period from January 1990 to July 2022:

• aggressive periodontitis AND ortho*
• aggressive periodontitis AND orthodontics
• chronic periodontitis AND ortho*
• chronic periodontitis AND orthodontics

In addition, the journals "Community Dental Health," "European Journal of Oral Sciences," and "Parodontologie" were searched by hand.

Study design

The literature search comprised randomized controlled trials (RCT), cohort studies, case–control studies, and cross-sectional studies. Animal experimental studies, case reports, and reviews were generally excluded.

Population

Only studies with systemically healthy patients were included.

Language

The searches were confined to publications in German or English.

The titles and abstracts of the publications identified by the database and hand searches were analyzed, and the full texts examined to determine whether they matched the search criteria. If a match with the keywords could not be clearly identified from the abstract, the corresponding original work was requested and checked. Finally, the bibliographies of the retrieved full texts were reviewed in the same way. The analysis of all publications followed a staggered, multi-stage search scheme.

Literature search

The electronic literature search yielded 1067 publications (Fig. ). The manual search and review of all related bibliographies resulted in an additional 1591 hits. Of these, 881 publications remained after the exclusion of duplicates. Following screening, 43 articles were classified as potentially relevant and the originals were reviewed. As a result, 6 studies whose thematic focus differed from the research question were excluded. 2 studies (Cardaropoli D et al. 2001, Re S et al. 2004) were excluded as they were identical to two studies already included in this review (Corrente G et al. 2003, Cardaropoli D et al. 2004). 28 publications were not taken into account due to deficiencies in content: some of these articles did not provide any evaluable information on materials and methods or results; in particular, the periodontal parameters were missing. In two publications, periodontal pre-treatment and orthodontic extrusion served the sole purpose of implant site preparation; i.e., periodontally compromised teeth were extracted prior to implant placement. 
This treatment concept contradicts the question posed in this paper. Finally, five studies were analyzed for this systematic review. Table lists these studies with their characteristic features regarding the study design, the study structure, and the conduct of the study. The studies differ considerably in their heterogeneity:

Methodological heterogeneity:
• 1 RCT, 1 intervention study with two intervention groups, 3 intervention studies without a control group
• significantly different case numbers, ranging from n = 267 to n = 10
• significantly different follow-up intervals, or no follow-up at all
• timing of outcome measurement varies widely, partly after each intervention (PA pretreatment, PA surgery, orthodontics, follow-up), partly only before/after

Clinical heterogeneity:

Variability of participants:
• minimal ethnic distribution, studies only from Japan and Italy
• predominantly middle-aged patients
• type of periodontitis not clearly classified
• compliance of patients partly documented via plaque score, partly no information
• different teeth treated, partly molars, partly anterior teeth

Type and time of outcome measurements:
• different clinical parameters collected
• scope and type of diagnostic measures vary greatly
• timing of measurements varies widely, partly after each intervention, partly only before/after
• measurement partly based on study models, partly on radiographs, partly clinical
• measurement partly performed by one practitioner, partly by two, partly no information at all
• length of follow-up varies

Intervention characteristics:
• type of PA pretreatment: surgical (with or without bone substitutes, with or without antibiosis) vs. non-surgical
• type of orthodontic movement: intrusion, extrusion
• type of orthodontic appliance
• applied orthodontic forces vary
• duration of orthodontic treatment
• recall interval varies

Descriptive presentation of the included studies

The five studies under discussion included one randomized controlled trial and four intervention studies [ – ] with a total of 366 adult patients (Table ). Among them, 75 patients suffered from chronic periodontitis . The authors of the remaining studies did not define the type of periodontitis, describing it only as severe or advanced periodontitis. All patients were treated using an interdisciplinary approach. Periodontal therapy was performed in all cases by means of supragingival and subgingival scaling and root planing (SRP) prior to orthodontic treatment. Four studies only included patients with a vertical bony defect of ≥ 6 mm on one tooth [ , – ]. None of the included studies considered the type of malocclusion or crowding; however, several studies describe, or use as a further inclusion criterion, the pathological migration of teeth and the associated development of diastemata due to periodontal disease [ – ]. Smokers were excluded in four studies [ , – ]. Re et al. did not provide any information in this respect . Except for 128 patients , flap surgery was performed in all patients to gain visual access to the root surfaces and alveolar bone. In 61 patients , enamel matrix proteins or bone substitute materials were used. The teeth of 47 patients were treated with an extrusion mechanism ; all others were treated with a multibracket appliance (MBA) for the intrusion of the teeth. All patients regularly received professional tooth cleaning and oral hygiene instructions as part of periodontal therapy. The selected studies differed considerably regarding the orthodontic technique, the force used, and the duration of treatment. After completion of the orthodontic therapy, the patients (except for those participating in the study by Ogihara and Wang ) were treated with fixed retainers for long-term stabilization. In two studies, no information was provided regarding the follow-up period. In two other studies, follow-up took place after 12 months . Re et al. divided their patients into five different follow-up groups (2a, 4a, 6a, 10a, 12a) . 
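As a sanity check on the screening narrative above, the exclusion counts can be tallied; the assumption that the two implant-site publications were excluded in addition to the 36 other exclusions is ours, inferred from the stated final count of five studies.

```python
# Records identified
database_hits = 1067
hand_search_hits = 1591
after_deduplication = 881           # publications remaining after duplicate removal

# Full-text screening of potentially relevant articles
potentially_relevant = 43
off_topic = 6                       # thematic focus differed from the research question
duplicate_reports = 2               # identical to studies already included
content_deficiencies = 28           # e.g. missing periodontal parameters
implant_site_preparation = 2        # extraction before implant placement (assumed extra exclusion)

included = (potentially_relevant - off_topic - duplicate_reports
            - content_deficiencies - implant_site_preparation)
print(f"Total hits: {database_hits + hand_search_hits}, included studies: {included}")
```

This reconciles the reported counts: 43 potentially relevant articles minus 38 exclusions leaves the five analyzed studies.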
Results of orthodontic treatment on periodontal status

The following results were obtained from a total of 366 patients included in this systematic review. Among them, 343 patients from five studies received orthodontic treatment; 23 patients in one study served as the control group and were not treated orthodontically . The periodontally compromised dentition was not negatively affected by orthodontic treatment in any of the studies under discussion. On the contrary, the interdisciplinary therapy reduced the probing depths by an average of 3.31 mm with an average clinical attachment gain of 5.28 mm. These values are overall mean values calculated from the mean values of the included studies, weighted by the respective number of cases. The following figures show the decrease in probing depths (Fig. ) and clinical attachment gains (Fig. ) as a result of interdisciplinary therapy. The respective mean values and standard deviations are shown. For Ogihara and Wang , periodontal orthodontic treatment in the intervention group resulted in ΔPPD = 4.2 ± 1.35 mm. In the study by Re et al. , the probing depths were reduced by an average of 3 mm. This study has by far the largest number of cases. ΔPPD is 2.9 ± 0.67 mm in the group of 128 non-surgically treated patients and 3.0 ± 0.78 mm in 129 patients who underwent open flap debridement. Ghezzi et al. found a significantly larger decrease of the mean probing depth of 5.6 mm. In the studies by Cardaropoli et al. and Corrente et al. , the mean probing depth was reduced in the course of treatment by approximately the same amount, namely, by 4.3 ± 1.12 mm and 4.4 ± 1.33 mm, respectively. Re et al. did not provide any information on clinical attachment level (CAL). While in the remaining three studies of evidence class IIc, the mean attachment gains were grouped fairly closely around a value of 6 mm, the measurement results of the patients with Ogihara and Wang deviated significantly from this. 
In the intervention group, the CAL decreased by 3.7 ± 0.76 mm. The values of the control group are in brackets, since these patients did not receive orthodontic treatment. The diagram shows that the standard deviation is lowest in the study of Ogihara and Wang , so the individual values of the patients are the least widely distributed. 
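The pooled values quoted above (overall means "calculated from the mean values of the included studies by weighting the respective number of cases") correspond to a case-weighted mean. A minimal sketch follows; apart from the two Re et al. arms (n = 128 non-surgical, n = 129 surgical), the per-study case numbers are illustrative placeholders, not figures from the review.

```python
def weighted_mean(values, weights):
    """Case-weighted mean of per-study means: sum(v_i * n_i) / sum(n_i)."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Reduction in probing depth (ΔPPD, mm) per study arm; only the two Re et al.
# arm sizes come from the text, the remaining case numbers are hypothetical
dppd = [4.2, 2.9, 3.0, 5.6, 4.3, 4.4]
n    = [ 30, 128, 129,  10,  23,  23]

print(f"pooled ΔPPD ≈ {weighted_mean(dppd, n):.2f} mm")
```

Because the two large Re et al. arms dominate the weights, the pooled value sits much closer to 3 mm than a simple unweighted average of the study means would.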
The aim of this comprehensive literature review was to clarify the therapy-relevant aspects of orthodontic treatment with altered biomechanics in the periodontally compromised dentition. Even though extensive attempts were made to collect information relating to the subject, the results of this review should be evaluated critically. The main reason for this is that both the number of relevant publications and the methodological quality of the studies are severely limited. Only one study is randomized and includes a control group . In addition, the case numbers of the individual studies are very small and the follow-up periods short, so that the long-term effect of the intervention cannot be estimated. The risk of bias is correspondingly high. This review summarizes the findings of the analyzed articles. Although a new classification of periodontal diseases was introduced by Caton et al. in 2018, this systematic review used the 1999 classification system by Armitage et al., as most studies to date are based on this classification. The orthodontic treatment of periodontitis patients seems to have a positive effect on the affected periodontium, both functionally and aesthetically. There is consensus in the literature that the establishment and maintenance of an inflammation-free periodontal condition must be ensured during orthodontic therapy and beyond. Only regular plaque checks with oral hygiene instructions and professional tooth cleaning prevent the orthodontic forces from having a negative effect on the already impaired periodontium. As early as 1977, Ericsson et al. were able to show in an animal study in five Beagle dogs that tilting and intruding forces in the periodontally impaired but plaque-free periodontium do not cause infrabony defects. On the other hand, the same movements applied to plaque-accumulating teeth caused a displacement of supragingival plaque to subgingival, which favors subsequent attachment loss . 
Irregular recall during orthodontic therapy is associated with a significantly higher rate of tooth loss . Accordingly, all supportive periodontal therapy (SPT) patients were involved in regular follow-ups. In addition to retraction and uprighting, intrusion and extrusion play a central role in periodontitis patients and have been examined scientifically in detail. In various animal studies, the intrusion of teeth with iatrogenically induced attachment loss led to new bone and cementum formation in the absence of inflammation [ – ]. The findings from these publications formed the basis for the later orthodontic treatment of periodontitis patients. Intrusion reduced the probing depths of migrated and extruded incisors and significantly improved the marginal bone level . The use of minimal, continuous force is critical for the controlled movement of periodontally compromised teeth. As early as 1989, Melsen et al. achieved the most favorable results with orthodontic forces of 5–15 g/tooth in a comparative study using different methods of anterior tooth intrusion . In patients with advanced chronic periodontitis, subcrestal defects were almost completely eliminated, interdental papillae were restored, and probing depths were reduced [ , , ]. This is in line with more recent systematic reviews on this topic . An "optimal force" ensures fast tooth movement combined with the lowest possible discomfort for the patient and minimal damage to the surrounding tissue . The position of the resistance center is decisive for this. If attachment is lost, the resistance center of the tooth shifts apically. As a result, orthodontic forces must be reduced in order to avoid uncontrolled tooth tilting. Bone resorption and slower bone remodeling also reduce the anchoring quality of natural teeth. In this case, the reciprocal forces generated by each tooth movement are less easily absorbed by adjacent teeth, which results in undesirable side effects . 
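The biomechanical consequence of attachment loss described above can be illustrated with a simple lever-arm calculation: the apically shifted resistance center lengthens the distance d between the point of force application and the resistance center, so for a constant tipping moment M = F · d the applied force must be reduced. The distances and the 10 g starting force below are illustrative assumptions; only the 5–15 g/tooth range comes from the text.

```python
def force_for_equal_moment(f_healthy, d_healthy, d_compromised):
    """Force producing the same tipping moment M = F * d after the
    resistance center has shifted apically (longer lever arm)."""
    return f_healthy * d_healthy / d_compromised

# Illustrative lever arms from bracket to resistance center (mm), assumed values:
d_healthy = 8.0        # intact periodontium
d_compromised = 12.0   # after attachment loss

f = force_for_equal_moment(10.0, d_healthy, d_compromised)  # 10 g applied force
print(f"Reduced force: {f:.1f} g instead of 10.0 g for the same moment")
```

Under these assumptions the force must drop by the ratio of the lever arms (8/12, i.e. to about two thirds), which stays within the 5–15 g/tooth range reported by Melsen et al.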
The use of skeletal anchorage elements makes orthodontic treatment possible even in these complex cases. The method is also barely visible and meets the high aesthetic demands of adult patients. Wehrbein et al. were among the first to investigate the resilience and stability of palatal implants (Orthosystem Straumann®, Straumann® AG, Basel, Switzerland). This sagittally centered titanium implant, osseointegrated in the anterior palate region, has proven its worth in the treatment of patients with various malocclusions [ – ]. Histologically, palatal implants show good osseointegration that ensures positionally stable anchorage throughout the period of orthodontic treatment . According to a scientific statement by the German Society for Orthodontics (Deutsche Gesellschaft für Kieferorthopädie e.V., DGKFO), palatal implants and cortical screws can be successfully used in a variety of orthodontic treatment tasks and provide reliable results. These anchorage elements are especially indicated in situations with reduced anchorage quality of the natural dentition, as is common in periodontally impaired patients with extruded teeth, advanced bone resorption, and tooth loss . To date, however, no studies have investigated the skeletal anchorage of multibracket appliances (MBAs) in periodontitis patients. Only isolated case reports document the clinical benefit . In addition to tooth intrusion, forced extrusion has a firm place in the orthodontic treatment of periodontitis patients. It is used for isolated intraosseous defects. These can have several causes: uncontrolled tooth tipping and occlusal trauma, but also local irritation factors such as insufficient restorations, food impaction, or plaque. Since the neighboring teeth are usually periodontally intact, resective measures are contraindicated in these cases. Depending on the morphology of the defect, regenerative methods also have only a limited effect here.
Orthodontic extrusion is indicated especially for single- or double-walled subcrestal defects. In this way, infrabony defects can be corrected, periodontal pockets reduced, and associated gingivitis eliminated . The choice of orthodontic appliance is based on the biomechanics of the findings. In most cases, fixed appliances are indicated to correct tooth misalignments in all three spatial planes. This is reflected in the included publications: MBAs were used in all patients. However, adults with a high aesthetic awareness often reject conventional MBAs. Aligners are a new generation of removable orthodontic devices that are increasingly used in adult treatment. The Invisalign® technique (Align Technology, Santa Clara, CA, USA), which has been established for several years, uses a series of these transparent plastic splints to achieve an end position previously planned on a three-dimensional model . Bite position corrections and extensive three-dimensional movements, however, cannot be implemented with the aligner technique alone and still require the use of fixed appliances . As the splints can be removed and thus cleaned easily, they are favorable for periodontal hygiene. Clinically, this property is reflected in a significantly better plaque index and lower inflammation values compared to fixed appliances [ – ]. In periodontally healthy patients, orthodontic treatment with MBAs usually leads only to temporary impairment of periodontal structures . However, these results must be viewed critically with regard to the reactions of the periodontally impaired patient. A shift of the subgingival microbiota towards periodontopathogenic anaerobic bacterial species during MBA treatment has been observed in numerous studies . A clear objective in the use of fixed appliances must therefore be to prevent plaque accumulation as far as possible.
After augmentation, the early application of orthodontic forces seems to favor the remodeling of the bone substitute material and the consolidation of the bony defect as soon as 5–10 days postoperatively . Especially in this early phase, intrusive moments seem to have an advantageous effect on tissue remodeling and lead to aesthetically excellent results [ , , ]. Cardaropoli et al. see early intrusion of teeth as the prerequisite for natural periodontal regeneration. The elongation of the collagen fiber apparatus acts as a natural barrier that prevents apical epithelial proliferation and allows attachment gains. Periodontal tissue capable of remodeling is the prerequisite for functional and structural restoration of the periodontium. Diedrich et al. as well as Juzanx and Giovannoli therefore call for regenerative treatment of teeth with advanced attachment loss before they are moved orthodontically towards an infrabony defect. This therapy concept has been successfully implemented in case reports [ – ]. Evidence-based research shows that guided tissue regeneration and the use of enamel matrix derivatives (Emdogain®) are safe and effective regenerative methods. Their use in the treatment of infrabony defects has been proven over decades and provides predictable treatment results [ – ]. In periodontitis-damaged teeth, delayed tissue remodeling often necessitates permanent retention, which can only be achieved with fixed retainers [ – ]. Flexible lingual retainers have proven their worth in these situations and are currently regarded as the gold standard. In contrast to other permanent retention measures, they allow physiological tooth mobility. Unlike removable appliances, they are largely independent of patient compliance. When bonding the wires, exact positioning is crucial in order to protect the surrounding soft tissue as much as possible and to enable undisturbed dynamic occlusion.
Since bonded retainers promote plaque accumulation, it is essential that they can be kept clean easily. To date, there are no long-term studies documenting the stability of the results of periodontal–orthodontic therapy . However, individual case reports with long-term follow-up demonstrate that a healthy periodontal condition and a stable occlusion can be permanently established. A well-thought-out interdisciplinary therapy concept with strict periodontal follow-up and excellent patient compliance are necessary prerequisites for long-term treatment success [ – ]. The most recent systematic review and meta-analysis by Martin et al. examined the effects of orthodontic tooth movement on clinical attachment level (CAL) changes in stably treated adult periodontitis patients compared to non-periodontitis patients. The authors concluded that in non-periodontitis and stably treated periodontitis patients, orthodontic tooth movement had no significant impact on periodontal outcomes. This systematic review also serves as the basis for the clinical recommendations in the EFP S3-level guideline for the treatment of stage IV periodontitis . The guideline thus suggests undertaking orthodontic therapy in successfully treated stage IV periodontitis patients who require it, based on evidence that orthodontic therapy has no detrimental effects on periodontal conditions in periodontitis patients with a healthy but reduced periodontium. This is in line with our findings. However, it has to be noted that our systematic review included periodontitis patients with different levels of severity, evaluating orthodontic treatment in the periodontally compromised dentition more broadly. Patients with periodontal impairment can be successfully treated with orthodontics as part of an interdisciplinary therapy.
Provided that low, controlled forces are applied under non-inflammatory conditions, orthodontic treatment will not have negative effects on the periodontium. To obtain reliable and predictable therapy results and to formulate guidelines, further randomized controlled trials with uniform study designs and regular follow-ups are necessary.
Investigation of the Efficacy of a
d5af0b84-2f24-4728-b7ec-4f42972e8bfd
11054734
Microbiology[mh]
There are 17 different species of the Listeria genus. Among them, only two species are pathogenic: Listeria ivanovii , found almost exclusively in ruminants, and Listeria monocytogenes ( L. monocytogenes ), which can infect humans and cause illnesses . L. monocytogenes is a facultatively anaerobic, Gram-positive, rod-shaped bacterium known since 1924. It is a psychrotrophic pathogen capable of multiplying at refrigeration temperature (4 °C) and surviving at temperatures as low as −17 °C, with an optimum growth temperature range of 30 to 37 °C . L. monocytogenes infection leads to illnesses such as listeriosis, sepsis, myocarditis, meningitis, encephalitis, bacteremia, and intrauterine or cervical infections in pregnant women that could lead to miscarriages or stillbirth . The most common path of L. monocytogenes infection is through the gastrointestinal tract, similar to other foodborne pathogens. It can be found in various food products like poultry, pork, beef, dairy products, bread, fish, ready-to-eat foods, and fresh produce . Its ability to form biofilms facilitates contamination via surfaces, transport vehicles, and stainless-steel appliances . Liquid and semi-liquid products such as broth, milk, or soft cheeses are suitable growth environments for Listeria and convenient matrices for its detection. Chicken broth is made by cooking chicken and raw vegetables in water and can therefore serve as a representative matrix for chicken, vegetable, and liquid samples . The detection of L. monocytogenes has been performed with various types of biosensors. As their name suggests, optical biosensors provide an optical signal through luminescence, fluorescence, or color. The optical approach is very sensitive and selective but requires expensive optical equipment and is sensitive to environmental interference . Thermal biosensors measure the heat change due to bioreaction between the biorecognition molecule and specific analytes that correspond with the target pathogen.
The method is fast and sensitive, but its selectivity is low due to non-target responses . Electrochemical biosensors (ECBSs) measure the changes in electrical parameters like the current, potential, or impedance of a system due to biological interaction between the target analyte and the biorecognition molecule on the working electrode. ECBSs are sensitive, selective, fast, low cost, and do not require trained personnel. They require a small sample volume and can be portable, which makes them optimal for home or field detection in buffer or food samples . In our previous work, we developed different phage-based approaches for separating and detecting different pathogens . A biosensor utilizing phage-immobilized quaternized carbon nanotubes (q-CNTs) for the detection of L. monocytogenes in 1× phosphate-buffered saline (PBS) with a limit of detection of 8.4 CFU/mL was also presented . This work presents an adaptation of our biosensor to a portable platform constructed from commercially available screen-printed electrodes in order to detect L. monocytogenes in chicken broth samples. This newly adapted platform, as seen in a, offers the ability to detect the pathogen without specialized lab equipment. The most common detection methods for L. monocytogenes include genomic and antibody-based approaches. Electrochemical detection systems relying on antibodies for L. monocytogenes have demonstrated a limit of detection (LOD) of 35 CFU/mL in 1× PBS buffer and 22 CFU/mL in spiked lettuce samples. Despite their high sensitivity, antibody-based systems are constrained by limitations in stability and cost compared to bacteriophage-based alternatives . Another detection strategy involves genomic methods, utilizing DNA or RNA for L. monocytogenes detection, and achieving remarkably low LODs on the order of 10 −14 M.
However, these methods have notable drawbacks, including the necessity of high-temperature sample preparation for denaturation and prolonged sample preparation times ranging from 8 to 24 h at minimum . Additionally, nucleotide- or antibody-based methods are not capable of distinguishing between living and dead bacterial cells and are therefore not particularly attractive for food safety testing. The phage-based method discussed in this work overcomes this drawback . 2.1. Materials Used Carboxylic acid functionalized multiwalled carbon nanotubes (COOH-CNT) with a 30–50 nm outer diameter and a 10–20 µm length (from Cheap Tubes Inc., Cambridgeport, Vermont, USA); 1-pyrenebutanoic acid succinimidyl ester (PBSE), bovine serum albumin (BSA), Tween ® 20, dichloromethane and chlorodimethylsilane (all four from Sigma-Aldrich, St. Louis, MO, USA); dimethyl sulfoxide (DMSO) (Thermo-Scientific, Waltham, MA, USA); disodium phosphate (Na 2 HPO 4 ) (Research Products International Corp, Mt Prospect, IL, USA); sodium chloride (NaCl) (EMD Chemicals, Massachusetts, USA); magnesium sulfate heptahydrate (MgSO 4 ·7H 2 O) (J.T. Baker, Japan); thionyl chloride (SOCl 2 ) and iodomethane (CH 3 I) (both from Alfa Aesar, Haverhill, Massachusetts, USA); potassium phosphate monobasic (KH 2 PO 4 ) and potassium chloride (both from BDH, Solon, Ohio, USA); tris base, tryptone, and ethanol (all three from Fisher Scientific, Hampton, New Hampshire, USA); yeast extract and agar powder (both from Becton Dickinson and Company, Franklin Lakes, New Jersey, USA); and chicken broth (GreenWise ® , Lakeland, Florida, USA) were purchased from the respective commercial vendors and used as received. Phosphate-buffered saline 10× (100 mL) was prepared by mixing 0.2 g of KCl, 8 g of NaCl, 0.245 g of KH 2 PO 4 , and 1.4 g of Na 2 HPO 4 . PBS (1×) (pH 7.4) was prepared by diluting the PBS 10× buffer.
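The PBS recipe above can be checked arithmetically. The following short sketch (illustrative only, not part of the study) converts the stated masses per 100 mL to molar concentrations using standard molar masses and confirms that the 1× dilution matches the usual PBS composition (roughly 137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, and 1.8 mM KH2PO4):

```python
# Sanity check of the 10x PBS recipe (per 100 mL) against the standard
# 1x PBS composition. Molar masses (g/mol) are standard reference values;
# the recipe masses come from the text above.
MOLAR_MASS = {"KCl": 74.55, "NaCl": 58.44, "KH2PO4": 136.09, "Na2HPO4": 141.96}
RECIPE_G_PER_100ML = {"KCl": 0.2, "NaCl": 8.0, "KH2PO4": 0.245, "Na2HPO4": 1.4}

def molarity_mM(grams: float, molar_mass: float, volume_L: float) -> float:
    """Concentration in mmol/L for a given mass dissolved in a given volume."""
    return grams / molar_mass / volume_L * 1000.0

for salt, grams in RECIPE_G_PER_100ML.items():
    c10x = molarity_mM(grams, MOLAR_MASS[salt], 0.1)  # 100 mL = 0.1 L
    print(f"{salt}: {c10x:.1f} mM in 10x -> {c10x / 10:.2f} mM in 1x PBS")
```

Running the loop reproduces the expected 1× values (e.g. NaCl comes out at about 136.9 mM), confirming the recipe is internally consistent.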
A quantity of 0.01% Tween 20 solution was prepared by mixing 10 mL of PBS (1×), 85 mL of deionized water (DIW), and 20 µL of Tween ® 20. Luria Bertani (LB) medium (100 mL) (pH 7.0) was prepared by mixing 1 g of tryptone, 0.5 g of yeast extract, and 1 g of NaCl. SM buffer (pH 7.5) was prepared by mixing 100 mM NaCl, 8 mM MgSO 4 ·7H 2 O, 50 mM Tris base, and 0.01% gelatin. Standard brain–heart infusion (BHI) medium was prepared by mixing 37 g of the BHI powder into 1 L of DIW using a magnetic stirrer until a homogenized solution was formed. Chicken broth, 1% dilution, was prepared by diluting 1 mL of chicken broth into 99 mL of 1× PBS and vortex-mixing the resulting solution. DIW with a resistivity of 18 MΩ·cm was used to prepare all the media and chemicals. All buffers and media were sterilized before use. Screen-printed electrodes (SPE) (Zensor, Taichung City, Taiwan) were purchased from C.H. Instruments, Inc., Austin, TX, USA and used as working, counter, and quasi-reference electrodes. All flow-based experiments were performed using a microfluidic flow cell from Metrohm DropSens, Oviedo, Asturias, Spain. All electrochemical impedimetric measurements were performed using a CHI-920C model potentiostat (CH Instruments Inc., Austin, TX, USA).
Both cultures were incubated overnight for 18 h at 37 °C at 200 rpm. One mL of the mid-log phase bacterial culture was centrifuged at 5000 rpm for 8 min. For detection experiments in buffer, the supernatant was removed and washed twice with 1× PBS buffer to remove any media residue, and the pellet was resuspended in 1× PBS buffer. Then, the dilution series was prepared. For detection experiments in 1% diluted chicken broth, all supernatant was removed and 1× PBS buffer was used to wash and remove any media residue, and the pellet was resuspended in 1% diluted chicken broth. The dilution series was also prepared with 1% chicken broth as the media used. Enumeration of bacteria was performed by plate-count techniques and expressed in CFU/mL. A plaque assay was carried out with P100 phage and L. monocytogenes to measure the phage titer and was expressed in PFU/mL. A soft agar overlay technique was carried out to evaluate the specificity of the P100 phage towards the target ( L. monocytogenes ) and non-target bacteria (ser. Typhimurium-291RH and E. coli O157:H7), with the presence and absence of P100 phage. 2.2.2. Electrode Preparation Quaternized carbon nanotubes (q-CNT) were prepared according to the protocol presented by Zolti et al. . Screen-printed electrodes (SPE) were rinsed with DIW and dried at room temperature for 2 h prior to modification with q-CNT. Once the electrode dried, 8 µL of the 1 mg/mL q-CNT solution was drop-cast on the SPE working electrode and then dried at room temperature. After that, PBSE as a molecular tethering agent was used as a crosslinker to attach the P100 phage to the q-CNT modified electrode. The modified SPE was rinsed with 1× PBS and placed in an ice container and 0.5 µL of 20 mM PBSE solution (in DMSO) was dropped onto it and allowed to self-assemble for 15 min. Excess PBSE was removed by rinsing twice with 1× PBS prior to phage attachment. 
One µL of the 10 9 PFU/mL P100 phage solution was drop-cast on the working electrode and kept overnight at 4 °C. The P100 phage contains negatively charged capsids and positively charged tail fibers. The strong positive charge on the q-CNT created an oriented phage layer chemically anchored to the surface, as presented by Zolti et al. . After immobilization, the electrode was rinsed with SM buffer and washed with 1× PBS buffer twice. Following the wash, 0.5 µL of 0.1% BSA solution was deposited on the electrode for 30 min to block areas that might not have been completely modified. Finally, the electrode was incubated in 1× PBS or with 1% diluted chicken broth for 15 min before use in electrochemical experiments. The bacterial solution (100 µL) was drop-cast onto the SPE and incubated for 8 min before the measurement. The impedimetric characterization was carried out using a CHI-920C scanning electrochemical microscope. The electrochemical system was a 3-electrode SPE, as shown in a. Electrochemical impedance spectroscopy (EIS) measurements were performed in 5 mM [Fe(CN) 6 ] 4− /[Fe(CN) 6 ] 3− as redox couple, with a frequency range of 1 Hz to 100 kHz and an AC amplitude of 5 mV. All measurements were performed at room temperature under standard conditions. The modified SPEs were tested in 1× PBS buffer and 1% diluted chicken broth matrices. The negative control contained no bacteria, and the test samples contained different concentrations of L. monocytogenes . The SPE was rinsed with 1× PBS after incubation with the tested solution. A 100 µL quantity of 5 mM [Fe(CN) 6 ] 4− /[Fe(CN) 6 ] 3− solution was dropped on the SPE to cover the working, counter, and reference electrodes prior to impedimetric measurements. The negative control measurement was used as the baseline R CT for each set of measurements presented in this section. Detection experiments under constant flow were performed using a syringe pump connected to a microfluidic flow chamber, as shown in b. 
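The R CT values used throughout this work are read from impedance spectra like these. As a rough illustration of the principle only (not the authors' analysis code), the sketch below assumes an ideal Randles-type interface with hypothetical parameters (Rs = 100 Ω, R_CT = 1500 Ω, Cdl = 1 µF) and reads R_CT off as the width of the Nyquist semicircle between the high- and low-frequency real-axis intercepts, over the same 1 Hz to 100 kHz sweep used in the measurements:

```python
import math

def randles_impedance(freq_hz: float, rs: float, rct: float, cdl: float) -> complex:
    """Ideal Randles-type impedance: Z(w) = Rs + Rct / (1 + j*w*Rct*Cdl)."""
    w = 2 * math.pi * freq_hz
    return rs + rct / (1 + 1j * w * rct * cdl)

def estimate_rct(freqs, z_values):
    """R_CT ~ (low-frequency real intercept) - (high-frequency real intercept)."""
    z_lo = z_values[min(range(len(freqs)), key=lambda i: freqs[i])]
    z_hi = z_values[max(range(len(freqs)), key=lambda i: freqs[i])]
    return z_lo.real - z_hi.real

# Synthetic sweep over 1 Hz ... 100 kHz; parameter values are hypothetical.
freqs = [10 ** (k / 4) for k in range(0, 21)]
z = [randles_impedance(f, rs=100.0, rct=1500.0, cdl=1e-6) for f in freqs]
print(round(estimate_rct(freqs, z)))  # -> 1500
```

In practice the authors report the baseline-corrected change in R CT after bacterial capture; a real analysis would fit the full spectrum rather than take two endpoint intercepts, but the semicircle-width picture is the quantity being tracked.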
The chicken broth was diluted a hundred-fold to 1% in 1× PBS to produce a more homogenized sample. Due to the dilution, the heterogeneous nature of the chicken broth is negated, while the relevant target bacteria concentration is not reduced below the limit of detection. The dilution was performed according to practices common in the field .

3.1. Detection of L. monocytogenes in Buffer and Broth Initial impedimetric measurements were performed with the L. monocytogenes suspended in 1× PBS buffer at a concentration range of 10 2 CFU/mL to 10 6 CFU/mL. Measurements in triplicate were performed to determine the errors. a presents a Nyquist plot with data collected from the 1× PBS buffer experiments. The calibration data with a baseline boxplot within the inset are shown in b, along with the linear confidence limits, with a confidence level of 95%, showing that all points fall within the linear regime. It is visible that at higher concentrations, a larger error is calculated in the buffer.
This larger error arises because one of the three biosensors reached saturation at a lower analyte concentration than the other two; in addition, above 10 4 CFU/mL the signal change slowed at different rates across replicates. Following the buffer experiments, detection experiments were performed in which the negative control and the bacterial solutions were suspended in chicken broth. The diluted broth was used to reduce the effects of inconsistencies in broth composition. The results of these experiments are shown in c,d. The data suggest that exposure to the broth causes a significant reduction in the overall values of the R CT , even after baseline adjustment, with respect to the corresponding measurements in the buffer. Additionally, the chicken broth measurements showed lower calculated error for all measurements. The lowest concentration measured was 10 2 CFU/mL. Two methods were used to calculate the limit of detection (LOD). The first used linear regression and yielded 55 CFU/mL in buffer and 10 CFU/mL in broth. The second was the 6σ method, in which the LOD is defined as any signal more than three standard deviations above the baseline, with a confidence level of α = 0.01. The values of three standard deviations from the baseline were 38.8 Ω in broth and 126.2 Ω in buffer. When using these numbers to calculate the limit of detection from the linear equation of the calibration curve, the value corresponds to 10 CFU/mL in broth and 300 CFU/mL in buffer . A possible explanation for the improvement with broth samples is that the different salts and components reduce the charge transfer resistance of the system, in turn lowering readings and making them easier to detect above the noise level. Also, since the broth was diluted to 1%, the LOD in undiluted broth samples would be 10 3 CFU/mL for both methods, which meets the requirements of most Western countries and is on par with other biosensors .
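The 3σ-threshold LOD estimate and the recovery-rate check can be sketched as follows; the slope, intercept, and baseline noise below are made-up placeholders, not the fitted values from this study.

```python
# Illustrative linear calibration: delta_R_CT = slope * log10(C) + intercept.
# These coefficients are placeholders, not the study's fitted values.
slope = 150.0       # ohms per decade of concentration (assumed)
intercept = -100.0  # ohms (assumed)

def concentration_from_signal(delta_r_ct):
    """Invert the calibration line: C = 10**((signal - intercept) / slope)."""
    return 10 ** ((delta_r_ct - intercept) / slope)

# LOD from the 3-sigma criterion: the concentration whose predicted signal
# equals three standard deviations of the blank (baseline) signal.
baseline_sigma = 20.0  # ohms, assumed blank noise
lod = concentration_from_signal(3 * baseline_sigma)

# Recovery rate: calculated concentration over the concentration actually spiked.
measured_signal = 350.0  # ohms, assumed reading for a spiked sample
actual_conc = 1e3        # CFU/mL spiked
recovery = concentration_from_signal(measured_signal) / actual_conc

print(f"LOD ~ {lod:.0f} CFU/mL, recovery {recovery:.0%}")
# prints: LOD ~ 12 CFU/mL, recovery 100%
```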
In addition, the recovery rate is a parameter that compares the concentration calculated from the calibration curves to the actual concentration placed on the biosensor, as shown in , further demonstrating the predictability and accuracy of the biosensor . The sensor's recovery rate in broth increased with concentration and, at some concentrations, was better than in the simple matrix (here, 1× PBS buffer). This suggests that the biosensor can detect L. monocytogenes and produce a reliable measurement of its concentration in the broth sample, with an accuracy of 88% to 96%. The recovery rate is calculated using Equations (1) and (2):

(1) \( C_{i,\mathrm{calculated}} = \dfrac{\Delta R_{\mathrm{CT},i} - \mathrm{intercept}}{\mathrm{slope}} \)

(2) \( \mathrm{Recovery\ Rate} = \dfrac{C_{\mathrm{calculated}}}{C_{\mathrm{actual}}} \)

Next, the biosensor's stability over time was tested. SPEs were prepared simultaneously and submerged in 1× PBS at 4 °C until tested after 1 h, 1 day, 1 week, and 2 weeks. At each time point, triplicate impedimetric measurements were obtained with 10 2 CFU/mL L. monocytogenes in broth. The variation in impedance signal from the initial value was calculated as the percentage change from the results obtained after 1 h, as shown in . Since all of the electrodes had been prepared simultaneously under the same conditions, the results after 1 h were used as the reference initial value (100%), and all other results were compared with this reference value. By doing so, it is possible to compare the changes in the signal over time. The stability measurements showed that the response was reduced by 10% after a day but then remained stable. After a week, an overall 30% reduction was observed and the error grew larger; after 2 weeks, the signal was only 40% of its original value. It can be reasonably concluded that the biosensor exhibits stable performance for over a week after phage immobilization when used to detect L. monocytogenes concentrations around 10 2 CFU/mL. 3.2. Role of Phage as Biorecognition Molecule After testing the biosensor using varying concentrations of L. monocytogenes in broth, the effectiveness of the P100 bacteriophage as a biorecognition molecule was tested to ensure that the impedimetric signal for L. monocytogenes detection can be attributed to the presence of the biorecognition molecule. Two sets of SPEs were used, one unmodified and the other immobilized with P100 bacteriophage for the same duration. shows the impedimetric response of the biosensor with and without the P100 bacteriophage. The response showed that, when modified with the phage, there was a significantly higher impedance signal, which also increased with the analyte concentration, while the SPE without the phage showed an almost constant response with little effect from the varying analyte concentrations. The results indicate that the impedance signal can be attributed to the selective binding of L. monocytogenes to P100 bacteriophage on the SPE. 3.3. Specificity of the Biosensor The specificity of the phage-modified biosensor was tested by exposing the SPE, with and without P100 bacteriophage, to the non-target pathogens E. coli O157:H7 and Salmonella ser. Typhimurium-291RH. These bacteria were chosen since both are rod-shaped and commonly found in chicken products, alongside L. monocytogenes . The first set of experiments tested 10 2 CFU/mL of a single pathogen in broth. The second set of experiments used a sample containing E. coli O157:H7 and Salmonella ser. Typhimurium-291RH at concentrations of 10 3 CFU/mL each. In addition, L. monocytogenes was added at two different concentrations, 10 2 and 10 3 CFU/mL. The impedimetric response to a single pseudo-analyte was 10–20 ohms above the response from the negative broth control, while the response to L. monocytogenes was 75 ohms above it, as seen in a; the response to pseudo-analytes is 13–26% of the response to L. monocytogenes.
This suggests that the biosensor is very specific, and a positive response will only originate from the bacteriophage–L. monocytogenes interaction. The biosensor without the phage biorecognition molecule showed an 8–10-ohm response from all test solutions and the control, demonstrating the effectiveness of the phage. In the interference study, shown in b, the biosensor's responses to broth samples with both pseudo-analytes, and with and without L. monocytogenes , are presented. A clear signal was measured when L. monocytogenes was present, even at a lower concentration than the pseudo-analytes. In these measurements, when the biosensor had no phage, the response was almost constant, without any dependency on the concentration of L. monocytogenes , which further emphasizes the specificity of the P100 bacteriophage, even in a multi-contaminant environment. 3.4. Detection under Flow The final step was to conduct L. monocytogenes measurements in broth at different flow rates. The capability to detect L. monocytogenes under flow conditions is an important proof-of-concept on the path to portable electrochemical biosensors and the ability to integrate such a sensor into a production line. Initially, a 0.01% Tween® 20 solution was flowed through the system. Then, an SPE was inserted into the flow cell, and the broth or bacterial sample was flowed on the biosensor's surface for 8 min at a flow rate of 0.1 mL/min. After that, 5 mM [Fe(CN) 6 ] 4− /[Fe(CN) 6 ] 3− solution was flowed on the SPE surface at a rate of either 0.5 mL/min, 1 mL/min, or 2 mL/min. By multiplexing 10 biosensors of this size in parallel, it will be possible to process a liter of broth per hour. While the redox-couple solution was flowing, impedimetric measurements were taken. A Nyquist plot with the results of the 0.5 mL/min flow rate is shown in a, and the responses from all flow rates are presented in the bar chart in b.
The results show that without the phage, the response is an increase of 1–5 ohm, and the flow rate did not move these values outside of that range. With phage-modified SPE, the response shows a 10% decrease in signal every time the flow rate doubles. Even though there was a decreased signal, the signals all changed with the concentration of L. monocytogenes . These results demonstrate that the whole detection process can be accomplished under flow after the SPE is prepared and modified. These results show the capability of the system to work as a portable system. In this study, a highly sensitive electrochemical biosensor tailored for L. monocytogenes detection was developed and evaluated.
The biosensor exhibited exceptional efficacy, demonstrating a remarkable LOD of 10 CFU/mL, surpassing the capabilities of most existing devices and displaying a sensitivity two orders of magnitude superior to PCR. Successful detection assays were conducted in chicken broth containing multiple pathogens under continuous flow conditions. The selectivity of the P100 bacteriophage was effectively demonstrated through exposure to a singular pathogen and interference studies. Moreover, the integration of SPEs and microfluidic channels showcased the portability of the system. With a demonstrated stability of up to one week, the biosensor proves suitable for various food-pathogen testing applications. Furthermore, validation using chicken broth as a representative food matrix underscored the robust performance of the biosensor platform.
In various cities in Europe and North America, the use of splash pads, areas where water jets are integrated into the ground surface, often with no standing water, is increasingly widespread, especially during summer, when people use them to cool off . Using such walkable fountains for this purpose may pose a health risk due to waterborne infections, especially if there are no regulations in place regarding the installation and supervision of these fountains . In recent years, countries such as the United States , United Kingdom , the Netherlands and Belgium , among others, have described outbreaks of infectious diseases in this type of facility as well as in spas, swimming pools, lakes, and water parks. The main pathogen described is Cryptosporidium spp., accounting for 19% of recreational water outbreaks between 2018 and 2019 in the US . Cryptosporidium spp. is the second leading cause of moderate to severe diarrhea in children younger than 2 years and is an important cause of mortality worldwide. Infection with these parasites most commonly occurs during waterborne epidemics and in immunocompromised hosts. Most episodes of cryptosporidiosis in immunocompetent hosts are self-limiting, which may lead to their undersuspicion and underdiagnosis. However, the infection may be associated with chronic symptoms, malnutrition, and other complications in high-risk patients . The burden of cryptosporidiosis in Europe is difficult to estimate due to the lack of standardized surveillance and monitoring systems. Nevertheless, the increasing incidence of food- and waterborne outbreaks suggests that Cryptosporidium spp. could be widespread in Europe . Previous studies have reported a prevalence of Cryptosporidium spp. of 18.8% in pools in Barcelona , 16.6% in pools in Paris , and 28.5% in Palermo . In Spain, where cryptosporidiosis requires mandatory reporting, previous outbreaks have been related to swimming pools and tap water .
In addition, other pathogens, such as Clostridium perfringens , mainly linked to foodborne outbreaks, could also play a role in causing acute gastroenteritis (AGE) outbreaks in these facilities . The aim of this study was to describe the investigation of an AGE outbreak in a splash pad in the city of Barcelona and the measures taken for its control. Outbreak detection On August 30, 2018, the Epidemiology Department of the Public Health Agency of Barcelona received an email from a mother whose child had played in a splash pad in the Sant Andreu district of Barcelona, an area with a socioeconomic indicator slightly below the average for the city . She explained that her child had AGE and cutaneous symptoms and that she knew of other users with similar symptoms after cooling off in the facilities. The nursing team telephoned the mother and asked her to share their telephone number with parents whose children had played in the same splash pad and also had a history of AGE symptoms. Numerous calls were received, and within 24 h, 37 cases were recorded. Epidemiological investigations A cross-sectional study was conducted by the Epidemiology Department to identify both primary and secondary cases. Subsequently, team members inspected the interactive fountain area and decided to order its temporary closure. Case definition People were considered primary cases if they had entered the splash pad area and had either gastrointestinal symptoms compatible with C. perfringens infection (diarrhea and abdominal pain) in the following 24 h, or intestinal symptoms compatible with Cryptosporidium spp. infection (diarrhea, abdominal pain, fever, nausea, and vomiting) in the following 1–12 days. Secondary cases were defined as those without prior use of the splash pad who, after being in contact with a symptomatic case, developed symptoms of gastroenteritis compatible with both pathogens in the following 1–12 days.
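The incubation windows in the case definition above can be restated as a small classifier. The function below is only a schematic of the stated criteria; it assumes that symptom compatibility with the relevant pathogen has already been established.

```python
from datetime import date

# Symptom-onset windows from the case definition (days after exposure/contact).
CLOSTRIDIUM_WINDOW = (0, 1)       # compatible symptoms within 24 h of splash pad use
CRYPTOSPORIDIUM_WINDOW = (1, 12)  # compatible symptoms 1-12 days after exposure/contact

def classify_case(used_splash_pad, contact_with_case, event_date, onset_date):
    """Return 'primary', 'secondary', or 'not a case' per the outbreak definition.

    event_date is the splash pad visit for suspected primary cases, or the
    contact with a symptomatic case for suspected secondary cases.
    """
    lag = (onset_date - event_date).days
    if used_splash_pad:
        if CLOSTRIDIUM_WINDOW[0] <= lag <= CLOSTRIDIUM_WINDOW[1]:
            return "primary"   # compatible with C. perfringens
        if CRYPTOSPORIDIUM_WINDOW[0] <= lag <= CRYPTOSPORIDIUM_WINDOW[1]:
            return "primary"   # compatible with Cryptosporidium spp.
        return "not a case"
    if contact_with_case and CRYPTOSPORIDIUM_WINDOW[0] <= lag <= CRYPTOSPORIDIUM_WINDOW[1]:
        return "secondary"
    return "not a case"

print(classify_case(True, False, date(2018, 8, 20), date(2018, 8, 26)))  # primary
print(classify_case(False, True, date(2018, 8, 26), date(2018, 8, 30)))  # secondary
```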
The study population comprised the people who cooled off in the interactive fountain between its opening on August 10, 2018, and its closure on August 30, 2018, and who had AGE symptoms. Data collection A specific questionnaire was designed for the outbreak and was administered by telephone interview to all suspected primary and secondary cases. The questionnaire recorded age, sex, date of exposure to the fountain area, date of symptom onset, symptom duration and characteristics, need for medical care (primary or hospital care), illness prior to cooling off in the area, contact with sick people prior to using the fountains, and the presence of other people in their environment who later developed symptoms. At the beginning of the epidemiological investigation, information was collected on food and visits to restaurants to rule out a foodborne outbreak. Information related to the cases was obtained from the outbreak record and the epidemiological surveys. The data were completed using the Clinical Health Shared Record of Catalonia. Due to the lack of a record of users of the facilities, in an initial phase of the investigation, cases were detected through word of mouth among residents of the area through social media (WhatsApp, Facebook and Twitter), with the collaboration of the first primary case. This, together with dissemination in the local press and television, allowed identification and recording of a larger number of affected people. Once the cause of the outbreak was determined, to reach the maximum number of affected users of the facilities, a list of all fecal isolates of Cryptosporidium spp. in August and September was requested from reference laboratories in the same area as the fountain. To determine whether there was a history of using the fountains, cases with positive Cryptosporidium spp.
samples were contacted if they had had gastrointestinal symptoms after inauguration of the splash pad and had no clear epidemiological cause that could explain the infection. Because the data used for this study were drawn from epidemiological surveillance, retrieved, anonymized and stored under Spanish legislation , there was no mandatory requirement for its approval by an ethics committee. Clinical microbiological investigations Stool samples were requested from the cases that remained symptomatic after identification of the outbreak. All samples were sent to the Laboratory of the Public Health Agency of Barcelona. Stool analysis included detection of Salmonella spp., Campylobacter spp., Escherichia coli O157, norovirus genogroups I and II, type A enterotoxin and spore count for C. perfringens , and coagulase-positive staphylococci. Subsequently, the same stool samples were sent to the reference laboratory for outbreaks in Catalonia (Microbiology Service of the University Hospital Vall d’Hebron) for microscopy analysis of parasites, which included Cryptosporidium spp. and Giardia spp. Environmental investigations The day the outbreak was declared (August 30, 2018), the splash pad, consisting of 234 water jets, was inspected. It consists of 13 water lines with jets. Each line has its own vessel from which water is pumped to the jets. Ejected water is collected by gravity in a general tank, where it is filtered and disinfected with sodium hypochlorite. From there, water is recycled to the vessels. The fountains are situated in a permanently open urban area, with the possibility of animals or people wearing shoes passing through it. The water was analyzed in situ to determine turbidity, free chlorine, and combined chlorine. The installation and the automatic chlorination system were checked. 
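The in-situ checks of turbidity, free chlorine, and combined chlorine can be sketched as a simple screening routine. The turbidity (≤ 5 NTU) and combined-chlorine (≤ 0.6 ppm) limits match the regulatory thresholds referenced in this investigation; the free-chlorine range is an assumed typical regulatory value, not one taken from this inspection.

```python
# (min, max) limits per parameter; None means no bound on that side.
LIMITS = {
    "turbidity_ntu": (None, 5.0),
    "free_chlorine_ppm": (0.5, 2.0),   # assumed typical regulatory range
    "combined_chlorine_ppm": (None, 0.6),
}

def check_sample(sample):
    """Return the list of parameters that fall outside their (min, max) limits."""
    failures = []
    for name, (lo, hi) in LIMITS.items():
        value = sample[name]
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            failures.append(name)
    return failures

# Hypothetical in-situ reading for one sampling point.
reading = {"turbidity_ntu": 7.2, "free_chlorine_ppm": 0.8, "combined_chlorine_ppm": 0.9}
print(check_sample(reading))  # ['turbidity_ntu', 'combined_chlorine_ppm']
```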
Four water samples were taken from the tank, the pumping system, the water jets, and the vessel drain on August 30, 2018, and September 3 and 14, 2018, and were sent to the laboratory of Aigües de Barcelona to check for Giardia lamblia and Cryptosporidium spp. The samples were also sent to the Laboratory of the Public Health Agency of Barcelona for pathogens and indicators of fecal contamination. Statistical analysis We performed a descriptive analysis of the cases and the course of the outbreak. To evaluate the impact of the number of visits to the splash pad, we performed an exploratory Poisson regression, adjusting for age and sex in a single model and obtaining adjusted prevalence ratios (APR) with their 95% confidence intervals (95%CI). All analyses were conducted using STATA 15 software. Epidemiological investigations A total of 80 epidemiological surveys were conducted between August 30th and September 19th; during that period, 71 people met the case definition. Among the 71 cases, 39 (54.9%) were women, and the median age was 6.7 (interquartile range (IQR): 3.4–20.7) years. The average incubation period was 2 (IQR: 1–8) days and the median symptom duration was 4.5 (IQR: 2–7) days. All cases had some AGE symptoms. The main symptoms were diarrhea (97.2%), abdominal pain (71.8%), nausea (29.6%), vomiting (29.6%), and fever (19.7%). Twenty-seven cases (38%) required medical care and 3 cases (4.2%) required hospital admission. A total of 61 (85.9%) primary cases and 10 (14.1%) secondary cases were recorded that were compatible with Cryptosporidium spp. infection (Table ). Primary cases included confirmed cases with positive results for Cryptosporidium spp. or/and Clostridium perfringens or/and G. lamblia in feces, and 42 untested cases without a sample. Secondary cases refer to cases without a history of splash pad, who were in contact with primary cases. The first case developed symptoms 6 days after inauguration of the splash pad.
The epidemic curve of the outbreak extended from August 15 to September 14, 2018, 14 days after closure of the fountain area. The last primary case developed symptoms 4 days after closure of the fountains. Subsequently, all secondary cases (with symptoms compatible with cryptosporidiosis) were cohabitants of primary cases, suggesting that entering the fountain area was the common point of infection (Fig. ). Poisson analysis showed an association between a larger number of times a person entered the area and the presence of cutaneous symptoms (APR: 1.54; 95%CI: 1.06–2.25) and requiring hospitalization (APR: 2.01; 95%CI: 1.09–3.73), with a weak association with younger age (APR: 0.97; 95%CI: 0.96–0.99) (Table ). Microbiological investigations Stool samples were collected from 25 (35.2%) cases, of which 19 (76%) were positive: 9 (36%) simultaneously showed the presence of Cryptosporidium spp. and C. perfringens, 7 (28%) were positive only for Cryptosporidium spp., 2 (8%) were positive only for C. perfringens, and 1 (4%) was positive for both Cryptosporidium spp. and Giardia lamblia (Table ). Environmental investigations In the first 2 health inspections (August 30, 2018 and September 3, 2018), several breaches of public swimming pool regulations were detected. Likewise, water treatment was found to be inadequate. Daily records of the facility showed a lack of compliance with water turbidity requirements (> 5 NTU) and combined chlorine values (> 0.6 ppm). The automatic chlorination system was found not to work properly. Analysis of the water samples, collected from the fountain on August 30, 2018, showed the presence of aerobic microorganisms (> 3,000 CFU/L) in all samples, coliform bacteria in 3 of the 4 samples, and C. perfringens in 2 of the 4 samples. These samples also showed the presence of Cryptosporidium spp. and G. lamblia. In the water samples collected on September 3 and 14, 2018, after cleaning and disinfection, no presence of C.
perfringens or Cryptosporidium spp. was detected. Outbreak control measures After the outbreak was declared on August 30, 2018, the facilities were closed as a preventive measure. In addition, cleansing and disinfection treatments, including super chlorination, were performed. Information was provided to all infected individuals regarding standard hygiene precautions to avoid new cases appearing in their homes and in the community. Children whose fecal samples were positive for Cryptosporidium spp. were advised not to attend school or use water facilities for at least 15 days after symptom onset. The primary care centers in the area were contacted to inform them about the outbreak and request their collaboration in detecting new cases, especially those geographically closer to the fountain. Pediatricians specialized in infectious diseases collaborated by providing a reference for the management of cases in primary care, especially those requiring treatment. The data suggest that the route of transmission of the outbreak was water from an interactive fountain, between August 10 and 30, 2018, and the infectious agents that caused it were Cryptosporidium spp. and C. perfringens. Environmental investigations were consistent with epidemiological findings and revealed severe deficiencies in the design and maintenance of the splash pad. Both pathogens were identified in water samples collected from different points of the facilities, and in the samples from people who had gastroenteritis. Cryptosporidium spp. was found in 68% of the fecal samples and C. perfringens in 44%; both pathogens together were found in 36% of the samples. Closure of the facilities, following the declaration of the outbreak on August 30, ended the emergence of new primary cases after September 3, 2018.
The observed results are consistent with outbreaks with similar characteristics described in this type of recreational area in other countries, with reports of pathogens such as Giardia spp., Shigella spp., and norovirus. However, there is no literature on similar outbreaks in this type of facility where the disease was caused by C. perfringens. This pathogen usually causes food poisoning, although it has less frequently been related to waterborne outbreaks. Of note, C. perfringens may have played a substantial role in the development or exacerbation of gastrointestinal symptoms, especially in persons affected by both pathogens. Our analyses found associations between a greater number of visits to the splash pad, younger age, and an increased risk of hospitalization and cutaneous symptoms, supporting a causal association consistent with dose-response exposure. Similar analyses in Maine (2018) found that individuals who swallowed pond water or immersed themselves under water were approximately 3 times more likely to become ill than those who did not. Measures to control and prevent transmission of enteric pathogens through untreated recreational water include epidemiologic investigations, regular monitoring of water quality, microbial source tracking, and health policy and communications. Investigations include health inspection of the septic system, identification of agricultural animal waste runoff or discharge, monitoring of wildlife activity in public areas, and the identification of improper disposal of solid waste. In Spain, Royal Decree 742/2013 sets out specific requirements regarding microbiological criteria and swimming pool monitoring: for every 100 mL of pool water analyzed, no E. coli or Pseudomonas aeruginosa should be detected. In addition, Legionella spp. monitoring is mandatory in heated pools or pools with aeration in the pool vessel, and concentrations must be lower than 100 CFU/L.
However, due to the lack of specific regulation regarding the use of splash pads, we believe that their design and construction, as well as the requirements for their maintenance, do not fit their real use. For this reason, and given the increasing installation of these types of facilities as climate shelters in cities, we believe it is essential that the relevant authorities approve a specific regulation regarding these types of fountains. The design of the installation analyzed in this study only included disinfection with sodium hypochlorite, and the chlorine levels detected varied among the different points of the installation. Furthermore, Cryptosporidium has been associated with swimming pool outbreaks due to its strong resistance to chlorine and to elimination by filtration. Other countries with more experience in the use of this type of facility have guidelines that recommend the use of ultraviolet light (in addition to chlorination) for water disinfection, since this method has proven to be more effective in eliminating cyst-forming pathogens such as Cryptosporidium spp. Enteric pathogens can be transmitted when individuals ingest untreated recreational water contaminated with feces or vomit introduced into the water by other swimmers, or by storm water runoff, sewage system overflow and discharge, leaks from septic or municipal wastewater systems, dumped boating waste, and animal waste in or near swimming areas. The installation studied here was at high risk of microbiological contamination, since its area was not closed to prevent the transit of users with shoes or the entry of animals. Additionally, the users of these facilities are usually young children, increasing the probability of fecal contamination of water, due to the use of diapers and a greater degree of incontinence.
These circumstances should be corrected by closing the perimeter of the facilities and recommending adequate hygiene measures prior to their use (e.g., use of showers, absence of footwear, appropriate clothing). We believe these recommendations should be included in national guidelines, as in other countries such as the Netherlands and Canada. The main limitation of this study is that, due to the lack of records on the people visiting the facilities during the days they remained in operation, we were unable to estimate the total number of people who became ill. For this reason, and because active case detection was only carried out in the city of Barcelona, it is highly likely that not all cases of infection after splash pad use were detected. Given that several cases had used the facilities repeatedly, the incubation period could not be accurately calculated and, consequently, the last date of splash pad use was recorded as the exposure date. In contrast, a strength of this study was the dissemination of information through social media, which allowed information to be collected from a large number of affected individuals. The use of splash pads without appropriate recirculation and disinfection systems can put human health at risk for waterborne diseases. To date, Spain lacks a specific regulation on these facilities. Areas designed for recreational water use and cooling off should comply with the regulations that apply to swimming pools and spas, taking into account the design of the tanks, water recirculation systems, and adequate disinfection systems. Given the climate emergency, which will lead to an increase in the abovementioned facilities and climate shelters, urgent action is needed. Prior to the opening of more interactive fountain areas with these characteristics, public health authorities should be involved in verifying compliance with the necessary requirements to ensure the safety of the population.
Malnutrition and Disability: A Retrospective Study on 2258 Adult Patients Undergoing Elective Spine Surgery
Spine surgery is used to treat disorders of the spine, including end-stage spine arthritis, damaged discs, slipping vertebrae, narrowing of the space within the spinal canal, curvatures and rotations of the column, or unstable fractures. Axial and/or radicular pain and the consequences of the spine condition negatively impact patients' daily activities. Pre-admission evaluations, some routine and others suggested on the basis of specific patient characteristics, pave the way to surgery. The most recent assessments include health questionnaires, which bring the perspective of the patient (patient-reported outcome measures, PROMs) into the diagnostic–therapeutic journey. These tools allow clinicians and researchers to appreciate the impact of the illness on functional ability and, therefore, to examine the recovery process in a context that is important for the patient. The Oswestry disability index (ODI) is one of the most prominent such measures, applicable to both conservative and surgical spine care; it quantifies personal care, movements, sleeping habits, and social life. Condition-specific tools like the ODI are often administered together with other questionnaires that provide information on general health status, the best known being the 36-item Short-Form Health Survey (SF-36). The functional capacity needed for daily living activities can also be undermined by a state of malnutrition. Malnutrition is a clinically relevant, multifaceted condition whose etiology lies either in the failure of the individual to meet their nutritional requirements, through hypo- or hyperphagia, or in an unspecified alteration of the nutritional status due to ailments or medications. Malnutrition is known to negatively affect the capacity for proper physical recovery after surgery. Considering the phenomenological overlap of malnutrition and disability, it is reasonable to suggest that the two conditions might coexist in some patients undergoing spine surgery.
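For reference, the ODI mentioned above is conventionally scored as the sum of ten items (each rated 0–5) divided by the maximum possible score for the items actually answered, expressed as a percentage. A minimal sketch of that published scoring rule (not code from this study) follows:

```python
def odi_score(responses):
    """Oswestry Disability Index: 10 items scored 0-5; the index is the sum
    over answered items divided by the maximum possible for those items,
    expressed as a percentage. Unanswered items are passed as None."""
    answered = [r for r in responses if r is not None]
    if not answered:
        raise ValueError("no answered items")
    return 100.0 * sum(answered) / (5 * len(answered))

# One point on every item yields 10/50 of the maximum, i.e., an ODI of 20%.
print(odi_score([1] * 10))  # 20.0
```

Dividing by the maximum of the answered items (rather than a fixed 50) keeps the score valid when a respondent skips an item.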
Several bespoke retrospective cohort studies, database inquiries, and prospective research works have already explored the clinical and surgical significance of being malnourished before spine surgery [ , , ], with malnutrition often being identified as hypoalbuminemia or as an altered body mass index (BMI) [ , , ]. However, malnourishment is a polyhedric disorder. Like disability, malnutrition is investigated through dedicated patient-reported questionnaires that screen, for instance, for appetite or involuntary weight changes. Unlike disability, malnutrition in spine surgery is measured through tools that are not condition-specific. The unique signs of malnourishment, like hypophagia, an abnormal body composition, or muscular weakness, require long interviews, sophisticated medical devices, and dedicated personnel to examine the patient's nutritional status. The introduction of nutritional assessment tools is often centre-dependent, and they may not be routinely employed. The absence of this information hinders the study of historical data from disease registries. However, it is possible to extrapolate objective information about the patient's nutritional status from a combination of clinical and laboratory tests to determine a potential diagnosis of malnutrition or a risk of malnutrition. Examples are a difference between the current and ideal body weight of more than 10%, combinations of the lymphocyte count and albumin, and inflammatory markers in the presence of underweight. This retrospective investigation aimed to examine the existence of a relationship between malnutrition and disability in patients undergoing spine surgery. We hypothesised that malnutrition, identified as a combination of multidimensional parameters extracted from routinely collected data, would show an association with the reported degree of disability and physical health.
If so, we sought to explore whether malnourished compared to well-fed patients reported higher disability and lower physical scores at admission visits. 2.1. Study Design, Setting, and Participants This investigation was designed as a cross-sectional study involving patients enrolled in the institutional spine disease registry (SpineReg) of the IRCCS Orthopedic Institute Galeazzi. The registry was approved as a prospective research study in September 2015, adding phone interviews before and after surgery to appraise the recovery trends through patient-reported outcome measures. The research was registered in a public trial registry (ClinicalTrials.gov number: NCT03644407; date: 10 November 2015). Any patient waiting for spine surgery at our Italian hospital has been included in SpineReg ever since. The primary data extraction from SpineReg was performed on 19 March 2021 and included all patients with complete records at baseline regarding sex (reported by the doctor as male or female); years of age ≥ 18; a spine diagnosis, excluding those requiring emergency or oncological surgery; the physical status classification system according to the American Society of Anesthesiologists (ASAPS); the ODI; and the summary measure of physical health (PH) from the SF-36. Spine diseases included cervical disorders, complications, deformities (degenerative or idiopathic), disc disease or herniation, spondylolisthesis (degenerative or idiopathic), spondylosis, and stenosis. The calendar time was set for surgeries performed between 2016 and 2019, as, before 2016, the records were less complete, and, after 2020, there were changes to the healthcare systems and population characteristics because of the COVID-19 pandemic. We allowed incomplete records for height, actual body weight (ABW), and BMI. We planned to manually retrieve laboratory results from blood specimens routinely collected at pre-admission to be linked to the primary data extracted from SpineReg. 
The extraction concerned only the analytes of nutritional interest, as previously described in our technical article: C-reactive protein (CRP, mg·dL⁻¹), actual haemoglobin (AHB, g·dL⁻¹), mean corpuscular volume (MCV, fL), mean corpuscular haemoglobin (MCH, pg), mean corpuscular haemoglobin concentration (MCHC, g·dL⁻¹), neutrophil count (NEUC, 10³·μL⁻¹), lymphocytic count (LYMC, 10³·μL⁻¹), prealbumin (PALB, mg·dL⁻¹), and albumin (ALB, g·dL⁻¹). 2.2. Equations to Ascertain the Risk of Malnutrition and Its Diagnosis Nutritional disorders cover two conditions: undernutrition and overnutrition. In line with this, we applied different calculations that could be performed with our routinely collected parameters to identify patients at risk of undernutrition or overnutrition and those potentially malnourished. Briefly, we calculated the risk of being undernourished or overnourished using the following equations: the body weight difference (BWd) based on the difference between ABW and ideal body weight (IBW); the geriatric nutritional risk index (GNRI), which identifies older patients at risk of undernutrition based on a combination of BWd and circulating ALB; the instant nutritional assessment (INA) derived from ALB and LYMC; the product of the latter two analytes (LxA); the score of protein malnutrition and acute inflammation (PMA) based on ALB and CRP; the score of protein malnutrition and acute/chronic inflammation (PMAC) based on ALB, PALB, CRP, and the neutrophil–lymphocyte ratio (NLR); the iron deficit malnutrition (IDM) from ABW, the ideal haemoglobin (IHB), and AHB; the vitamin B deficit malnutrition (VBD) based on MCV, MCH, or MCHC. Patients presumably suffering from undernutrition were instead classified using the semi-gold-standard criterion of the Global Leadership Initiative on Malnutrition (GLIM). 2.3.
Data Extraction and Linkage The primary SpineReg query obtained data from 3456 patients with eleven variables, encompassing the unique patient code, sex, years of age, metres of height, kilograms of ABW, BMI, orthopaedic condition, ASAPS, year of surgery, ODI, and PH. The manual extraction of laboratory data aimed at collecting nine analytes from the pre-operative routine: CRP, AHB, MCV, MCH, MCHC, NEUC, LYMC, PALB, and ALB. We required no complete records for any of the laboratory data, but at least one record had to be present. Once the linkage was completed, the quality evaluation of the method was verified through the random selection of ten patients per clinical unit per year (a total of 160 patients), followed by a double check of the accuracy of the data from SpineReg and the laboratory parameters. The final database included data from 2258 patients ( ). There was poor coverage during the blood linkage process as we eliminated 33.80% of the patients from the primary database. There were no inconsistencies between the primary source (SpineReg) and linked data (laboratory) as only new information was integrated. 2.4. Data Cleaning and Coding The features of the data extracted from the registry and the subsequent categorisation and coding are reported in . The categorisation process of the ODI and SF-36 scales was based on the quartiles of the cohort, as shown in , with the quartile method being used for the preliminary assessment of exposure–outcome relationships, rather than for a definitive analysis, because of its disadvantages . The features of the data extracted from the laboratory software system, the reference limits in males and females, and the subsequent coding are reported in . The nature, categorisation, and coding of the indicators of a malnutritional status are reported in . 
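Several of the indices listed in Section 2.2 can be computed directly from these routinely collected parameters. The sketch below is a hedged, generic illustration rather than the authors' implementation (which is detailed in their technical article): it follows the published GNRI formulation, with the Lorentz formula assumed as one common choice for the ideal body weight used in BWd and the GNRI, and the classic INA grading from albumin and lymphocyte count, with cut-offs taken from the original publications.

```python
def ideal_body_weight(height_m, is_male):
    """Lorentz formula, one common choice for the IBW used in BWd and GNRI."""
    h_cm = height_m * 100
    return h_cm - 100 - (h_cm - 150) / (4 if is_male else 2.5)

def gnri(albumin_g_dl, weight_kg, height_m, is_male):
    """Geriatric Nutritional Risk Index (Bouillanne et al.):
    GNRI = 1.489 * albumin (g/L) + 41.7 * (weight / ideal weight),
    with the weight ratio capped at 1 when actual >= ideal weight."""
    ratio = min(weight_kg / ideal_body_weight(height_m, is_male), 1.0)
    return 1.489 * (albumin_g_dl * 10) + 41.7 * ratio

def gnri_category(score):
    """Published risk bands: <82 major, 82-<92 moderate, 92-98 low, >98 none."""
    if score < 82:
        return "major risk"
    if score < 92:
        return "moderate risk"
    if score <= 98:
        return "low risk"
    return "no risk"

def ina_grade(albumin_g_dl, lymphocytes_per_ul):
    """Instant Nutritional Assessment: grades 1-4 from albumin (<3.5 g/dL)
    and total lymphocyte count (<1500 cells/uL); grade 4 = both depleted."""
    low_alb = albumin_g_dl < 3.5
    low_lym = lymphocytes_per_ul < 1500
    if low_alb and low_lym:
        return 4
    if low_alb:
        return 3
    if low_lym:
        return 2
    return 1

# Example: a well-nourished profile.
print(gnri(4.0, 70.0, 1.75, True), gnri_category(gnri(4.0, 70.0, 1.75, True)))
```

Capping the weight ratio at 1 reflects the GNRI convention that being above the ideal weight carries no additional undernutrition risk.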
The data cleaning phase included the conversion of the measurement units of laboratory analytes if a difference existed regarding the measurement units required by the equations. We also assigned a hard limit of 0.01 mg·dL −1 to 45 values of CRP that instead reported <0.02 mg·dL −1 . All analytes were categorised according to the constraints of reference intervals. The missing values for each variable of the locked database are reported in . No imputation method or dropping was applied to deal with missing values. A summary of the study is reported in the . 2.5. Statistics We planned to report cohort variables as frequencies if categorical or means and standard deviations if continuous. Inferential statistics were performed using the Python programming language (version 3.10.9), by means of the numpy (version 1.21.4), scipy (version 1.10.1), pingouin (version 3.5.2), statsmodels (version 0.13.5), pandas (version 2.0.3), and scikit-learn (version 1.0.2) libraries. Graphics were generated using the matplotlib (version 3.5.2) and seaborn (version 0.12.2) libraries. Sex differences were investigated using the rank-based non-parametric Mann–Whitney U test. The false discovery rate (FDR) due to multiple tests was controlled using the Benjamini–Hochberg FDR procedure . We considered statistical significance at the 95% confidence level (α = 0.05) and effect sizes as rank-biserial correlations (RBCs) (<0.1 = very weak; ≥0.1 and <0.3 = weak; ≥0.3 and <0.5 = moderate; ≥0.5 and <0.7 = strong; ≥0.7 and <1.0 = very strong). The association between malnutrition and disability or physical health was assessed through a two-step procedure designed first to explore the existence of a general association (descriptive analysis) and second to explore the existence of a direction/pattern (analytical analysis). Malnutrition (nine equations) was assumed to be the exposure, and the clinical scores (ODI or SF-36) were considered outcome measures. 
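The Benjamini–Hochberg step-up adjustment used throughout the analyses can be written in a few lines of numpy. This is a generic sketch of the standard adjusted-p-value computation, not code from the study:

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR control):
    for sorted p_(i), adj_(i) = min over j >= i of p_(j) * m / j."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity by taking the running minimum from the largest rank down.
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.clip(adj, 0.0, 1.0)
    out = np.empty(m)
    out[order] = adj
    return out

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.005]))
```

A hypothesis is rejected at FDR level α whenever its adjusted p-value is below α, which matches the usual step-up decision rule.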
The descriptive investigation was based on the hypothesis that the categories of malnutrition would show an association with the ODI or PH. One-way analysis of covariance (ANCOVA) based on the ordinary least squares algorithm was used to determine whether there were any statistically significant differences between the outcome means, adjusted for sex, age, and diagnosis, across different levels of malnutrition. Multicollinearity was controlled using the variance inflation factor (VIF), with a dynamic threshold set to the lower bound of the 95% confidence interval for the VIF. The FDR was controlled using the Benjamini–Hochberg FDR procedure. We considered statistical significance at the 95% confidence level (α = 0.05) and effect sizes as the partial eta-squared η² (~0.01 = small; ~0.06 = medium; ~0.14 or higher = large). The assumptions of the one-way ANCOVA were tested for appropriateness. The D'Agostino–Pearson omnibus test was used to assess the normality of the residuals, the Goldfeld–Quandt homoscedasticity test verified the presence of homoscedasticity, and the Ljung–Box test verified the absence of autocorrelation of the residuals. The Holm–Bonferroni step-down procedure controlled for the family-wise error rate. We planned to report the adjusted-p and η² for all analyses and means ± standard error [95% CI] only for statistically significant associations. The analytical investigation, run only on findings that were statistically significant in the ANCOVA, was based on the assumption that malnourished compared to well-fed patients would report a higher ODI and lower PH. The rank-based non-parametric Jonckheere–Terpstra test for ordered alternatives was used to determine if there was a statistically significant monotonic trend between the ordinal malnutrition variable and the ODI or PH.
The test was run on the cohorts defined by age group and stratified according to sex and diagnosis, using the Benjamini–Hochberg FDR procedure to control the false discovery rate. Statistical significance was evaluated at the 95% confidence level (α = 0.05) and effect sizes in terms of the r-effect (~0.1 = small; ~0.3 = medium; ~0.5 or higher = large). We planned to report the adjusted- p and r-effect with means and the number of observations. Charts were used to report the output from the non-parametric kernel regression technique. The modelling tool was run with the radial basis function kernel support vector regression algorithm , where each of the continuous malnutrition scores was an independent variable, the ODI and PH were dependent variables, and age and sex were adjusting covariates. The findings reported in this article adhere to the statement for Reporting Studies Conducted Using Observational Routinely Collected Health Data (RECORD) and were prepared considering the Sex and Gender Equity in Research (SAGER) guidelines. This investigation was designed as a cross-sectional study involving patients enrolled in the institutional spine disease registry (SpineReg) of the IRCCS Orthopedic Institute Galeazzi. The registry was approved as a prospective research study in September 2015, adding phone interviews before and after surgery to appraise the recovery trends through patient-reported outcome measures. The research was registered in a public trial registry (ClinicalTrials.gov number: NCT03644407; date: 10 November 2015). Any patient waiting for spine surgery at our Italian hospital has been included in SpineReg ever since. 
The primary data extraction from SpineReg was performed on 19 March 2021 and included all patients with complete records at baseline regarding sex (reported by the doctor as male or female); years of age ≥ 18; a spine diagnosis, excluding those requiring emergency or oncological surgery; the physical status classification system according to the American Society of Anesthesiologists (ASAPS); the ODI; and the summary measure of physical health (PH) from the SF-36. Spine diseases included cervical disorders, complications, deformities (degenerative or idiopathic), disc disease or herniation, spondylolisthesis (degenerative or idiopathic), spondylosis, and stenosis. The calendar time was set for surgeries performed between 2016 and 2019, as, before 2016, the records were less complete, and, after 2020, there were changes to the healthcare systems and population characteristics because of the COVID-19 pandemic. We allowed incomplete records for height, actual body weight (ABW), and BMI. We planned to manually retrieve laboratory results from blood specimens routinely collected at pre-admission to be linked to the primary data extracted from SpineReg. The extraction concerned only the analytes of nutritional interest, as previously described in our technical article : C-reactive protein (CRP, mg·dL −1 ), actual haemoglobin (AHB, g·dL −1 ), mean corpuscular volume (MCV, fL), mean corpuscular haemoglobin (MCH, pg), mean corpuscular haemoglobin concentration (MCHC, g·dL −1 ), neutrophil count (NEUC, 10 3 ·μL −1 ), lymphocytic count (LYMC, 10 3 ·μL −1 ), prealbumin (PALB, mg·dL −1 ), and albumin (ALB, g·dL −1 ). Nutritional disorders cover two conditions: undernutrition and overnutrition. In line with this, we applied different calculations that could be performed with our routinely collected parameters to identify patients at risk of undernutrition or overnutrition and those potentially malnourished . 
Briefly, we calculated the risk of being undernourished or overnourished using the following equations: the body weight difference (BWd) based on the difference between ABW and ideal body weight (IBW); the geriatric nutritional risk index (GNRI), which identifies older patients at risk of undernutrition based on a combination of BWd and circulating ALB; the instant nutritional assessment (INA) derived from ALB and LYMPC; the product of the latter two analytes (LxA); the score of protein malnutrition and acute inflammation (PMA) based on ALB and CRP; the score of protein malnutrition and acute/chronic inflammation (PMAC) based on ALB, PALB, CRP, and the neutrophil–lymphocyte ratio (NLR); the iron deficit malnutrition (IDM) from ABW, the ideal haemoglobin (IHB), and AHB; the vitamin B deficit malnutrition (VBD) based on MCV, MCH, or MCHC. Patients presumably suffering from undernutrition were instead classified using the semi-gold-standard criterion of the Global Leadership Initiative on Malnutrition (GLIM). The primary SpineReg query obtained data from 3456 patients with eleven variables, encompassing the unique patient code, sex, years of age, metres of height, kilograms of ABW, BMI, orthopaedic condition, ASAPS, year of surgery, ODI, and PH. The manual extraction of laboratory data aimed at collecting nine analytes from the pre-operative routine: CRP, AHB, MCV, MCH, MCHC, NEUC, LYMC, PALB, and ALB. We required no complete records for any of the laboratory data, but at least one record had to be present. Once the linkage was completed, the quality evaluation of the method was verified through the random selection of ten patients per clinical unit per year (a total of 160 patients), followed by a double check of the accuracy of the data from SpineReg and the laboratory parameters. The final database included data from 2258 patients ( ). There was poor coverage during the blood linkage process as we eliminated 33.80% of the patients from the primary database. 
There were no inconsistencies between the primary source (SpineReg) and linked data (laboratory) as only new information was integrated. The features of the data extracted from the registry and the subsequent categorisation and coding are reported in . The categorisation process of the ODI and SF-36 scales was based on the quartiles of the cohort, as shown in , with the quartile method being used for the preliminary assessment of exposure–outcome relationships, rather than for a definitive analysis, because of its disadvantages. The features of the data extracted from the laboratory software system, the reference limits in males and females, and the subsequent coding are reported in . The nature, categorisation, and coding of the indicators of malnutrition status are reported in . The data cleaning phase included converting the measurement units of laboratory analytes whenever they differed from the units required by the equations. We also assigned a hard floor of 0.01 mg·dL −1 to 45 CRP values that were reported as <0.02 mg·dL −1 . All analytes were categorised according to the constraints of reference intervals. The missing values for each variable of the locked database are reported in . No imputation method or dropping was applied to deal with missing values. A summary of the study is reported in the . We planned to report cohort variables as frequencies if categorical or means and standard deviations if continuous. Inferential statistics were performed using the Python programming language (version 3.10.9), by means of the numpy (version 1.21.4), scipy (version 1.10.1), pingouin (version 3.5.2), statsmodels (version 0.13.5), pandas (version 2.0.3), and scikit-learn (version 1.0.2) libraries. Graphics were generated using the matplotlib (version 3.5.2) and seaborn (version 0.12.2) libraries. Sex differences were investigated using the rank-based non-parametric Mann–Whitney U test.
The false discovery rate (FDR) due to multiple tests was controlled using the Benjamini–Hochberg FDR procedure. We considered statistical significance at the 95% confidence level (α = 0.05) and effect sizes as rank-biserial correlations (RBCs) (<0.1 = very weak; ≥0.1 and <0.3 = weak; ≥0.3 and <0.5 = moderate; ≥0.5 and <0.7 = strong; ≥0.7 and <1.0 = very strong). The association between malnutrition and disability or physical health was assessed through a two-step procedure designed first to explore the existence of a general association (descriptive analysis) and second to explore the existence of a direction/pattern (analytical analysis). Malnutrition (nine equations) was assumed to be the exposure, and the clinical scores (ODI or SF-36) were considered outcome measures. The descriptive investigation was based on the hypothesis that the categories of malnutrition would show an association with the ODI or PH. One-way analysis of covariance (ANCOVA) based on the ordinary least squares algorithm was used to determine whether there were any statistically significant differences between the outcome means, adjusted for sex, age, and diagnosis, across different levels of malnutrition. Multicollinearity was controlled using the variance inflation factor (VIF), with a dynamic threshold set to the lower bound of the 95% confidence interval for the VIF. The FDR was controlled using the Benjamini–Hochberg FDR procedure. We considered statistical significance at the 95% confidence level (α = 0.05) and effect sizes as the partial eta-squared η 2 (~0.01 = small; ~0.06 = medium; ~0.14 or higher = large). The assumptions of the one-way ANCOVA were tested for appropriateness. The D’Agostino–Pearson omnibus test was used to assess the normality of the residuals, the Goldfeld–Quandt test checked for homoscedasticity, and the Ljung–Box test checked for the absence of autocorrelation in the residuals.
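The descriptive step — an OLS-based ANCOVA with covariates, a partial η² effect size, a VIF check, and Benjamini–Hochberg correction across the indices — can be sketched with statsmodels as follows; the column names, the synthetic data, and the dummy p-values are illustrative assumptions, not the study's variables or results.

```python
# Hedged sketch: one-way ANCOVA via OLS, partial eta-squared for the
# malnutrition factor, VIF check, and BH-FDR correction. Illustrative data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "ODI": rng.normal(45, 15, n),
    "malnutrition": rng.choice(["none", "low", "high"], n),
    "sex": rng.choice(["F", "M"], n),
    "age": rng.integers(18, 85, n),
})

model = smf.ols("ODI ~ C(malnutrition) + C(sex) + age", data=df).fit()
aov = anova_lm(model, typ=2)  # Type II sums of squares

# Partial eta-squared = SS_effect / (SS_effect + SS_residual)
ss_effect = aov.loc["C(malnutrition)", "sum_sq"]
ss_resid = aov.loc["Residual", "sum_sq"]
partial_eta_sq = ss_effect / (ss_effect + ss_resid)
p_malnutrition = aov.loc["C(malnutrition)", "PR(>F)"]

# VIF for each non-intercept column of the design matrix
exog = model.model.exog
vifs = [variance_inflation_factor(exog, i) for i in range(1, exog.shape[1])]

# BH-FDR across the p-values of several such tests (dummy values here)
pvals = [0.001, 0.04, 0.20, 0.46]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
```

The same pattern repeats per age cohort and per outcome (ODI or PH); the dynamic VIF threshold described in the text is a study-specific rule and is not reproduced here.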
The Holm–Bonferroni step-down procedure controlled for the family-wise error rate. We planned to report the adjusted- p and η 2 for all analyses and means ± standard error [95% CI] only for statistically significant associations. The analytical investigation, run only on associations found to be statistically significant in the ANCOVA, was based on the assumption that malnourished patients, compared to well-nourished ones, would report a higher ODI and lower PH. The rank-based non-parametric Jonckheere–Terpstra test for ordered alternatives was used to determine if there was a statistically significant monotonic trend between the ordinal malnutrition variable and the ODI or PH. The test was run on the cohorts defined by age group and stratified according to sex and diagnosis, using the Benjamini–Hochberg FDR procedure to control the false discovery rate. Statistical significance was evaluated at the 95% confidence level (α = 0.05) and effect sizes in terms of the r-effect (~0.1 = small; ~0.3 = medium; ~0.5 or higher = large). We planned to report the adjusted- p and r-effect with means and the number of observations. Charts were used to report the output from the non-parametric kernel regression technique. The modelling tool was run with the radial basis function kernel support vector regression algorithm, where each of the continuous malnutrition scores was an independent variable, the ODI and PH were dependent variables, and age and sex were adjusting covariates. The findings reported in this article adhere to the REporting of studies Conducted using Observational Routinely collected health Data (RECORD) statement and were prepared considering the Sex and Gender Equity in Research (SAGER) guidelines. The patients’ characteristics are reported in . The coding of the patients according to the risk of being malnourished or the presumed diagnosis of undernutrition could not be applied to all observations due to missing data ( ).
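Since scipy does not ship a Jonckheere–Terpstra test, the trend test described above can be implemented directly: the statistic is the sum of pairwise Mann–Whitney-style counts between ordered groups, compared against a normal approximation. This sketch omits the tie correction in the variance and is not the study's exact implementation.

```python
# Self-contained Jonckheere-Terpstra test for an increasing trend across
# ordered groups (normal approximation, no tie correction in the variance).
import numpy as np
from scipy.stats import norm

def jonckheere_terpstra(*groups):
    """One-sided JT test for an increasing trend across ordered groups.
    Returns (JT statistic, z, p-value); ties are counted as 0.5."""
    jt = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            a = np.asarray(groups[i], dtype=float)
            b = np.asarray(groups[j], dtype=float)
            # pairwise comparisons: count b > a, half-credit for ties
            diff = b[None, :] - a[:, None]
            jt += np.sum(diff > 0) + 0.5 * np.sum(diff == 0)
    n = np.array([len(g) for g in groups])
    N = n.sum()
    mean = (N**2 - np.sum(n**2)) / 4.0
    var = (N**2 * (2 * N + 3) - np.sum(n**2 * (2 * n + 3))) / 72.0
    z = (jt - mean) / np.sqrt(var)
    return jt, z, 1.0 - norm.cdf(z)
```

For example, three ordered malnutrition groups with strictly increasing ODI values yield a large positive z and a small one-sided p, consistent with a monotonic upward trend.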
Based on BWd, 36 patients out of 1041 were classified as undernourished, while 672 were overnourished. GNRI categorisation showed 10 individuals with a low risk out of 151. INA-based categories indicated 172 out of 1336 with energy malnutrition, one with protein malnutrition, and one with protein–energy malnutrition. Through coding with LxA, we found 388 with moderate and 26 with poor nutrition out of 1336. PMA indicated 123 with a high, 92 with a moderate, and 440 with a low malnutrition risk out of 1335. The PMAC percentiles displayed 331 subjects <25th, 331 ≥25th and <50th, 333 ≥50th and <75th, and 334 ≥75th out of 1329. The IDM percentiles showed 259 <25th, 257 ≥25th and <50th, 261 ≥50th and <75th, and 260 ≥75th out of 1037. The VBD-based labelling of functional vitamin B deficiency identified 452 patients out of 2258. Considering the ascertained diagnosis based on GLIM, the equation could not be applied in 1784 with a high BMI, whereas 44 were identified as potentially suffering from clean undernutrition, 16 from DRM with inflammation, and 95 from DRM without inflammation out of 1939. CRP, MCV, and LYMC were similar between sexes. In contrast, females and males differed in age (56.31 ± 15.03 vs. 54.98 ± 15.72; p = 0.042; 0.05), height (1.62 ± 0.07 vs. 1.75 ± 0.07; p < 0.001; 0.84), body weight (65.40 ± 11.71 vs. 80.79 ± 12.28; p < 0.001; 0.65), BMI (25.00 ± 4.29 vs. 26.13 ± 3.48; p < 0.001; 0.19), AHB (13.40 ± 1.14 vs. 14.94 ± 1.25; p < 0.001; 0.67), MCH (29.31 ± 2.20 vs. 29.85 ± 2.24; p < 0.001; 0.19), MCHC (32.92 ± 0.97 vs. 33.61 ± 1.04; p < 0.001; 0.41), NEUC (4.27 ± 1.80 vs. 4.56 ± 1.89; p < 0.001; 0.11), PALB (25.52 ± 4.73 vs. 30.32 ± 5.46; p < 0.001; 0.51), and ALB (4.30 ± 0.26 vs. 4.42 ± 0.27; p < 0.001; 0.26). Males tended to report lower ODI (39.62 ± 18.06 vs. 48.39 ± 17.16; p < 0.001; 0.28) and greater PH (35.23 ± 7.73 vs. 32.69 ± 7.35; p < 0.001; 0.20) scores than females.
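The sex comparisons above pair a Mann–Whitney U test with a rank-biserial correlation as effect size. A sketch of that computation on illustrative (not cohort) data, using the common RBC = 1 − 2U/(n₁n₂) formulation with U taken from the first sample:

```python
# Sketch of the sex-difference comparison: Mann-Whitney U plus the
# rank-biserial correlation. The sample data are illustrative only.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
females = rng.normal(48.4, 17.2, 120)   # e.g. ODI-like scores
males = rng.normal(39.6, 18.1, 100)

u_stat, p_value = mannwhitneyu(females, males, alternative="two-sided")

# Rank-biserial correlation: RBC = 1 - 2U / (n1 * n2), bounded in [-1, 1]
rbc = 1.0 - 2.0 * u_stat / (len(females) * len(males))
```

The sign of the RBC depends on which group is passed first; the magnitude maps onto the very-weak-to-very-strong bands given in the methods.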
Considering the reference limits as reported in , we observed high CRP in 201 out of 1251 females and 143 out of 897 males, low AHB in 46 out of 1313 females and 120 out of 945 males, macrocytic MCV in 132 out of 1313 females and 225 out of 945 males, hypochromic MCH in 68 out of 1313 females and 36 out of 945 males, low MCHC in 259 out of 1313 females and 76 out of 945 males, neutropenia in 11 out of 1313 females and 13 out of 945 males, lymphopenia in 35 out of 1313 females and 63 out of 945 males, low PALB in 218 out of 791 females and 31 out of 561 males, and hypoalbuminemia in 2 out of 784 females and 0 out of 552 males. 3.1. Younger Adult Patients (<40 Years Old) The reported levels of disability and physical function among the youngest cohort group, based on malnutrition categorisation, are reported in . Hereafter, we report the results of the ANCOVA for disability scores based on malnutrition indices: BWd (adjusted- p = 0.046; 0.04), INA (adjusted- p = 0.283; 0.01), LxA (adjusted- p = 0.576; <0.01), PMA (adjusted- p = 0.742; 0.01), PMAC (adjusted- p = 0.636; 0.02), IDM (adjusted- p = 0.475; 0.02), VBD (adjusted- p = 0.801; <0.01), and GLIM (adjusted- p = 0.713; <0.01). Concerning the PH, the findings were the following: BWd (adjusted- p = 0.036; 0.06), INA (adjusted- p = 0.283; 0.01), LxA (adjusted- p = 0.576; <0.01), PMA (adjusted- p = 0.742; 0.01), PMAC (adjusted- p = 0.789; 0.01), IDM (adjusted- p = 0.475; 0.02), VBD (adjusted- p = 0.801; <0.01), and GLIM (adjusted- p = 0.573; 0.01). In younger adult patients, only the BWd was statistically significant in terms of both disability (ODI) and the functional status (PH). Younger adult patients identified as having normal nutrition reported an adjusted mean disability score of 32.83 ± 2.08 [28.72–36.95] ( n = 70), those who were overnourished reported 34.33 ± 2.34 [29.71–38.94] ( n = 56), and those at risk of being undernourished reported 43.97 ± 4.71 [34.66–53.27] ( n = 14). 
The trend of the ODI reported by males was monotonic (adjusted- p = 0.048; 0.13): it was 28.24 in well-nourished ( n = 42), 34.32 in overnourished ( n = 31), and 55.33 in undernourished ( n = 3) individuals. Moreover, younger patients admitted with a disc herniation showed a monotonic relationship between the disability scores across the BWd-based malnutrition levels (adjusted- p = 0.008; 0.25): 35.00 in well-nourished ( n = 13), 48.75 in overnourished ( n = 12), and 48.75 in undernourished ( n = 4) individuals. The estimated means of physical function were 40.12 ± 0.91 [38.31–41.92] in those with normal nutrition ( n = 70), 37.97 ± 1.02 [35.94–39.99] in those who were overnourished ( n = 56), and 35.28 ± 2.06 [31.20–39.36] in those where a risk of undernutrition was identified ( n = 14). The trend of the PH was monotonic in males (adjusted- p = 0.023; 0.18): it was 42.64 in well-nourished ( n = 42), 35.55 in overnourished ( n = 31), and 35.79 in undernourished ( n = 3) patients. A pattern of monotonicity was also found for younger patients diagnosed with a complication (adjusted- p = 0.012; 0.53; 45.73 in six well-nourished; 34.03 in two overnourished; 39.29 in one undernourished), disc herniation (adjusted- p = 0.004; 0.29; 40.03 in 13 well-nourished; 32.06 in 12 overnourished; 35.61 in 4 undernourished), and spondylosis (adjusted- p = 0.014; 0.74; 45.71 in three well-nourished; 35.01 in two overnourished). In , we report the non-linear relationships between the BWd-based malnutrition and ODI or PH scores for the youngest sample group, adjusted for age and sex. In section A2 of the figure, the values of ABW-IBW associated with the minimum ODI (39.89 in females and 37.28 in males) were 4.28 in females and 5.25 in males, while those associated with the maximum ODI (50.93 in females and 51.34 in males) were 27.22 in females and 29.54 in males. 
In section B2, the ABW-IBW values that correspond to the minimum PH scores (30.77 in females and 30.87 in males) are 27.22 in females and 29.54 in males, while the maximum PH scores (35.91 in females and 36.58 in males) correspond to −8.10 in females and −6.80 in males. 3.2. Adult Patients (40–70 Years Old) The disability and physical function reported by the subgroup of adults and based on the malnutrition levels are reported in . The findings from the ANCOVA run with the ODI were the following: BWd (adjusted- p = 0.001; 0.02), INA (adjusted- p = 0.467; <0.01), LxA (adjusted- p = 0.802; <0.01), PMA (adjusted- p = 0.006; 0.02), PMAC (adjusted- p < 0.001; 0.04), IDM (adjusted- p = 0.087; 0.010), VBD (adjusted- p = 0.211; 0.001), and GLIM (adjusted- p = 0.013; 0.01). The results obtained with the PH scores were the following: BWd (adjusted- p = 0.004; 0.02), INA (adjusted- p = 0.467; <0.01), LxA (adjusted- p = 0.430; <0.01), PMA (adjusted- p = 0.007; 0.01), PMAC (adjusted- p < 0.001; 0.04), IDM (adjusted- p = 0.313; 0.01), VBD (adjusted- p = 0.176; <0.01), and GLIM (adjusted- p = 0.471; 0.01). Overall, the ODI and PH reported by adult patients were significantly associated with the categories of BWd, PMA, or PMAC, whereas GLIM was associated only with the ODI. The adjusted disability scores amongst BWd-based labelling were 44.50 ± 1.18 [42.19–46.82] in those with normal nutrition ( n = 214), 46.36 ± 0.81 [44.76–47.95] in those with overnutrition ( n = 445), and 58.08 ± 4.02 [50.18–65.98] in those who were potentially undernourished ( n = 18). No sex- or diagnosis-specific monotonic trend was found. Based on PMA, we observed the following estimated ODI values: 44.04 ± 0.79 [42.48–45.60] (no risk; n = 440), 47.24 ± 1.01 [45.26–49.22] (low risk; n = 273), 50.35 ± 2.18 [46.07–54.63] (moderate risk; n = 58), and 49.18 ± 1.91 [45.44–52.92] (high risk; n = 76). 
The monotonic trend was statistically meaningful for both females (adjusted- p = 0.003; 0.12) and males (adjusted- p = 0.011; 0.14). We found a statistically meaningful monotonic pattern also for adult patients diagnosed with a complication (adjusted- p = 0.002; 0.31) or degenerative deformity (adjusted- p = 0.017; 0.26). Among those admitted for a complication, the mean ODI values were 50.76 in those with no risk ( n = 42), 53.26 in those with a low risk ( n = 34), 64.38 in those with a moderate risk ( n = 8), and 67.31 in those with a high risk ( n = 13). The ODI levels in those with a degenerative deformity were 44.91 in those with no risk ( n = 35), 46.38 in those with a low risk ( n = 24), 62.20 in those with a moderate risk ( n = 5), and 56.43 in those with a high risk ( n = 7). Similarly, the adjusted disability scores across the PMAC-based groups were 42.33 ± 1.18 [40.07–44.64] (<25th; n = 203), 44.05 ± 1.14 [41.81–46.29] (≥25th and <50th; n = 210), 47.66 ± 1.11 [45.47–49.84] (≥50th and <75th; n = 222), and 49.55 ± 1.15 [47.29–51.81] (≥75th; n = 208). The trend was monotonic for both females (adjusted- p = 0.021; 0.13) and males (adjusted- p = 0.001; 0.24) and for patients with a diagnosis of disc disease (adjusted- p = 0.031; 0.22; 43.32 in 50 cases <25th; 41.77 in 56 cases ≥25th and <50th; 49.67 in 51 cases ≥50th and <75th; 50.46 in 46 cases ≥75th), a complication (adjusted- p = 0.045; 0.29; 51.92 in 13 cases <25th; 46.95 in 19 cases ≥25th and <50th; 52.33 in 30 cases ≥50th and <75th; 62.74 in 35 cases ≥75th), and a cervical disorder (adjusted- p = 0.023; 0.30; 31.97 in 32 cases <25th; 42.48 in 21 cases ≥25th and <50th; 43.05 in 21 cases ≥50th and <75th; 42.46 in 13 cases ≥75th). 
The adjusted disability levels when using GLIM were 44.69 ± 0.50 [43.71–45.68] (BMI high; n = 1179), 46.96 ± 4.47 [38.19–55.72] (clean undernutrition; n = 15), 50.67 ± 2.78 [45.21–56.12] (DRM without inflammation; n = 39), and 60.16 ± 7.72 [45.02–75.30] (DRM with inflammation; n = 5). There was a monotonic trend for the male sex (adjusted- p = 0.016; 0.10; 40.59 in 499 cases with BMI high; 46.50 in two cases with clean undernutrition; 61.25 in four cases with DRM without inflammation; 57.00 in one case with DRM with inflammation) and in patients diagnosed with stenosis (adjusted- p = 0.050; 0.17; 43.39 in 87 cases with BMI high; 63.00 in two cases with DRM without inflammation), idiopathic spondylolisthesis (adjusted- p = 0.045; 0.24; 38.69 in 48 cases with BMI high; 86.00 in one case with DRM without inflammation), and spondylosis (adjusted- p = 0.029; 0.34; 43.59 in 29 cases with BMI high; 59.50 in two cases with DRM without inflammation). Considering the estimated PH, adult patients with BWd-based normal nutrition ( n = 214) had scores of 34.60 ± 0.48 [33.66–35.54], those who were overnourished ( n = 445) reported 33.38 ± 0.33 [32.73–34.02], and those with undernutrition ( n = 18) reported 31.77 ± 1.63 [28.58–34.97]. The trend was monotonic for females (adjusted- p = 0.001; 0.11; 34.58 in 111 cases with normal nutrition; 32.31 in 291 cases with overnutrition; 31.09 in 15 cases with undernutrition) and in those with disc disease (adjusted- p = 0.004; 0.16; 37.16 in 22 well-nourished; 33.43 in 36 overnourished; 32.77 in two undernourished). The adjusted physical scores based on the PMA-derived risk of malnutrition were 34.31 ± 0.33 [33.67–34.95] (no risk; n = 440), 33.03 ± 0.41 [32.22–33.84] (low risk; n = 273), 32.28 ± 0.89 [30.52–34.03] (moderate risk; n = 58), and 32.51 ± 0.78 [30.97–34.04] (high risk; n = 76). 
The pattern was statistically monotonic for both females (adjusted- p = 0.001; 0.12) and males (adjusted- p = 0.028; 0.10) and for patients diagnosed with a complication (adjusted- p = 0.015; 0.22; 32.92 in 42 cases with no risk; 30.40 in 34 cases with low risk; 28.45 in 8 cases with moderate risk; 30.33 in 13 cases with high risk), a degenerative deformity (adjusted- p = 0.017; 0.25; 33.73 in 35 cases with no risk; 32.20 in 24 cases with low risk; 26.73 in 5 cases with moderate risk; 30.42 in 7 cases with high risk), a cervical disorder (adjusted- p = 0.009; 0.28; 37.56 in 59 cases with no risk; 36.21 in 19 cases with low risk; 37.00 in 4 cases with moderate risk; 28.08 in 5 cases with high risk), and spondylosis (adjusted- p = 0.040; 0.38; 34.58 in 12 cases with no risk; 32.85 in 9 cases with low risk; 27.28 in 3 cases with moderate risk; 29.72 in 5 cases with high risk). The PMAC-based estimated physical health scores reported by patients were 35.41 ± 0.48 [34.47–36.35] (<25th; n = 203), 33.60 ± 0.47 [32.68–34.51] (≥25th and <50th; n = 210), 33.28 ± 0.46 [32.39–34.17] (≥50th and <75th; n = 222), and 32.30 ± 0.47 [31.38–33.22] (≥75th; n = 208). Similar to PMA, a monotonic trend was also present in females (adjusted- p = 0.001; 0.21) and males (adjusted- p = 0.001; 0.25), as well as in those admitted for complications (adjusted- p = 0.049; 0.24; 32.46 in 13 cases <25th; 33.28 in 19 cases ≥25th and <50th; 31.34 in 30 cases ≥50th and <75th; 29.81 in 35 cases ≥75th), cervical disorders (adjusted- p = 0.002; 0.46; 39.55 in 32 cases <25th; 36.49 in 21 cases ≥25th and <50th; 34.77 in 21 cases ≥50th and <75th; 33.13 in 13 cases ≥75th), and spondylosis (adjusted- p = 0.042; 0.48; 34.99 in 7 cases <25th; 34.01 in 5 cases ≥25th and <50th; 33.61 in 6 cases ≥50th and <75th; 29.49 in 11 cases ≥75th). The non-linear relationships adjusted for age and sex among BWd, PMA, PMAC, and GLIM and the ODI or PH scores in the subgroup of adult patients are reported in and .
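The curve-reading described in the figures — fitting an RBF-kernel support vector regression of an outcome on a continuous malnutrition score, adjusted for age and sex, then reading off the exposure values at the minimum and maximum of the predicted curve — can be sketched as follows. The synthetic data, feature scaling, and hyperparameters are assumptions for illustration, not the study's configuration.

```python
# Sketch of the non-linear modelling step: RBF-kernel SVR of ODI on a
# malnutrition score with age/sex covariates, then curve min/max reading.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 300
score = rng.uniform(-10, 30, n)            # e.g. ABW - IBW (illustrative)
age = rng.uniform(18, 85, n)
sex = rng.integers(0, 2, n)                # 0 = female, 1 = male
odi = 40 + 0.05 * (score - 5) ** 2 + 0.05 * age - 2 * sex + rng.normal(0, 3, n)

X = np.column_stack([score, age, sex])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, odi)

# Sweep the score with the covariates held at their means, then read off
# the exposure values at the minimum and maximum of the predicted curve.
grid = np.linspace(score.min(), score.max(), 200)
X_grid = np.column_stack([grid,
                          np.full_like(grid, age.mean()),
                          np.full_like(grid, sex.mean())])
pred = model.predict(X_grid)
score_at_min = grid[np.argmin(pred)]
score_at_max = grid[np.argmax(pred)]
```

In the article, separate curves are produced per sex; here the covariates are simply fixed at their means to keep the sketch short.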
In section A1 of , the scores of ABW-IBW that were associated with the minimum ODI (46.07 in females and 43.98 in males) were −0.91 and 0.72 in females and males, respectively. The maximum ODI scores (50.83 in females and 50.66 in males) corresponded to 45.73 in females and 46.35 in males. The minimum PH (29.03 in females and 29.16 in males) matched 47.43 in females and 57.20 in males, while −4.19 in females and −2.65 in males matched the maximum PH scores (33.35 in females and 33.10 in males). As shown in section A2, the values of PMA that were associated with the minimum ODI (44.72 in females and 34.55 in males) were 0.02 in females (no nutrition-related risk) and 3.20 in males (high nutrition-related risk). The scores that were associated with the maximum ODI (63.64 in females and 53.27 in males) were 5.37 in females (high nutrition-related risk) and 5.26 in males (high nutrition-related risk). As shown in section B2, the scores of 29.42 in females and 31.98 in males were the minimum PH scores corresponding to the PMA values of 12.15 in females (high nutrition-related risk) and 10.76 in males (high nutrition-related risk), while the maximum PH scores (33.60 in females and 36.59 in males) matched 7.23 in females (high nutrition-related risk) and 0.02 in males (no nutrition-related risk). As shown in section A3, the values of the PMAC scores that were associated with the minimum ODI (47.25 in females and 39.80 in males) were 0.03 in females (<25th) and 0.03 in males (<25th), while those associated with the maximum ODI (56.34 in females and 56.23 in males) were 1.42 in females (≥75th) and 1.92 in males (≥75th). The values of the PMAC scores in section B3 that were associated with the minimum PH scores (30.65 in females and 31.25 in males) were 2.26 in females (≥75th) and 2.96 in males (≥75th), while those that were associated with the maximum PH scores (34.33 in females and 34.59 in males) were 0.03 in females (<25th) and 0.03 in males (<25th). 3.3. 
Older Adult Patients (≥70 Years Old) In , we report the disability and physical health reported by older adults, stratified according to different malnutrition codifications. The outputs from the ANCOVA assessing differences in the ODI were as follows: BWd (adjusted- p = 0.612; <0.01), GNRI (adjusted- p = 0.160; 0.02), INA (adjusted- p = 0.509; <0.01), LxA (adjusted- p = 0.258; 0.01), PMA (adjusted- p = 0.079; 0.02), PMAC (adjusted- p = 0.010; 0.04), IDM (adjusted- p = 0.005; 0.06), VBD (adjusted- p = 0.037; 0.01), GLIM (adjusted- p = 0.346; 0.01). The descriptive analysis of PH found the following: BWd (adjusted- p = 0.148; 0.02), GNRI (adjusted- p = 0.246; 0.01), INA (adjusted- p = 0.509; <0.01), LxA (adjusted- p = 0.258; 0.01), PMA (adjusted- p = 0.079; 0.03), PMAC (adjusted- p = 0.009; 0.05), IDM (adjusted- p = 0.721; 0.01), VBD (adjusted- p = 0.049; 0.01), GLIM (adjusted- p = 0.346; 0.01). In older adults, PMAC, IDM, and VBD were statistically significant in terms of the disability scores, whereas the PH was associated with the PMAC and VBD groups. The estimated disability levels across PMAC-based risk categories were 45.06 ± 2.40 [40.39–49.72] (<25th; n = 45), 46.91 ± 1.87 [43.22–50.60] (≥25th and <50th; n = 71), 51.94 ± 1.95 [48.11–55.78] (≥50th and <75th; n = 65), and 52.13 ± 1.56 [49.06–55.21] (≥75th; n = 101). The pattern was monotonic for females (adjusted- p = 0.001; 0.37; 40.00 in 19 cases <25th; 53.17 in 47 cases ≥25th and <50th; 56.45 in 38 cases ≥50th and <75th; 56.35 in 62 cases ≥75th) and for patients with stenosis (adjusted- p = 0.018; 0.33; 36.55 in 11 cases <25th; 51.39 in 18 cases ≥25th and <50th; 52.19 in 21 cases ≥50th and <75th; 51.57 in 30 cases ≥75th). The mean adjusted disability scores across IDM categories were 43.81 ± 2.38 [39.11–48.51] (<25th; n = 44), 43.95 ± 2.42 [39.18–48.71] (≥25th and <50th; n = 45), 51.67 ± 2.02 [47.70–55.65] (≥50th and <75th; n = 60), and 52.79 ± 1.93 [48.98–56.60] (≥75th; n = 74).
Female subjects showed a statistically meaningful, monotonic relationship (adjusted- p = 0.001; 0.39; 48.83 in 35 cases <25th; 49.93 in 40 cases ≥25th and <50th; 56.35 in 37 cases ≥50th and <75th; 60.50 in 26 cases ≥75th). When using VBD, patients with an adequate vitamin B status ( n = 346) reported estimated disability levels of 48.07 ± 0.85 [46.40–49.75], against 51.05 ± 1.31 [48.48–53.63] in those with a functional deficiency ( n = 147), with the trend being monotonic for patients diagnosed with a complication (adjusted- p = 0.013; 0.37) or degenerative spondylolisthesis (adjusted- p = 0.044; 0.20). Among those admitted for a complication, the mean ODI score was 17.87 in 153 cases with an adequate vitamin B status and 64.23 in 13 cases with a functional vitamin B deficiency. Concerning degenerative spondylolisthesis, we found a mean disability score of 44.66 in 70 cases with an adequate vitamin B status, compared to 51.83 in 30 cases with a functional vitamin B deficiency. Older adults stratified according to PMAC reported an adjusted mean physical health score of 33.42 ± 0.95 [31.55–35.29] (<25th; n = 45), 32.71 ± 0.75 [31.29–34.18] (≥25th and <50th; n = 71), 30.61 ± 0.78 [29.07–32.14] (≥50th and <75th; n = 65), and 30.47 ± 0.63 [29.23–31.70] (≥75th; n = 101). We found a monotonic trend in females (adjusted- p = 0.003; 0.30; 34.45 in 19 cases <25th; 29.62 in 47 cases ≥25th and <50th; 29.71 in 38 cases ≥50th and <75th; 29.15 in 62 cases ≥75th) and in patients diagnosed with stenosis (adjusted- p = 0.004; 0.45; 35.25 in 11 cases <25th; 29.98 in 18 cases ≥25th and <50th; 29.95 in 21 cases ≥50th and <75th; 28.82 in 30 cases ≥75th). The estimated PH across VBD levels was 31.50 ± 0.35 [30.81–32.18] in patients with an adequate vitamin B status ( n = 346), while, for those with a functional deficiency ( n = 147), the mean was 30.47 ± 0.54 [29.42–31.53].
The pattern between the exposure variable and the outcome was monotonic for patients diagnosed with a complication (adjusted- p = 0.012; 0.26; 30.98 in 33 cases with adequate vitamin B status and 27.34 in 13 cases with functional vitamin B deficiency). In and , we report the relationships between PMAC, IDM, and VBD and the ODI or PH scores for the older adult sample group, adjusted for age and sex. As shown in section A1 of , the values of the PMAC score that corresponded to the minimum ODI (53.25 in females and 40.14 in males) were 1.98 in females (≥75th) and 0.03 in males (<25th), while those matching the maximum ODI (62.17 in females and 48.34 in males) were 0.80 in females (≥25th and <50th) and 3.28 in males (≥75th). As shown in section B1, the PMAC scores that were associated with the minimum PH score (26.06 in females and 31.03 in males) were 0.80 in females (≥25th and <50th) and 0.56 in males (≥75th), while those associated with the maximum PH score (30.71 in females and 33.76 in males) were 0.04 in females (<25th) and 1.52 in males (≥75th). As shown in section A2, the values of IDM that corresponded to the minimum ODI (44.83 in females and 44.81 in males) were 416.48 in females (<25th) and 413.60 in males (<25th), while those associated with the maximum ODI (44.32 in females and 40.37 in males) were 25.28 in females (<25th) and 1556.00 in males (≥75th). As shown in section B2, the values of the IDM score that matched with the minimum PH score (30.44 in females and 29.94 in males) were 1220.48 in females (≥75th) and 1556.00 in males (≥75th), while those associated with the maximum PH score (33.88 in females and 33.88 in males) were 655.76 in females (≥50th and <75th) and 659.84 in males (≥50th and <75th).
The PMAC-based estimated physical health scores reported by patients were 35.41 ± 0.48 [34.47–36.35] (<25th; n = 203), 33.60 ± 0.47 [32.68–34.51] (≥25th and <50th; n = 210), 33.28 ± 0.46 [32.39–34.17] (≥50th and <75th; n = 222), and 32.30 ± 0.47 [31.38–33.22] (≥75th; n = 208). Similar to PMA, a monotonic trend was also present in females (adjusted- p = 0.001; 0.21) and males (adjusted- p = 0.001; 0.25), as well as in those admitted for complications (adjusted- p = 0.049; 0.24; 32.46 in 13 cases <25th; 33.28 in 19 cases ≥25th and <50th; 31.34 in 30 cases ≥50th and <75th; 29.81 in 35 cases ≥75th), cervical disorders (adjusted- p = 0.002; 0.46; 39.55 in 32 cases <25th; 36.49 in 21 cases ≥25th and <50th; 34.77 in 21 cases ≥50th and <75th; 33.13 in 13 cases ≥75th), and spondylosis (adjusted- p = 0.042; 0.48; 34.99 in 7 cases <25th; 34.01 in 5 cases ≥25th and <50th; 33.61 in 6 cases ≥50th and <75th; 29.49 in 11 cases ≥75th). The non-linear relationships adjusted for age and sex among BWd, PMA, PMAC, and GLIM and the ODI or PH scores in the subgroup of adult patients are reported in and . In section A1 of , the scores of ABW-IBW that were associated with the minimum ODI (46.07 in females and 43.98 in males) were −0.91 and 0.72 in females and males, respectively. The maximum ODI scores (50.83 in females and 50.66 in males) corresponded to 45.73 in females and 46.35 in males. The minimum PH (29.03 in females and 29.16 in males) matched 47.43 in females and 57.20 in males, while −4.19 in females and −2.65 in males matched the maximum PH scores (33.35 in females and 33.10 in males). As shown in section A2, the values of PMA that were associated with the minimum ODI (44.72 in females and 34.55 in males) were 0.02 in females (no nutrition-related risk) and 3.20 in males (high nutrition-related risk).
The scores that were associated with the maximum ODI (63.64 in females and 53.27 in males) were 5.37 in females (high nutrition-related risk) and 5.26 in males (high nutrition-related risk). As shown in section B2, the scores of 29.42 in females and 31.98 in males were the minimum PH scores corresponding to the PMA values of 12.15 in females (high nutrition-related risk) and 10.76 in males (high nutrition-related risk), while the maximum PH scores (33.60 in females and 36.59 in males) matched 7.23 in females (high nutrition-related risk) and 0.02 in males (no nutrition-related risk). As shown in section A3, the values of the PMAC scores that were associated with the minimum ODI (47.25 in females and 39.80 in males) were 0.03 in females (<25th) and 0.03 in males (<25th), while those associated with the maximum ODI (56.34 in females and 56.23 in males) were 1.42 in females (≥75th) and 1.92 in males (≥75th). The values of the PMAC scores in section B3 that were associated with the minimum PH scores (30.65 in females and 31.25 in males) were 2.26 in females (≥75th) and 2.96 in males (≥75th), while those that were associated with the maximum PH scores (34.33 in females and 34.59 in males) were 0.03 in females (<25th) and 0.03 in males (<25th). In , we report the disability and physical health reported by older adults, stratified according to different malnutrition codifications. The outputs from the ANCOVA assessing differences in the ODI were as follows: BWd (adjusted- p = 0.612; <0.01), GNRI (adjusted- p = 0.160; 0.02), INA (adjusted- p = 0.509; <0.01), LxA (adjusted- p = 0.258; 0.01), PMA (adjusted- p = 0.079; 0.02), PMAC (adjusted- p = 0.010; 0.04), IDM (adjusted- p = 0.005; 0.06), VBD (adjusted- p = 0.037; 0.01), GLIM (adjusted- p = 0.346; 0.01). 
The descriptive analysis of PH found the following: BWd (adjusted- p = 0.148; 0.02), GNRI (adjusted- p = 0.246; 0.01), INA (adjusted- p = 0.509; <0.01), LxA (adjusted- p = 0.258; 0.01), PMA (adjusted- p = 0.079; 0.03), PMAC (adjusted- p = 0.009; 0.05), IDM (adjusted- p = 0.721; 0.01), VBD (adjusted- p = 0.049; 0.01), GLIM (adjusted- p = 0.346; 0.01). In older adults, PMAC, IDM, and VBD were statistically significant in terms of the disability scores, whereas the PH was associated with the PMAC and VBD groups. The estimated disability levels at a greater PMAC-based risk were 45.06 ± 2.40 [40.39–49.72] (<25th; n = 45), 46.91 ± 1.87 [43.22–50.60] (≥25th and <50th; n = 71), 51.94 ± 1.95 [48.11–55.78] (≥50th and <75th; n = 65), and 52.13 ± 1.56 [49.06–55.21] (≥75th; n = 101). The pattern was monotonic for females (adjusted- p = 0.001; 0.37; 40.00 in 19 cases <25th; 53.17 in 47 cases ≥25th and <50th; 56.45 in 38 cases ≥50th and <75th; 56.35 in 62 cases ≥75th) and for patients with stenosis (adjusted- p = 0.018; 0.33; 36.55 in 11 cases <25th; 51.39 in 18 cases ≥25th and <50th; 52.19 in 21 cases ≥50th and<75th; 51.57 in 30 cases ≥75th). The mean adjusted disability scores across IDM categories were 43.81 ± 2.38 [39.11–48.51] (<25th; n = 44), 43.95 ± 2.42 [39.18–48.71] (≥25th and <50th; n = 45), 51.67 ± 2.02 [47.70–55.65] (≥50th and <75th; n = 60), and 52.79 ± 1.93 [48.98–56.60] (≥75th; n = 74). Female subjects showed a statistically meaningful, monotonic relationship (adjusted- p = 0.001; 0.39; 48.83 in 35 cases <25th; 49.93 in 40 cases ≥25th and <50th; 56.35 in 37 cases ≥50th and <75th; 60.50 in 26 cases ≥75th). 
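The PMAC and IDM groups above ("<25th", "≥25th and <50th", and so on) are quartile bands of each index's own distribution. A minimal sketch of such percentile binning follows; the linear-interpolation percentile used here is an assumption, since the study's exact cut-point convention is not stated in this excerpt.

```python
def quartile_labels(values):
    """Assign each value to its quartile band of the sample distribution.

    Cut-points use linear interpolation between closest ranks
    (a hypothetical convention for illustration).
    """
    def percentile(sorted_vals, q):
        pos = (len(sorted_vals) - 1) * q
        lo, hi = int(pos), min(int(pos) + 1, len(sorted_vals) - 1)
        frac = pos - lo
        return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

    s = sorted(values)
    q25, q50, q75 = (percentile(s, q) for q in (0.25, 0.50, 0.75))
    labels = []
    for v in values:
        if v < q25:
            labels.append("<25th")
        elif v < q50:
            labels.append(">=25th and <50th")
        elif v < q75:
            labels.append(">=50th and <75th")
        else:
            labels.append(">=75th")
    return labels
```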
When using VBD, patients with an adequate vitamin B status ( n = 346) had reported estimated disability levels of 48.07 ± 0.85 [46.40–49.75], against 51.05 ± 1.31 [48.48–53.63] in those with a functional deficiency ( n = 147), with the trend being monotonic for patients diagnosed with a complication (adjusted- p = 0.013; 0.37) or degenerative spondylolisthesis (adjusted- p = 0.044; 0.20). Among those admitted for a complication, the mean ODI score was 17.87 in 153 cases with an adequate vitamin B status and 64.23 in 13 cases with a functional vitamin B deficiency. Concerning degenerative spondylolisthesis, we found a mean disability score of 44.66 in 70 cases with an adequate vitamin B status, compared to 51.83 in 30 cases with a functional vitamin B deficiency. Older adults stratified according to PMAC reported an adjusted mean physical health score of 33.42 ± 0.95 [31.55–35.29] (<25th; n = 45), 32.71 ± 0.75 [31.29–34.18] (≥25th and <50th; n = 71), 30.61 ± 0.78 [29.07–32.14] (≥50th and <75th; n = 65), and 30.47 ± 0.63 [29.23–31.70] (≥75th; n = 101). We found a monotonic trend in females (adjusted- p = 0.003; 0.30; 34.45 in 19 cases <25th; 29.62 in 47 cases ≥25th and <50th; 29.71 in 38 cases ≥50th and <75th; 29.15 in 62 cases ≥75th) and in patients diagnosed with stenosis (adjusted- p = 0.004; 0.45; 35.25 in 11 cases <25th; 29.98 in 18 cases ≥25th and <50th; 29.95 in 21 cases ≥50th and <75th; 28.82 in 30 cases ≥75th). The estimated PH across VBD levels was 31.50 ± 0.35 [30.81–32.18] in patients with an adequate vitamin B status ( n = 346), while, for those with a functional deficiency ( n = 147), the mean was 30.47 ± 0.54 [29.42–31.53]. The pattern between the exposure variable and the outcome was monotonic for patients diagnosed with a complication (adjusted- p = 0.012; 0.26; 30.98 in 33 cases with adequate vitamin B status and 27.34 in 13 cases with functional vitamin B deficiency). 
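The "monotonic trend" statements throughout these results test whether the outcome is ordered across ordered malnutrition categories. The specific trend test is the one named in the paper's methods; the Jonckheere-Terpstra test is a standard choice for this design and is sketched here purely as an illustration (normal approximation, no tie correction).

```python
import math

def jonckheere_terpstra(groups):
    """Jonckheere-Terpstra trend test for ordered groups.

    `groups` is a list of samples ordered by hypothesised severity.
    Returns (JT statistic, z score under the no-trend null); a large
    positive z suggests the outcome increases across the ordered groups.
    Normal approximation without tie correction -- an illustrative sketch,
    not necessarily the study's exact procedure.
    """
    jt = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in groups[i]:
                for y in groups[j]:
                    jt += 1.0 if x < y else (0.5 if x == y else 0.0)
    n_total = sum(len(g) for g in groups)
    sq = sum(len(g) ** 2 for g in groups)
    mean = (n_total ** 2 - sq) / 4.0
    var = (n_total ** 2 * (2 * n_total + 3)
           - sum(len(g) ** 2 * (2 * len(g) + 3) for g in groups)) / 72.0
    return jt, (jt - mean) / math.sqrt(var)
```

Applied, for instance, to ODI samples ordered from no-risk to high-risk groups, a significant positive z mirrors the "increasing disability with increasing nutritional risk" pattern reported above.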
In and , we report the relationships between PMAC, IDM, and VBD and the ODI or PH scores for the older adult sample group, adjusted for age and sex. As shown in section A1 of , the values of the PMAC score that corresponded to the minimum ODI (53.25 in females and 40.14 in males) were 1.98 in females (≥75th) and 0.03 in males (<25th), while those matching the maximum ODI (62.17 in females and 48.34 in males) were 0.80 in females (≥25th and <50th) and 3.28 in males (≥75th). As shown in section B1, the PMAC scores that were associated with the minimum PH score (26.06 in females and 31.03 in males) were 0.80 in females (≥25th and <50th) and 0.56 in males (≥75th), while those associated with the maximum PH score (30.71 in females and 33.76 in males) were 0.04 in females (<25th) and 1.52 in males (≥75th). As shown in section A2, the values of IDM that corresponded to the minimum ODI (44.83 in females and 44.81 in males) were 416.48 in females (<25th) and 413.60 in males (<25th), while those associated with the maximum ODI (44.32 in females and 40.37 in males) were 25.28 in females (<25th) and 1556.00 in males (≥75th). As shown in section B2, the values of the IDM score that matched with the minimum PH score (30.44 in females and 29.94 in males) were 1220.48 in females (≥75th) and 1556.00 in males (≥75th), while those associated with the maximum PH score (33.88 in females and 33.88 in males) were 655.76 in females (≥50th and <75th) and 659.84 in males (≥50th and <75th).
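The figure readings above (the index value at which the fitted ODI or PH curve attains its minimum or maximum) are taken from the paper's kernel regression curves. The idea can be illustrated by evaluating a Nadaraya-Watson smoother on a grid and locating its extremum; the Gaussian kernel, bandwidth, and U-shaped toy data below are hypothetical stand-ins, not the study's model.

```python
import math

def nadaraya_watson(x_obs, y_obs, grid, bandwidth):
    """Gaussian-kernel Nadaraya-Watson regression evaluated on a grid."""
    fitted = []
    for g in grid:
        weights = [math.exp(-0.5 * ((g - x) / bandwidth) ** 2) for x in x_obs]
        fitted.append(sum(w * y for w, y in zip(weights, y_obs)) / sum(weights))
    return fitted

def curve_minimum(x_obs, y_obs, grid, bandwidth):
    """Predictor value at which the smoothed outcome curve is lowest."""
    fitted = nadaraya_watson(x_obs, y_obs, grid, bandwidth)
    return min(zip(fitted, grid))[1]

# Hypothetical example: "disability" is lowest near a weight deviation of +3 kg.
x_obs = [i * 0.5 - 10 for i in range(81)]          # ABW-IBW from -10 to +30 kg
y_obs = [40 + 0.05 * (x - 3) ** 2 for x in x_obs]  # U-shaped toy ODI curve
grid = [i * 0.1 - 10 for i in range(401)]
best = curve_minimum(x_obs, y_obs, grid, bandwidth=1.0)
```

The minimum of the smoothed curve lands near the true optimum of the toy data (+3 kg), which is the kind of "value at the curve's minimum" read off the paper's figures.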
In this historical study involving 2258 patients registered in SpineReg between 1 January 2016 and 31 December 2019, aged ≥18 years, who were not undergoing emergency surgery or tumour surgery, we aimed (1) to explore the existence of an association between the attributed malnutritional status and reported disability or physical fitness and (2) to investigate whether there was an ordered pattern in the disability and physical function scores across the different levels of malnutrition at the admission visit before surgery. Malnourished patients were classified according to our previously issued proxy measures of the potential malnutrition risk (BWd, GNRI, INA, LxA, PMA, PMAC, IDM, VBD) and the probable diagnosis (GLIM) . The study cohort consisted of 58.15% female subjects, who tended to be older and shorter and had lower weights. Differences in blood tests reflected the well-known sex differences, with the blood count tending to display higher values in males. Females reported greater levels of disability, by almost nine points on average, and slightly lower scores for PH. These sex differences are not new and could depend on various psychosocial factors and distinct coping mechanisms . Physical function can be influenced by fear-avoidance thoughts, which are more present in females . Women are also scheduled for surgery when the disease state is more advanced . The overall prevalence of good nourishment was 31.99% when considering BWd, 93.38% for GNRI, 86.98% for INA, 69.01% for LxA, 53.93% for PMA, and 79.98% for VBD. Based on the GLIM, 2.27% had clean undernutrition, 4.90% had DRM without inflammation, and 0.83% had DRM with inflammation; for the remaining 92.01%, it was not possible to calculate the GLIM because the BMI was high. The high variability in these prevalences is due to the constitutive features of the surrogate equations and the inapplicability of some indices ( ).
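The surrogate equations themselves (BWd, GNRI, INA, LxA, PMA, PMAC, IDM, VBD) are defined in the group's earlier publication and are not reproduced in this excerpt. As one concrete illustration of how such an index works, the original Geriatric Nutritional Risk Index of Bouillanne et al. combines serum albumin with the actual-to-ideal body weight ratio; the sketch below uses the classic published formula and risk bands, which may differ in detail from the surrogate version used in this study.

```python
def gnri(albumin_g_per_l, actual_weight_kg, ideal_weight_kg):
    """Geriatric Nutritional Risk Index (classic Bouillanne et al. formula).

    The weight ratio is capped at 1 when actual weight exceeds ideal weight.
    """
    ratio = min(1.0, actual_weight_kg / ideal_weight_kg)
    return 1.489 * albumin_g_per_l + 41.7 * ratio

def gnri_category(score):
    """Classic GNRI risk bands from the original publication."""
    if score < 82:
        return "major risk"
    if score < 92:
        return "moderate risk"
    if score <= 98:
        return "low risk"
    return "no risk"
```

For example, a patient with albumin 40 g/L at exactly their ideal weight scores 1.489 × 40 + 41.7 ≈ 101.3, i.e. no nutrition-related risk, which is why indices of this family classify most well-nourished surgical patients as low concern.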
Using the classic BMI categorisation, those not having a body mass within the normal ranges were found to constitute 49.58% of the whole study cohort. The primary findings of this investigation can be summarised as follows. Among the 335 younger adults aged less than 40 years, only the BWd equation showed significant differences in terms of the reported disability and functional status while controlling for sex, age, and diagnosis. The predictor variable BWd accounted for approximately 4% of the variance in the ODI and 6% in PH, having a discernible impact in determining how a patient perceived their physical status. We found a clear monotonic trend that was sex- and diagnosis-specific, with the consistency of small effects of malnutrition on the outcome measures being observable only in males. A moderate-to-strong magnitude of the ordered relationship was found concerning disc herniation, complications, and spondylosis. The kernel regression curves illustrated that the least positive difference between ABW and IBW corresponded almost to the lowest disability score and the smallest negative difference corresponded to the highest physical health score (sections A2 and B2 in ). The analysis of the subgroup of 1430 adult patients between 40 and 70 years old revealed that both functional ability scores were statistically significant in terms of malnutrition based on BWd, PMA, and PMAC after adjusting for sex, age, and diagnosis. The practical significance may be limited given the small effect sizes: BWd explained about 2% of the variance in both the ODI and PH, PMA explained 4% in the ODI and 1% in PH, and PMAC explained 4% in both the ODI and PH. Considering BWd, the physical score had an ordered trend only amongst females (small effect) and those with disc disease (small effect), with patterns that revealed higher PH scores in well-nourished compared to undernourished or overnourished patients. 
Concerning the kernel curves (sections A1 and B1 in ), we observed the lowest values of the ODI and the highest PH in those who did not deviate too much from the ideal weight. We found a distinguishable monotonic pattern in the ODI and PH across the ordered groups of patients with PMA- and PMAC-derived malnutrition for both sexes (small-to-medium effect) and admission for complications (medium-to-large effect). Significant monotonic patterns in both PROMs were observed in the ordered groups of PMA among those with degenerative deformities (medium effect). Significant patterns in the PH scores were also found across the PMA groups of cervical disorders and spondylosis, as well as among those with disc disease (ODI), cervical disorders (ODI and PH), or spondylosis (PH) concerning PMAC (small-to-medium effect). Overall, as the nutritional risk with PMA or the percentile of PMAC increased, there was a trend towards the worsening of both the disability and physical scores (sections A2, A3, B2, and B3 in ). Coding based on GLIM revealed a significant association with the ODI across subjects when controlled for sex, age, and diagnosis, despite the small effect size. A statistically significant but moderately strong monotonic trend in the disability scores was observed across the groups of GLIM when considering the male sex or a diagnosis of stenosis, idiopathic spondylolisthesis, or spondylosis. In particular, subjects across the subgroups with a BMI ≥ 20 (not undernourished) or categorised as having clean undernutrition reported reduced disability compared to those with DRM with or without inflammation. Considering the subgroup of 493 older adults ≥ 70 years of age, we found significant associations between the ODI and PH based on the PMAC- and VBD-derived malnutrition levels, as well as in the IDM and VBD groups with the ODI alone. 
Despite the statistical significance, less than 5% of the variance in the outcome measures was accounted for by malnutrition predictors, revealing an overall small effect between the features. However, there was a significant and moderate monotonic trend between the PMAC groups and ODI for females and those with a diagnosis of stenosis, with increasing disability levels reported by those with higher percentiles of PMAC. The categorisation of older patients according to the PMAC equation was similarly able to show a meaningful and substantial pattern with the functional scores in the same subgroups of females and those with an admission for stenosis. The IDM-derived coding showed that the more serious the deficiency, the higher the disability scores reported by the patients. A significant monotonic trend across the groups was observed only for females. Patients labelled with a functional vitamin B deficiency had reported higher levels of disability and lower physical status compared to those with an adequate vitamin B status. The significance of the trend’s monotonicity was diagnosis-specific. Medium effects of malnutrition on both outcomes were observable in those admitted for complications, whereas the degenerative spondylolisthesis subgroup showed a smaller effect only for the disability level. Poor visual interpretability resulted from the kernel regression curves ( ). However, the violin plots in sections A2 and B2 in reasonably demonstrated the greatest ODI and worst PH scores in subjects with the most severe iron deficit malnutrition. The same interpretation can be drawn from sections A3 and B3, where the vitamin B status seems to clearly illustrate different score distributions. Several parallels can be drawn between our findings and the existing scientific knowledge. First, the prevalence of BMI-derived malnutrition in our cohort is in line with current evidence showing rates of around 50% . 
This consistency reinforces the recognition of malnutrition as a common red flag in the pre-operative risk assessment of spine disorders other than spine oncology . Second, we found that small variations in body weight can influence functional outcomes differently across age groups. Specifically, younger adults with a mild weight excess (4–5 kg above IBW) exhibited the lowest disability levels, while those weighing 7–8 kg less than the IBW had the highest physical function scores. Similarly, adults with a slight weight deficit (3–4 kg below IBW) had the highest physical function. These relationships were lost in subjects over 70 years of age. This is not a new discovery as it is known that the body mass in older patients has lower clinical significance compared to the body composition and fat mass . Among older adults with spine pathologies, the back muscular area was associated with the ODI . These findings suggest that a body weight assessment, while providing useful information in adult patients, is not appropriate for older adults, who should instead be assessed regarding their body composition and sarcopenia . Third, the discrimination of functional ability using IDM and VBD, which are predominantly based on red blood cell count analytes, was found to be important only in the geriatric subgroup. Prior research has linked pre-operative disability to markers of iron homeostasis, with the latter being often useful in discriminating patients with iron deficit anaemia . To the authors’ knowledge, the employment of VBD in assuming a certain degree of nutrient deficit is new. The most common vitamin deficiency studied in the orthopaedic field remains hypovitaminosis D, which has been extensively linked to greater disability and worse outcomes after spinal surgery . Our results reinforce the notion that iron deficiency anaemia is a modifiable risk factor in patients undergoing major orthopaedic surgery, together with vitamin D deficiency . 
More attention is warranted in exploring the role of a nutritional deficiency of B vitamins in aggravating anaemia. Fourth, we found negligible and non-statistically significant differences in the PROMs when inspecting the GNRI, INA, and LxA categories. This was unexpected as these equations are based on albumin and lymphocytes, which are commonly used as nutritional biomarkers for prognosis and survival after spine surgery . The lack of results could be due to the large number of patients missing albumin values ( ), which may have reduced the statistical power. However, inconsistent results were also observed by other authors investigating ODI differences among spine patients stratified using a combination of albumin and lymphocyte counts . Notably, PMA and PMAC, which incorporate inflammatory markers like CRP and/or NLR in addition to albumin, were more valid in identifying malnourished patients . This finding suggests that the integration of markers of inflammation into pre-operative nutritional assessment, as a reflection of what could be disease-related malnutrition, might improve risk stratification in spine surgery patients.
Limitations
First, the malnutritional status was attributed based on surrogate equations. Despite the impossibility of utilising other tools, this method does not reflect the current path in malnutrition screening and diagnosis. Second, the sex analysis stratified by age group elevated our clinical understanding, reduced the noise, and allowed generalisation to both sexes in the general population. However, the overall data reliability might be low given that different raters dichotomised the sex and reported PROMs. Third, the involvement of the entire demographic spectrum under the same clinical setting guarantees the repeatability and reproducibility of the findings. Nevertheless, caution should be exercised in generalising these findings beyond the context of our sample.
For example, the subgroup of young adults involved only 140 observations, and the small size per group could have reduced the validity. Fourth, the GLIM criteria were designed to diagnose undernutrition (BMI < 20). However, the surrogate equation of the GLIM also comprises the category of BMI ≥ 20. This choice was made to avoid excluding 1179 adults (almost 95.23% of the age subgroup) from the subgroup analysis. Fifth, our investigation did not consider the roles of other clinical, mental, and social components, which could have yielded more accurate results in depicting the precise nature of the relationship. Future research should consider these limitations to produce more influential results that elucidate the primary determinants of disability, physical function, and recovery after surgery.
In this retrospective study, we have illustrated that the attributed malnutritional status was able to discern differences in self-reported disease-specific functional ability levels among spine patients subjected to elective spine surgery. The consistency of the ordered trends of the relationship between malnutrition and disability or physical fitness depended on age, sex, and the spine pathology. Different malnutrition equations were found to carry varying relevance across age strata: younger adults were most affected by deviations from the ideal body weight, middle-aged patients by inflammation-related protein depletion, and older adults by disruptions in iron homeostasis. Our results encourage researchers to discover methods for historical data manipulation that could help to validate risk prediction models and enable clinical dietitians to incorporate personalised nutrition care as adjunct therapies in prehabilitation. Multidisciplinarity in prehabilitation is essential, but the modalities, clinical value, and sustainability remain to be determined. It is plausible to suggest that interventions will have a greater benefit when applied to more serious cases. Future efforts should seek to identify clusters of observations and thresholds of features to reasonably establish malnutrition risk-based stratification models of patients and discern those cases that will benefit the most from nutritional prehabilitation.
Health Care Utilization With Telemedicine and In-Person Visits in Pediatric Primary Care
7409bb24-13e0-47ad-a0bf-a391b43f3e81
11584922
Pediatrics[mh]
Telemedicine emerged as a popular vehicle for health care delivery during the COVID-19 pandemic despite few large studies demonstrating its ability to meet the needs of more than 12 million US children now using it annually. As we enter a postpandemic era, it is important to understand how telemedicine compares with traditional in-person care to guide resource allocation in the pediatric outpatient setting and to support national policy decisions to preserve ongoing pediatric telemedicine access. Previous studies have demonstrated that telemedicine is acceptable to patients and physicians to provide outpatient pediatric care. However, few studies have focused on telemedicine’s effectiveness in substituting for in-person pediatric visits or its associated downstream emergency department (ED) and hospital utilization. The results of a few small studies have found no significant association between telemedicine and increases in downstream health care utilization, but these results require confirmation in a larger population. The Kaiser Permanente Northern California (KPNC) telemedicine options include telephone visits and video visits. Previous work from our group demonstrated lower rates of medication prescribing and imaging orders and slightly higher subsequent in-person visits associated with telemedicine primary care visits compared with in-person primary care visits in an adult population. However, the effect of telemedicine on health care utilization in the pediatric population is unknown. Further evidence is needed regarding the efficacy of pediatric primary care telemedicine by quantifying differences in the need for downstream follow-up care. In the current study, we compared medication prescribing and imaging and laboratory ordering during an in-person office or telemedicine visit and health care utilization within 7 days after the visit.
Setting
This cohort study examined primary care pediatric visits in a large, integrated health care delivery system, KPNC, which includes nearly 4.5 million members whose demographic characteristics are reflective of the regional population. KPNC uses a comprehensive electronic health record (EHR) that includes outpatient, emergency, inpatient, laboratory, imaging, and pharmacy history. The EHR also offers a patient portal where patients can self-schedule office or telemedicine pediatric visits. Since 2016, KPNC members have had the choice of telephone, video, or in-person primary care visits. Primary care visits are scheduled based on patients’ preferences and are not directive. The capitated system does not bill patients or insurers for telemedicine visits. The institutional review board of the Kaiser Foundation Research Institute approved the study protocol and materials. The institutional review board granted a waiver of informed consent because this data-only study was determined to be minimal risk. This cohort study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.
Study Population
We studied all completed primary care pediatric appointments from January 1 through December 31, 2022, including only index visits (1) with a chief concern other than a routine well-child visit and (2) without any other clinical visits within the 7 days prior, to define a relatively distinct patient-initiated care-seeking episode. The health system recommended in-person visits for routine pediatric health care (ie, vaccination appointments, well-child appointments) and telemedicine visits for SARS-CoV-2 infection–associated concerns. For this reason, these visit types were excluded from the study population. All study data were obtained using the EHR and other automated data sources.
Outcome Measures
For each study index visit, we identified any medication prescribing, laboratory orders, and imaging orders associated with the visit, including a subset of prescriptions specifically for antibiotics. We grouped the clinical concern of visits using the diagnosis grouping system of the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10) for pediatric diagnoses in EDs. To characterize short-term follow-up health care utilization, we extracted all primary care office visits with a pediatrician, ED visits, and hospitalizations within 7 days after the index primary care visit, including same-day visits. We examined each outcome in the full sample of all patient visits as well as stratified by area of clinical concern.
Covariates
We compared outcomes associated with index visit type, accounting for covariates with literature precedence for an association with visit-type choice, including patient sociodemographic characteristics (age, sex, race and ethnicity, neighborhood socioeconomic status, and language), technology access in the prior year (neighborhood internet access, mobile portal access), clinic visit barriers (driving distance from home to primary care facility, paid facility parking), video visit experience (any in prior year), if the visit was with the patient’s usual primary care practitioner or another in the same practice, clinical comorbidities and health care utilization history (Elixhauser score 0 or ≥1, any ED visit in the prior year, or any hospitalization in the prior year), and context of the studied index visit, including appointment booking day (Monday through Thursday, Friday, or Saturday or Sunday), visit day (Monday through Thursday or Friday through Sunday), visit time (morning or afternoon), days between appointment booking and visit (same day, 1 day, 2-7 days, or ≥8 days), ICD-10 grouping, medical center, and calendar month.
We used the patient’s residential address from the EHR to define patient neighborhood socioeconomic status (2010 US census measures at the census block group level) and neighborhood residential high-speed internet access level (Federal Communications Commission census tract–level data) (eTable in ). Race and ethnicity were ascertained by self-report. Categories were Asian, Black, Hispanic, White, and other (American Indian or Alaska Native, Hawaiian or Pacific Islander, and unknown).

Statistical Analysis

We used multivariable logistic regression to examine associations between index visit type (in-person, telephone, or video) and outcomes (medication prescribing and/or imaging or laboratory ordering at the index visit and 7-day in-person visits, ED visits, or hospitalizations), with adjustment for all aforementioned covariates. We examined each outcome using a separate logistic regression model since each outcome represents a clinically distinct action and outcome. Standard errors were adjusted for repeated visits by the same patient by clustering observations by patient with a robust variance estimator. For easier interpretation, we calculated an adjusted rate for each outcome from the multivariable logistic regression and the adjusted difference between telephone or video visits and office visits using marginal standardization (using the margins postestimation command in Stata, version 17.0 [StataCorp LLC]). All analyses were conducted using 2-sided tests for significance and P < .05 as the threshold for significance, in Stata, version 17.0.
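Marginal standardization, the computation behind the Stata margins command used above, can be sketched in a few lines: given a fitted logistic model, set every observation's visit type to one level while keeping the observed covariates, average the predicted probabilities, and repeat for each level. The coefficients and covariates below are made-up illustrations, not estimates or variables from the study.

```python
import math

# Hypothetical logistic model for some outcome (e.g., any prescribing):
# logit(p) = b0 + b1*[telephone] + b2*[video] + b3*age
# Coefficients are illustrative only, not the study's estimates.
COEF = {"const": -0.4, "telephone": -0.6, "video": -0.5, "age": 0.02}

def predict(visit_type, age):
    """Predicted outcome probability under the logistic model."""
    z = (COEF["const"]
         + COEF["telephone"] * (visit_type == "telephone")
         + COEF["video"] * (visit_type == "video")
         + COEF["age"] * age)
    return 1.0 / (1.0 + math.exp(-z))

def adjusted_rate(cohort, visit_type):
    """Marginal standardization: set every visit's type to `visit_type`,
    keep observed covariates, and average predicted probabilities."""
    return sum(predict(visit_type, v["age"]) for v in cohort) / len(cohort)

cohort = [{"age": a} for a in (1, 3, 5, 10, 15)]
office = adjusted_rate(cohort, "in-person")
phone = adjusted_rate(cohort, "telephone")
print(f"adjusted difference (telephone vs in-person): {phone - office:+.3f}")
```

The adjusted differences reported in the Results are this standardized rate for telephone or video visits minus the standardized rate for office visits; the cluster-robust standard errors are handled separately in the regression step.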
Results

Of 782 596 primary care visits scheduled by 438 638 patients, 450 443 (57.6%) were in-person office visits and 332 153 (42.4%) were telemedicine visits (143 960 video visits [18.4%] and 188 193 telephone visits [24.0%]). Overall, 25.3% of visits were for patients younger than 2 years; 48.9%, for female patients; and 51.1%, for male patients. A total of 19.6% of visits were for Asian individuals; 6.5%, for Black individuals; 29.2%, for Hispanic individuals; 33.7%, for White individuals; and 11.0%, for individuals of other race. Of the total visits, 93.4% were for English-speaking individuals and 21.3% were for patients residing in neighborhoods with low socioeconomic status. Approximately half of all visits (51.8% overall; 53.0% of in-person, 53.1% of telephone, and 46.0% of video visits) were with the patient’s usual pediatrician. Telemedicine and in-person visits were used differentially based on area of clinical concern.
For example, telephone and video visits accounted for most visits for mental health concerns and were used least for musculoskeletal and connective tissue disorders. After adjustment, there was more medication prescribing for in-person visits (39.8%) compared with video visits (29.5%; adjusted difference, −10.3%; 95% CI, −10.6% to −10.0%) and telephone visits (27.3%; adjusted difference, −12.5%; 95% CI, −12.7% to −12.5%) (eFigure 1 in ) and more laboratory ordering for in-person visits (24.6%) compared with video visits (7.8%; adjusted difference, −16.8%; 95% CI, −17.0% to −16.6%) and telephone visits (8.5%; adjusted difference, −16.2%; 95% CI, −16.3% to −16.0%) (eFigure 2 in ). Similarly, imaging ordering was higher for in-person visits (8.5%) compared with video visits (4.0%; adjusted difference, −4.5%; 95% CI, −4.6% to −4.4%) and telephone visits (3.5%; adjusted difference, −5.0%; 95% CI, −5.1% to −4.9%) (eFigure 3 in ). After adjustment, fewer in-person follow-up visits occurred for index visits that were in-person (4.3%) compared with video (14.4%; adjusted difference, 10.1%; 95% CI, 9.9%-10.3%) or telephone (15.1%; adjusted difference, 10.8%; 95% CI, 10.7%-11.0%) visits. Compared with antibiotic prescribing for in-person visits (17.8%), there was less antibiotic prescribing associated with video visits (12.1%; adjusted difference, −5.7%; 95% CI, −5.9% to −5.2%) and telephone visits (10.1%; adjusted difference, −7.7%; 95% CI, −7.9% to −7.5%). Adjusted rates of in-person follow-up care after video or telephone appointments were higher across all areas of clinical concern. Most downstream in-person pediatrician follow-up visits following a telephone or video visit occurred within the first 24 hours of the index visit (49.2% for telephone and 47.3% for video), while most downstream in-person pediatrician visits following an index in-person visit (91.7%) occurred more than 24 hours after the index visit.
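As a quick arithmetic check, the visit-mix percentages reported above (57.6% in-person, 42.4% telemedicine, 18.4% video, 24.0% telephone) follow directly from the raw visit counts:

```python
# Re-derive the reported visit-mix percentages from the raw counts.
total = 782_596
in_person = 450_443
video = 143_960
telephone = 188_193
telemedicine = video + telephone  # 332 153, as reported

def pct(n, d=total):
    """Percentage of total visits, rounded to one decimal as reported."""
    return round(100 * n / d, 1)

print(pct(in_person), pct(telemedicine), pct(video), pct(telephone))
# 57.6 42.4 18.4 24.0
```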
There were fewer ED visits following an in-person visit (1.75%) than following a video visit (2.04%; adjusted difference, 0.29%; 95% CI, 0.21%-0.38%) or telephone visit (2.00%; adjusted difference, 0.25%; 95% CI, 0.18%-0.33%). However, the adjusted percentage of patients hospitalized following an in-person visit (0.14%) was similar to that following a video visit (0.12%; adjusted difference, −0.02%; 95% CI, −0.04% to 0.00%) or telephone visit (0.08%; adjusted difference, −0.06%; 95% CI, −0.07% to −0.04%).

Discussion

In this cohort study of a large, integrated health care system in 2022, telephone and video pediatric primary care visits were associated with less overall physician prescribing and ordering, modest increases in subsequent short-term in-person visits, and small increases in downstream ED encounters, without clinically significant differences in hospitalizations. Telephone or video visits were associated with less prescribing and ordering compared with in-person visits, consistent with previous studies. Greater physician prescribing observed in previous direct-to-consumer (DTC) telemedicine studies was not observed here. All members of the health system are assigned a primary care physician, and the current study found that a large percentage of scheduled visits were with the patient’s usual pediatrician. Our results may differ from studies of DTC models, as visits in those models occur outside a primary care relationship. A recent study by Wittman et al noted similar findings and reported lower rates of downstream follow-up visits and antibiotic prescriptions associated with telemedicine visits delivered by primary care physicians compared with DTC practitioners. When stratified by area of clinical concern, telemedicine was associated with lower medication prescribing and imaging and laboratory ordering across all areas except antibiotic prescribing for ophthalmological concerns.
Slightly more in-person follow-up visits and ED visits occurred after an index video or telephone visit compared with an in-person visit in our study. This result is consistent with telemedicine studies in adults, although it contrasts with the results of 2 small, single-center pediatric telemedicine studies that did not find a statistically significant difference in downstream health care utilization following a telemedicine visit. Reilly et al did not find an association between telemedicine and increased health care reutilization rates in an urban, academic pediatric health system during a pandemic period, although their follow-up period was restricted to 72 hours or less. Consistent with our results, Sprecher et al found that telemedicine was associated with decreased antibiotic prescribing for pediatric telemedicine appointments compared with in-person visits and found no association between telemedicine use and unplanned downstream ED visits. The sample size of 782 596 visits in our study may have provided sufficient power to detect these differences, and additional studies in different health care settings are needed. We found no clinically meaningful difference in downstream hospitalization rates between visit types. Our results suggest that although telemedicine was associated with more in-person follow-up visits, it may not have resulted in excess missed or delayed diagnoses leading to clinical decompensation and subsequent hospitalization. Notably, 49.2% and 47.3% of the in-person visits scheduled after an index telephone or video visit, respectively, occurred within 1 day of the index visit. This finding appears to support a role for telemedicine in identifying patients who need prompt in-person evaluation to meet their care needs.
These promptly scheduled in-person follow-up visits likely reflect in-person visits requested by the pediatrician to collect clinical information only available with a face-to-face interaction, such as vital signs, physical examination findings, or laboratory testing, although we cannot be certain whether these visits were driven by pediatricians’ requests or parents’ concerns. The lack of a meaningful difference in downstream hospitalization suggests that patients who initially made telemedicine visits did not have worse outcomes than those who initially made in-person visits, even if some needed to return for in-person follow-up visits.

Limitations

These findings should be interpreted considering several limitations. First, in this observational study of outcomes related to visit types, unmeasured confounders may exist. Patients with more comorbidities are likely to require more laboratory and imaging orders, and these patients may have self-triaged to in-person visits. Also, while we adjusted for comorbidities, we were unable to adjust for the acuity of the chief concern. Additionally, while we attempted to identify distinct care-seeking episodes by using a 7-day washout period, it is still possible that some prescribing, ordering, and downstream visits were unrelated to the index visit. However, this issue may be smaller in a pediatric population, which tends to have fewer chronic comorbidities. Our data were drawn from an all-ages cohort of patients seeking primary care, and the Elixhauser score was used to adjust for comorbidities. The Elixhauser score was derived from an adult population and may be insensitive for identifying pediatric patients with comorbidities. This study was conducted in a large, integrated health care setting where telemedicine was widely available before the pandemic, and the findings may not be generalizable to other settings.
Lastly, we are unable to distinguish between downstream in-person visits that were unplanned and visits advised by patients’ pediatricians.

Conclusions

In this cohort study, primary care pediatric telemedicine was associated with less medication prescribing and laboratory ordering compared with in-person visits across most areas of clinical concern.
Based on our results, health care systems and private pediatric practices seeking to initiate or broaden their pediatric telemedicine programs should expect that primary pediatric care delivered by telephone or video may be associated with modest increases in in-person follow-up visits and slightly higher ED utilization, but negligible differences in downstream hospitalizations, compared with traditional in-person visits.